When a disaster occurs, staff from SBA’s Disaster Loan Program, the Federal Emergency Management Agency (FEMA), and other government agencies work together to assess the damage to the affected area and aid household and business disaster victims. Either the President or the SBA Administrator may issue a disaster declaration. When the President issues a disaster declaration, FEMA specifies the immediate disaster area and SBA determines which contiguous counties are eligible for federal assistance. When SBA issues a disaster declaration, it specifies the immediate disaster area and any contiguous counties that are eligible for assistance. Unlike FEMA, which can provide some grants to residents in the area of a disaster, SBA provides loans to households and businesses affected by disasters. Once a disaster is declared, officials from one of SBA’s four Disaster Area Offices—located in California, Georgia, New York, and Texas—arrive at the disaster site and begin assisting victims. These officials provide information about the disaster loan process, distribute loan applications, and assist victims, if requested, in completing applications. In response to the September 11 terrorist attacks, SBA disaster program officials from around the country provided assistance to the New York Disaster Area Office, which was the office primarily responsible for providing assistance. Depending on the nature of the disaster, SBA can provide businesses hurt by a disaster with fixed-rate, low-interest loans to address physical property damage and economic injuries. These low-interest loans are subsidized by taxpayers through federal appropriations, and if the loans are not repaid, the subsidy cost for disaster loans increases. SBA provides loans to cover physical damage to both small and large businesses, enabling them to repair or replace damaged real property, machinery, equipment, fixtures, and inventory to begin restoring the property to its pre-disaster condition. 
SBA provides EIDLs only to eligible small businesses, allowing them to meet necessary financial obligations that could have been met if the disaster had not occurred and to maintain necessary working capital during the period that business activities are affected by the disaster. For most disasters, SBA has primarily assisted businesses with physical disaster loans. However, given the nationwide economic impact resulting from the terrorist attacks of September 11, 2001, EIDLs became SBA’s primary form of assistance. Of the approximately 24,000 September 11 disaster loan applications, SBA approved about 11,000, totaling $1.1 billion. Over 10,000 of these loans, amounting to $1 billion, were EIDLs. Under its statutory authority to provide economic injury disaster loans to small businesses, SBA has established policies and procedures for determining whether an applicant qualifies for a loan and the likely viability of the loan, using pre-disaster financial information from the applicant. SBA loan officers determine whether applicants meet agency criteria. SBA loan officers may determine that applicants do not meet these criteria for one or more of the following reasons: lack of repayment ability; unsatisfactory history on an existing or previous SBA loan; unsatisfactory history on a federal obligation, such as taxes; unsatisfactory debt payment history; economic injury not substantiated; business activity not eligible; not a small business; credit available elsewhere (for instance, from a commercial lender); financial recovery available from other sources, such as an insurance settlement; failure to maintain required flood insurance on an SBA loan; not a qualified business; refusal to pledge collateral; no direct link established between the business downturn and the disaster (for September 11 EIDLs only); and outstanding judgment for a federal debt. 
When SBA does not receive all the required information or documentation from an applicant, it withdraws the loan application. SBA also withdraws applications when the Internal Revenue Service (IRS) has no record that the applicant has filed income tax returns for 1 or more years or because of an unresolved character issue (for example, an applicant’s criminal activity). Additionally, an applicant may request that SBA withdraw its application. After SBA declines or withdraws an application, the applicant has 6 months to request reconsideration. SBA explains its reason(s) for not approving the loan and the process for reapplying in correspondence to the applicants. In addition to SBA, several nonprofit organizations (nonprofits) in New York City offered economic relief to small businesses in the area affected by the events of September 11. The nonprofits that we contacted to discuss their September 11 programs typically provide economic and technical support to small, entrepreneurial, and nontraditional businesses such as street vendors and taxi drivers, in New York City and generally receive funding from private and public sources. Funding for their September 11 programs came from these sources as well as from federal grants allocated to support such programs. All three nonprofits received grants from the September 11th Fund and raised additional capital with the help of private banks and partner organizations. Two of the three nonprofits reported that they provided both grants and loans to small businesses, but all provided working capital loans to help businesses meet short-term obligations such as rents, salaries, and accounts payable. These working capital loans were expected to help businesses weather expected recovery periods of between 3 and 6 months. One nonprofit offered only low-interest working capital loans of up to $150,000, while another reported providing $900,000 in grants and $3.1 million in low-interest loans. 
The third nonprofit reported providing $7.1 million in grants and no-interest loans, $12.4 million in low-interest loans, and $4 million in wage subsidies. Small businesses in New York were also assisted by $3.5 billion in Community Development Block Grant funding appropriated by Congress. Congress earmarked at least $500 million of this funding to compensate small businesses, nonprofits, and individuals for their economic losses. This assistance included grants to compensate small businesses for some of their losses, as well as payments to attract and retain small businesses in an effort to revitalize the affected areas. SBA’s policies and procedures for providing EIDLs are consistent with the Small Business Act. The agency’s policies and procedures are consistent with the two requirements specific to EIDLs. These requirements are that applicants must have suffered a substantial economic injury as a result of a covered disaster and that SBA must find that the applicant is not able to obtain credit elsewhere. The act addresses some loan terms, such as length of maturity, but it does not specify underwriting criteria for SBA to follow. However, SBA’s regulations do contain underwriting requirements such as assessing an applicant’s ability to repay the loan, credit history, and the availability of collateral, as well as other requirements. The law provides for SBA to make loans to small business concerns that have suffered a substantial economic injury as a result of a covered disaster, provided that SBA finds that an applicant is not able to obtain credit elsewhere. Although the law does not define substantial economic injury for EIDLs, SBA’s regulations define it as economic harm to a business concern that results in its inability to meet its obligations as they mature or to pay its ordinary and necessary operating expenses. 
SBA may provide an EIDL if it determines that an applicant has suffered a substantial economic injury resulting from a disaster described in the act. For EIDLs, the act describes four disaster scenarios: (i) a major disaster, declared by the President of the United States; (ii) a natural disaster, as determined by the Secretary of Agriculture; (iii) a disaster declared by SBA; and (iv) if no disaster was declared under scenarios (i) through (iii), certification to SBA by the Governor of a State that eligible concerns have suffered economic injury as a result of a disaster and are in need of financial assistance which is not available on reasonable terms in the stricken area. Although the act specifies some terms for EIDLs, it does not specify underwriting requirements. For example, the law states that the loans should not exceed $1,500,000 (unless the applicant is a major source of employment in the impacted area) or have more than a 30-year maturity period. It also provides some specific interest rate requirements, based on the year of the disaster. However, it does not specify underwriting criteria for EIDLs. The act does not specify that EIDLs should be of sound value or secured to provide reasonable assurance of repayment, as it does for SBA’s general business loans. Additionally, the act does not specifically address the issue of collateral for EIDLs, whereas it specifies that SBA not require collateral for physical disaster business loans in the amount of $10,000 or less. SBA’s regulations for EIDLs contain underwriting criteria that require, among other things, a reasonable assurance of repayment, satisfactory credit and character, and, generally, collateral. The regulations state that SBA must have reasonable assurance that all disaster loan applicants can repay their loans from their personal or business cash flow. 
The regulations also state that SBA is prohibited from lending to businesses with an associate who is incarcerated, on probation, on parole, or who has been indicted for a felony or a crime of moral turpitude. The regulations do not elaborate on satisfactory credit; however, as discussed later, SBA’s policies and procedures address these issues. For EIDLs greater than $5,000, SBA regulations require that applicants provide collateral, although SBA will not decline a loan if the applicant lacks a particular amount of collateral, as long as it has reasonable assurance that the applicant can repay the loan. However, SBA may decline or cancel a loan where the applicant refuses to pledge available collateral when requested by SBA. SBA regulations also specify eligibility requirements for the types of businesses that may obtain an EIDL. The regulations exclude the following types of small businesses: businesses engaged in lending, speculation, or investment; consumer or marketing cooperatives; businesses deriving more than one-third of gross annual revenue from legal gambling; loan packagers that earn more than one-third of gross annual revenue from packaging SBA loans; businesses principally engaged in teaching, counseling, or indoctrinating religion or religious beliefs; and businesses primarily engaged in political or lobbying activities. SBA amended its regulations in October 2001, expanding eligibility to small businesses outside the declared disaster area, applicable only to September 11 EIDLs. SBA made this change in recognition that the September 11 disaster had a widespread economic impact, beyond the boundaries of the declared disaster areas in New York and Virginia. 
Under the new section of the regulations, SBA agreed to provide EIDLs to businesses outside of the declared disaster area if they could show that they suffered a substantial economic injury as a direct result of the destruction at the World Trade Center or the damage to the Pentagon, or any related federal actions (such as the suspension of air travel) taken between September 11, 2001, and October 22, 2001. The regulations specify that loss of anticipated profits or a drop in sales is not considered substantial economic injury for purposes of an EIDL under these provisions. Other than this change to expand EIDL eligibility nationwide, SBA’s general regulatory requirements for disaster loans, which we discuss more fully later in this report, applied to September 11 EIDLs. SBA’s underwriting policies and criteria for September 11 EIDLs generally followed established guidelines, even with the exceptions that were made for this disaster, and were similar to those of selected nonprofits in New York City. Small businesses that were eligible to apply for SBA loans were expected to meet standard requirements for documentation, creditworthiness, repayment ability, collateral, and appropriate character, as determined by SBA. We found that SBA’s lending activities followed best practices for private lending, as set out by industry experts. As we reported previously, modifications to SBA’s Disaster Loan Program were made to address the unusual circumstances surrounding the September 11 disaster and to respond to the concerns of affected small businesses. However, the changes that were made were to eligibility and terms, not to loan underwriting criteria. Finally, the three nonprofits that we reviewed had requirements that were similar to SBA’s for documentation, creditworthiness, and repayment ability, but their requirements differed for collateral and appropriate character. SBA used the same requirements for September 11 EIDLs as it would for any other disaster. 
In accordance with the guidelines of the Disaster Loan Program, SBA required small business applicants to provide the following: personal financial statements for all principals with at least 20 percent interest in the business and each general partner; business tax records for the 3 most recent tax years; 1 year of personal tax records; balance sheets and operating statements dated within 90 days of the application; and monthly sales figures beginning 3 years before the disaster and continuing through the most current month available. Applicants were also required to undergo the standard credit analysis required for the EIDL program. Since EIDLs are available only to small businesses unable to obtain credit elsewhere, SBA administers its own test to determine whether applicants are able to qualify for private funds under reasonable terms and conditions, or whether the applicant has the financial capacity to recover without federal assistance. September 11 loan applications were processed using SBA’s “credit elsewhere” test, a combination of two formulas that looks at cash flow for debt servicing and available net worth. Loan officers then used information provided in the credit reports, balance sheets, and tax records to determine repayment ability, based primarily on pre-disaster financial performance. SBA required that EIDLs of more than $5,000 be secured by personal guaranties from all business principals and by the “best available collateral.” SBA officials stated that the best available collateral typically would be business or personal real estate, since real estate is the only asset that will likely maintain its value over the life of a 30-year SBA loan. In some cases, SBA accepted other business property as collateral for smaller September 11 EIDLs, if it was the best available, according to SBA officials. Finally, Disaster Loan Program guidelines require that SBA make a character determination on all loan applicants in order to determine eligibility for federal loans. 
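The report describes SBA’s “credit elsewhere” test only as a combination of two formulas, one based on cash flow available for debt servicing and one on available net worth, without publishing the formulas or thresholds. A minimal Python sketch of such a two-part screen follows; the function name, ratios, and thresholds are purely hypothetical placeholders, not SBA’s actual criteria.

```python
def credit_elsewhere_test(annual_cash_flow, annual_debt_service,
                          net_worth, requested_loan):
    """Illustrative two-part 'credit elsewhere' screen.

    The real SBA test combines a cash-flow-for-debt-servicing formula
    with an available-net-worth formula; the ratios below are
    hypothetical stand-ins for those unpublished criteria.
    """
    # Hypothetical test 1: could pre-disaster cash flow comfortably
    # service the applicant's existing debt (here, a 1.25x coverage)?
    cash_flow_covers_debt = annual_cash_flow >= 1.25 * annual_debt_service

    # Hypothetical test 2: is net worth large enough that the business
    # could recover, or borrow privately, without federal assistance?
    net_worth_sufficient = net_worth >= 2 * requested_loan

    # An applicant passing both screens can likely obtain credit
    # elsewhere and would therefore be ineligible for an EIDL.
    return cash_flow_covers_debt and net_worth_sufficient


# A business with thin cash flow fails the screen, so under this
# sketch it would remain eligible for an EIDL.
print(credit_elsewhere_test(40_000, 50_000, 80_000, 100_000))  # False
```

A stronger applicant (say, cash flow of $100,000 against the same debt service, with $300,000 in net worth) would pass both screens and be steered toward private credit instead.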
By statute, SBA is required to deny loans to persons convicted during the past year of a felony committed during a riot or civil disorder or in connection with another declared disaster. SBA uses specific program guidelines to make a character determination on each loan applicant who has a prior arrest, indictment, or conviction or is on parole or probation. Applicants are required to provide information on any previous arrests or convictions. SBA’s guidelines for its Disaster Loan Program generally coincide with best practices published by lending industry experts and guidance issued by federal regulators. As stated previously, modifications that were made specifically for the September 11 disaster did not affect the administration of the program or underwriting criteria for EIDLs made to small businesses nationwide. Disaster Loan Program requirements include specific and clearly stated criteria and processes for analyzing credit and determining repayment ability. Operating procedures for the program also detail internal control and supervisory review directives. According to experts, “a cornerstone of safe and sound banking is the design and implementation of written policies and procedures related to identifying, measuring, monitoring, and controlling credit risk. Such policies should be clearly defined, consistent with prudent banking practices and relevant regulatory requirements, and adequate for the nature and complexity of the bank’s activities.” Further, in order to be effective, credit policies must be communicated throughout the organization, implemented through appropriate procedures, and monitored and periodically revised to take into account changing internal and external circumstances. We compared SBA’s policies and procedures with industry best practices and regulatory guidance for extending credit. 
SBA’s policies and procedures for its Disaster Loan Program in general and EIDLs in particular are presented in SBA’s standard operating procedures and related program memoranda. Underwriting criteria are clearly defined, with specific formulas for SBA’s loan officers to use in evaluating credit risk for each loan applicant. Industry standards also specify the importance of a comprehensive analysis of a borrower’s ability to repay the loan and of requiring a borrower to pledge collateral. SBA’s requirements for loan guaranties and collateral and its analysis of applicants’ cash flow to determine repayment ability are in line with industry guidance on mitigating lender risk in individual credit transactions. Modifications that were made to eligibility and terms for September 11 EIDLs were communicated throughout the agency in program memoranda. SBA provided applications for the expanded program nationwide through its resource partners. Our review of September 11 loan files also indicated that SBA complied with its procedures for supervisory review of all loan decisions. With the Defense Appropriations Act of 2002, Congress approved notable modifications for this disaster that changed the terms for September 11 EIDLs for small businesses. These included increasing the maximum loan limit from $1.5 million to $10 million and raising the maximum repayment deferral period. SBA’s policy of a 4-month deferral period was increased to 2 years, by legislation, for businesses in the immediate areas of the disaster. EIDLs granted in the immediate areas of the disaster also did not accrue interest during the 2-year deferral period. By regulation, borrowers in the immediate disaster areas receiving economic injury loans also had 2 years from the date of approval to apply for additional funds or a modified loan, and borrowers in the expanded area had 1 year. 
Borrowers would thus have sufficient time to assess additional disaster-related damage that had not been detected or reported at the time of the initial application. Under its regulatory authority, SBA expanded eligibility for the September 11 disaster to businesses nationwide that were directly affected by the terrorist attacks and subsequent federal actions such as airport closures that resulted in business disruptions across the country. The expanded program also addressed the needs of small businesses that depended on other businesses and industries whose operations were suspended or disrupted because of the disaster. Businesses in the expanded areas were required to provide an economic injury statement. Applicants needed to make a direct link between the economic downturn affecting their business and the events of the disaster in order to qualify for loans under the expanded program. SBA made other accommodations for September 11 applicants, including increasing the size limits for eligible businesses, expediting loan processing, and providing translators to help non-English speaking applicants. As we noted in a previous report, small business owners had complained to Congress about some facets of the Disaster Loan Program. These complaints prompted SBA and the Congress to modify the program. First, because of the immediate and devastating effect on the travel industry nationwide, SBA increased the business size standards for travel agencies and certain other travel-related businesses. Applications that were pending or had been previously declined or withdrawn solely on the basis of the size of the business were automatically reconsidered, and SBA adjusted the size determination date to the application acceptance date instead of the date of the disaster. For travel agencies and other travel businesses, the size standard was increased from $1 million to $3 million in annual receipts, allowing larger businesses to qualify. 
Second, in an effort to improve efficiency in processing the large number of EIDL requests for September 11, particularly under the expanded program, SBA developed an expedited process for reviewing loan applications. Under the expedited process, applicants that did not qualify based on eligibility criteria or pre-disaster credit and repayment issues were declined early in the review process. Loan officers were required to inform these applicants about the abbreviated process, and applicants could ask to be reconsidered and could submit additional documentation to justify their request. According to SBA officials, expedited processing also allowed it to provide quick loan approval to businesses within the declared disaster area that were in operation at the time of application, up to a maximum of $200,000, and to those that were not in operation because of the events of September 11, up to $350,000. Expedited processing allowed businesses outside of the declared disaster area meeting certain basic requirements to receive quick approval for loan amounts up to $50,000. Third, in direct response to complaints from small business owners in New York City with limited proficiency in English, SBA made efforts to provide loan application documents in languages other than English, including Spanish and Asian languages, and to provide multilingual personnel at New York City application centers. One SBA small business development center representative told us that although this initiative was positive, interpreters who were not familiar with business and financial jargon still faced limitations in communicating adequately with some small business owners. Three nonprofits in New York City that made September 11 disaster loans had requirements similar to SBA’s, but the programs had some additional flexibility to address the needs of their small business constituents (fig. 1). 
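The expedited-processing caps reported for September 11 EIDLs reduce to a simple decision rule. The sketch below encodes only the dollar caps given in this report; the function name and interface are illustrative, and in practice SBA applied eligibility, credit, and repayment screens before any cap became relevant.

```python
def expedited_approval_cap(in_declared_area: bool, in_operation: bool) -> int:
    """Maximum loan amount eligible for quick approval under SBA's
    expedited September 11 process, using the caps in this report.
    Hypothetical helper; not SBA's actual implementation."""
    if in_declared_area:
        # Businesses closed because of the events of September 11
        # qualified for the higher in-area cap.
        return 200_000 if in_operation else 350_000
    # Businesses outside the declared disaster area meeting certain
    # basic requirements could receive quick approval up to $50,000.
    return 50_000


print(expedited_approval_cap(True, False))   # 350000
print(expedited_approval_cap(False, True))   # 50000
```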
One of the nonprofits reported ineligibility for SBA loans or not meeting SBA’s requirements as one of its own criteria for application acceptance. Another reported that its program was geared, in particular, toward small businesses that had not qualified for significant loans from SBA or other recovery loan programs. The existence of these nonprofit lenders provided alternative economic injury assistance to small businesses in New York City. Like SBA, the nonprofits we spoke with had requirements for documentation, creditworthiness, and repayment ability. All three nonprofits required that applicants provide business financial statements, business and personal tax records, credit reports, and a number of other documents. One nonprofit requested, among other documents, corporate bank statements, a business plan, insurance statements, and receipts and invoices for expenses related to September 11. The same nonprofit required that applicants commit to remaining in New York City and asked for a current executed commercial lease. Another nonprofit said that commitment to rebuilding in the area was a factor in the decisionmaking process but did not include this factor in its eligibility requirements. Like SBA, the nonprofits used credit reports and business financial statements to determine an applicant’s level of past debt, management of past credit, and likelihood of repaying the disaster loan. All of the nonprofits reported that credit and repayment histories played an important role in the decisionmaking process, but two of the nonprofits emphasized that applicants were not declined solely on the basis of the information provided in credit reports. One nonprofit considered the direct impact of the disaster on a business’s ability to manage its recent credit, and another reported that it made allowances for special circumstances such as illness and divorce if applicants provided documentation and could show a pattern of good faith efforts to address delinquencies. 
Unlike SBA, all of the nonprofits had limited requirements for collateral and reported that collateral was only requested on a case-by-case basis. One nonprofit reported that collateral was not required, but was accepted in lieu of a guaranty or cosigner for applicants who had been approved with less than satisfactory pre-disaster credit. In such cases, collateral would be accepted, even if it was not enough to secure the entire loan and would be considered “psychological collateral.” Another nonprofit reported that collateral was typically required when a business had a limited operating history or highly unpredictable and inconsistent cash flow, and offered unsecured loans up to $250,000. The third nonprofit reported that business collateral was required on a case-by-case basis but provided no further details. Two of the nonprofits indicated that they required personal guaranties, with one specifying that owners with 20 percent or more interest in the business would need to provide some guarantee. The third nonprofit indicated that it also determined whether to ask for personal guaranties on a case-by-case basis. None of the nonprofits had a requirement similar to SBA’s for appropriate character for their September 11 programs. One of the nonprofits indicated that an applicant’s character was called into question if a written or verbal account was inconsistent with the documentation provided. In our review of SBA’s September 11 EIDL application files, we found that SBA followed its own policies and procedures in determining whether to provide loans to prospective borrowers. Our review of a representative random sample of applications SBA declined or withdrew showed that all of the 99 files contained the documentation and analysis needed to support the determination. 
We also found that SBA followed its procedures for processing loan applications, such as conducting supervisory reviews of loan decisions, and made its determinations and notified applicants in a timely manner. Our review of a small random sample of approved loans also indicated that SBA followed its policies and procedures in granting loans. In all of the 99 loan files we reviewed in our representative random sample, SBA correctly declined 70 and withdrew 29 of the applications. Overall, SBA declined September 11 loan applications primarily because it determined that the applicants were unlikely to be able to repay the loan. While SBA can cite several reasons for declining a loan, it gave lack of ability to repay as at least one of the reasons for declining 38 of the 70 declined loan applications that we reviewed. In these cases, SBA concluded that the applicants’ income was insufficient to repay a disaster loan, given existing debts and expenses, based on the analysis that loan officers conducted using financial information provided by the applicant. Our analysis of the universe of September 11 EIDLs revealed that SBA declined 4,513 applications, or more than half of all declined applications, for lack of repayment ability. Of the 34 declined Expanded EIDL applications in our sample, SBA declined 18, or about half, because the applicants failed to establish a direct link between their business downturn and the events of September 11 or related federal actions, as SBA required of applicants outside of the declared disaster areas. In the universe of Expanded EIDLs, SBA declined 4,186 applications, and 1,975 of these were declined for this reason. For example, a small business in an airport that lost revenue during the period in which air travel was suspended would have been eligible for an SBA September 11 Expanded EIDL. However, a business that simply showed losses after September 11 would not be eligible for a loan. 
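The “about half” figures cited above can be recomputed directly from the counts the report gives, which serves as a quick consistency check on the sample and universe proportions:

```python
# Counts reported above for September 11 EIDL decline decisions.
sample_declined = 70
sample_declined_for_repayment = 38

expanded_universe_declined = 4_186
expanded_universe_no_direct_link = 1_975

# 38 of 70 sampled declines cited lack of repayment ability.
print(f"{sample_declined_for_repayment / sample_declined:.0%}")

# 1,975 of 4,186 Expanded EIDL declines lacked a direct link to the
# disaster -- "about half," consistent with the report's wording.
print(f"{expanded_universe_no_direct_link / expanded_universe_declined:.0%}")
```

Both proportions round to roughly half (54 percent and 47 percent), matching the report’s characterization.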
SBA withdrew loan files primarily because the applicants had not filed federal income tax returns. Of the 29 withdrawn loan applications in our sample, SBA withdrew 16 for this reason. Following its usual procedures, SBA requested the most recent 3 years of business tax records and 1 year of personal tax records directly from IRS. According to a senior SBA official, SBA has a special arrangement with IRS for obtaining federal tax documentation for disaster loan applicants. IRS dedicates staff to processing these requests, and the IRS staff work the same hours as the SBA loan officers in order to provide the needed information as the loans are processed. IRS provides SBA with transcripts of available returns or a notification that no records could be found. SBA withdraws an application if IRS has no record of the applicant’s tax return for at least a year and will also generally withdraw an application when missing or incomplete information prevents the loan officer from making a determination. Figure 2 provides additional information on the reasons SBA declined and withdrew loan applications in our sample. Our analysis of the universe of September 11 EIDLs revealed that SBA withdrew 1,294 applications, or about 38 percent of all withdrawn applications, because IRS had no record of tax returns for the applicants for 1 or more years. We found technical errors in 2 of the 99 files we reviewed, although the facts presented in the application files showed that SBA would not have granted the loan, even if the errors had not been made. SBA declined one application because the applicant owed federal income taxes and lacked repayment ability, even though the applicant was a nonprofit and therefore ineligible for an EIDL. SBA notified another applicant that it was declining the application for policy reasons because the applicant was a subsidiary of a foreign company and had no revenues in the United States. 
According to an SBA official, SBA’s policies and procedures suggest that the application could have also been declined for lack of repayment ability. In our review of declined and withdrawn loan files, we found that SBA followed its policies and procedures for conducting supervisory reviews of loan decisions and notifying applicants of the decisions and that the agency generally processed applications in a timely manner. In all of the 99 declined and withdrawn files that we reviewed, an SBA supervisory loan officer signed the loan officer’s report, which documents how the loan officers came to the decision on the application. On many of the loan officer’s reports, the supervisory loan officer made some notations assessing the loan officer’s analysis of the application. Additionally, all of the files contained correspondence to the applicant documenting SBA’s decision that clearly described SBA’s reasons for declining or withdrawing the application, the deficiencies in the application and additional documentation required (if applicable), and the applicant’s right to have the application reconsidered. We also found that SBA generally processed the loan applications in a timely manner, as defined in SBA procedures. At the time SBA processed the September 11 loans, its benchmark was to process loan applications within 21 days. For most of the files that we reviewed, SBA made a decision within 14 days of the application date (fig. 3). Our analysis of the universe of all September 11 EIDLs found that SBA processed declined files in an average of 11 days and withdrew files in an average of 13 days. Based on our review of a small sample of loan files, SBA also followed its own policies and procedures in approving September 11 disaster loans. However, this sample was not representative and cannot be projected to the universe of September 11 EIDLs. 
In our review of 27 approved loan files, we found that they contained all of the financial documentation and underwriting analysis required to approve the loans, according to SBA’s policies and procedures. However, we did find an error in one of the approved loan files. In this case, an applicant had stated on his application that he was the sole proprietor of his business and not a U.S. citizen. Under these circumstances, SBA was supposed to request that the applicant provide proof that he was a non-citizen national or qualified alien. Based on evidence in the file, the applicant had not provided proof of his alien status. As with our review of the declined and withdrawn files, the approved loans all showed evidence of supervisory review. Of the 27 approved loan files we reviewed, SBA had initially declined or withdrawn eight. In these eight files, applicants had deficiencies similar to those of the declined or withdrawn loan files we reviewed but were able to address the deficiencies and reapply. For example, the applicants whose files had been withdrawn because of income tax issues reapplied after filing and paying federal income taxes, allowing SBA to approve the loans. In one of the approved loan files we reviewed, SBA withdrew the application for failure to file federal income tax returns. After the applicant filed federal tax returns, SBA then declined the application for lack of repayment ability and unsatisfactory history on a federal obligation, or failure to pay federal income taxes. After setting up a payment plan with the IRS and reducing expenses, the applicant reapplied and SBA approved the loan. In another approved loan file, SBA initially declined the application because the applicant had not substantiated the economic injury. Based on SBA’s analysis of the applicant’s documentation, the business would be able to meet its financial obligations without a loan.
However, the applicant provided further documentation to show that it had lost contracts because of the September 11 disaster and that the loss of business would have a negative effect on the firm over time. The additional documentation allowed SBA to approve the loan. Although SBA is not required to maintain specific underwriting criteria for its Disaster Loan Program under the provisions of the Small Business Act, we think that SBA’s policies are generally consistent with good lending policies as reflected in industry best practices and regulatory guidance, and, when properly applied, should help maintain the integrity of the program. SBA’s underwriting procedures evaluate applicants’ credit risk and analyze their ability to repay the loan. These procedures, along with requiring collateral to secure the loans, help ensure that SBA fulfills its mission in providing loans that will assist small businesses in recovering from disasters. By assessing repayment ability, SBA can more effectively use its resources to assist small businesses that are more likely to be able to repay the loan, thus limiting the loan program’s cost to the government, and therefore the taxpayer. We provided a draft of this report to SBA and received written comments from the Associate Administrator for Disaster Assistance. SBA’s letter is reprinted in appendix II. SBA agreed with the findings presented in this report. In addition, SBA provided technical comments, which we incorporated into this report as appropriate. We will provide this report to appropriate congressional committees. In addition, this report will be available at no charge on our web site at http://www.gao.gov. Please contact me at (202) 512-8678 or [email protected] or Katie Harris, Assistant Director, at (202) 512-8415 or [email protected] if you or your staff have any questions about this report.
Key contributors to this report were Bernice Benta, Gwenetta Blackwell-Greer, Diane Brooks, Jackie Garza, Fred Jimenez, and Carl Ramirez. To determine whether Economic Injury Disaster Loan (EIDL) program policies are consistent with the law and overall mission of SBA’s Disaster Loan Program, we reviewed the Small Business Act and SBA’s related regulations. We determined what the provisions of the law require of SBA in its operation of the program as well as SBA’s regulations and operating procedures. We discussed our views on the laws, regulations, and operating procedures with appropriate SBA officials. To compare SBA’s underwriting policies and criteria for September 11 EIDLs with those of nonprofit lenders active in New York City after the disaster, we reviewed SBA’s policies and criteria for approving, declining, and withdrawing disaster loans, and amendments made after September 11. We also compared SBA’s underwriting requirements with industry best practices and banking regulators’ guidance for managing credit risk during the lending process. We spoke with officials of nonprofit organizations (nonprofits) that provided loans to small businesses in New York City after September 11, and reviewed their underwriting policies and criteria. We requested specific information on their loan programs to answer questions regarding (1) eligibility requirements for each nonprofit’s program, (2) type of documentation that was required to accompany a loan application, (3) actual limits and terms associated with available loans, and (4) factors that each nonprofit considered in making the decision to approve or decline an application. We reviewed this information within each of the four categories and compared it with SBA’s EIDL policies and criteria applicable to post-September 11 lending.
To determine whether SBA correctly applied its policies in the disposition of September 11 EIDL applications, we reviewed a representative random sample of declined and withdrawn September 11 EIDL application files across all disaster area offices, and a small sample of loan application files for approved September 11 EIDLs. We developed a data collection instrument containing key factors we identified in SBA’s standard operating procedures and reviewed each loan application file to determine whether there was evidence that the appropriate policies and criteria had been applied in determining the disposition of each application. The representative sample of declined and withdrawn files allowed us to project to the universe of about 12,000 declined and withdrawn EIDLs. The small sample of approved loans did not allow us to project to the universe of all approved loans, and we discuss the disposition only of the files that we reviewed. We sampled from the original population of all 24,041 September 11 disaster loan applications. We selected a probability sample using a design that was stratified by SBA’s four disaster area offices and whether or not the loan application was declined or withdrawn. We also selected a smaller simple random sample from among all of the accepted loan applications, as a check to see how the loan files differed from those withdrawn or declined. We assessed the reliability of SBA’s database, the Automated Loan Control System, and found it acceptable for our purposes. Additional details about our sampling methodology follow. The sampling unit was the paper copy of a loan application file. The sample sizes were estimated at the 95 percent level of confidence for a desired precision of 6 percent. The sample size was estimated using a formula appropriate for estimating an attribute in a stratified design. Within the universe, some older application files had already been shredded. 
Under SBA’s procedures, declined and withdrawn files that have been inactive for 2 years may be shredded. To account for this, the sample size was increased slightly within each of the eight strata, in case one of these files appeared in the random sample. However, none of the files in our sample had been shredded—SBA was able to provide us with all of the files we requested. Among the eight strata, a sample size of 103 was proportionally allocated and then selected. The strata allocation and final disposition of the sample are shown in table 1. In our loan application file reviews, we found that 4 of the 103 loan application files sampled were physical injury disaster loan files, not EIDL applications, and we excluded them as outside our study’s scope. In the entire original population of 13,171 declined or withdrawn applications, we found 808 corresponding out-of-scope records. In addition, there were 223 files that SBA indicated had been shredded. Therefore, the final study population that we analyzed and to which our data collection instrument sample is projected is 12,140. Our confidence in the precision of the results from this sample is expressed in 95-percent confidence intervals. The 95-percent confidence intervals are expected to include the actual results for 95 percent of the samples of this type. We calculated confidence intervals for our study results using methods that are appropriate for probability samples of this type. For all of the percentages presented in this report, we are 95-percent confident that the results we would have obtained, had we studied the entire population, are within plus or minus 6 or fewer percentage points of our results, unless otherwise noted. We located and reviewed all 99 declined or withdrawn sampled files. We also reviewed 27 of the 30 approved loans in our nonprobability sample.
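The sampling arithmetic above can be sketched in code. The report does not give the exact formula or the assumed attribute proportion it used, so the function below is only the textbook sample size for estimating a proportion to a stated precision, with a finite-population correction; the proportion p and the multiplier z = 1.96 are illustrative assumptions, and the study-population accounting mirrors the counts quoted above.

```python
import math

def sample_size(N, d, p, z=1.96):
    """Textbook sample size for estimating an attribute (a proportion
    near p) in a population of N to within +/- d at roughly 95 percent
    confidence (z = 1.96), with a finite-population correction."""
    n0 = z**2 * p * (1 - p) / d**2               # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))    # adjusted for finite N

# Final study population: 13,171 declined or withdrawn applications,
# minus 808 out-of-scope records and 223 shredded files.
study_population = 13171 - 808 - 223             # 12,140

# The n required for +/- 6 points depends heavily on the assumed proportion.
n_conservative = sample_size(study_population, d=0.06, p=0.5)   # worst case
n_low_prevalence = sample_size(study_population, d=0.06, p=0.1)
```

With the conservative assumption p = 0.5 the formula calls for roughly 262 files, while p = 0.1 calls for about 96, close to the 103 actually allocated; the per-stratum targets and allocation details that produced 103 are not given in the text.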
SBA reported that three of the files in our sample were not readily available because the loans had been paid in full by the borrower, and the loan files had been placed in storage. To ensure accuracy of our file reviews, two GAO analysts reviewed each of the loan files. Based on the reviews of documentation in the files, we entered information into an automated data collection instrument. We also conducted basic checks on the programming and analysis of the file review data. We conducted our work in Atlanta, GA; New York, NY; and Washington, D.C., between May 2003 and June 2004 in accordance with generally accepted government auditing standards. | The Small Business Administration (SBA) played a key role in assisting small businesses affected by the September 11, 2001 terrorist attacks by providing over $1 billion in disaster loans to businesses that sustained physical damage or economic injury.
Small businesses in the immediate areas of the attacks and others nationwide that suffered related economic injury were eligible to apply for disaster loans. SBA declined or withdrew about half of these loan applications. SBA's disaster loans are direct federal government loans provided at a subsidized interest rate. In response to concerns that more small businesses impacted by September 11 could have benefited from SBA's disaster loans, GAO conducted a review of its Disaster Loan Program. Specifically, GAO addressed the following questions: (1) Are the disaster program policies consistent with the law and the overall mission of SBA's Disaster Loan Program? (2) What were SBA's underwriting policies and criteria for September 11 Economic Injury Disaster Loans (EIDL) and how did they compare with those applied by nonprofit lenders that were active in New York City after September 11? (3) Did SBA correctly apply its policies and procedures in its disposition of September 11 EIDLs? SBA's policies and procedures for providing EIDLs are consistent with the Small Business Act: applicants must have suffered substantial economic injury as a result of a declared disaster, and SBA must determine that they are not able to obtain credit elsewhere. The act addresses some loan terms, such as length of maturity, but it does not specify underwriting criteria for SBA to follow. However, SBA's regulations contain underwriting criteria such as assessing an applicant's ability to repay the loan and obtaining collateral. SBA's underwriting requirements for September 11 EIDLs generally followed program guidelines and were similar to those of selected nonprofit organizations in New York City. Small businesses that were eligible to apply for SBA assistance were expected to meet standard requirements for documentation, creditworthiness, repayment ability, collateral, and character. 
These requirements are generally consistent with best practices published by lending industry experts and guidance issued by federal regulators. Changes made to address the unusual circumstances of the September 11 disaster were to eligibility and loan terms and not to loan underwriting criteria. The three nonprofit organizations in New York City that made September 11 disaster loans had requirements similar to SBA’s, but the nonprofits had some additional flexibility to address the needs of their small business constituents. GAO found that SBA followed its policies and procedures in making decisions for September 11 EIDLs. All of the files in GAO’s random, representative sample of declined or withdrawn applications contained documentation and analysis to support SBA’s determination. GAO’s review of this sample also indicated that SBA followed its procedures for processing applications--such as supervisory review and notifying applicants of its decision and their right to have the application reconsidered. GAO’s review of a small sample of approved loans also indicated that SBA followed its policies and procedures. |
The National Cemeteries Act of 1973 (P.L. 93-43) authorized NCS to bury eligible veterans and their family members in national cemeteries. NCS operates and maintains 114 national cemeteries located in 38 states and Puerto Rico. In fiscal year 1996, NCS performed about 72,000 interments and maintained more than two million burial sites and over 5,600 acres of land developed for interment purposes. NCS offers veterans and their eligible family members the options of casket interment and interment of cremated remains in the ground (at most cemeteries) or in columbarium niches (at nine cemeteries). NCS determines the number and type of burial options available at each of its national cemeteries. The standard size of casket grave sites, the most common burial choice, is 5 feet by 10 feet, and the grave sites are prepared to accommodate two caskets stacked one on top of the other. A standard in-ground cremains site is 3 feet by 3 feet and can generally accommodate one or two urns. The standard columbarium niche used in national cemeteries is 10 inches wide, 15 inches high, and 20 inches deep. Niches are generally arrayed side by side, four units high, and can hold two or three urns, depending on urn size. Figure 1 shows a columbarium and in-ground cremains sites at national cemeteries. Armed forces members who die while on active duty and certain veterans are eligible for burial in a national cemetery. Eligible veterans must have been discharged or separated from active duty under other than dishonorable conditions and have completed the required period of service. People entitled to retired pay as a result of 20 years’ creditable service with a reserve component of the armed services are also eligible. U.S. citizens who have served in the armed forces of a government allied with the United States in a war may also be eligible. 
The benefit of burial in a national cemetery is further extended to spouses and minor children of eligible veterans and of active duty members of the armed forces. A surviving spouse of an eligible veteran who later marries a nonveteran, and whose remarriage is terminated by death or divorce, is also eligible for burial in a national cemetery. Burial in a VA cemetery includes, at no cost to the veteran, one grave site for the burial of all eligible family members. Also included are the opening and closing of the grave, perpetual care of the site, and a government headstone or marker and grave liner. Veterans’ families are required to pay for services provided by funeral directors and additional inscriptions on the headstone or marker. Generally grave sites may not be reserved; space is assigned at the time of need on the basis of availability. In addition to burying eligible veterans and their families, NCS manages three related programs: (1) the Headstones and Markers Program, which provides headstones and markers for the graves of eligible people in national, state, and private cemeteries; (2) the Presidential Memorial Certificates Program, which provides certificates to the families of deceased veterans recognizing their contributions and service to the nation; and (3) the State Cemetery Grants Program, which provides aid to states in establishing, expanding, or improving state veterans’ cemeteries. In 1978, Public Law 95-476 authorized NCS to administer the State Cemetery Grants Program, under which states receive financial assistance to provide burial space for veterans and eligible dependents. State veterans’ cemeteries supplement the burial service provided by NCS. The cemeteries are operated and permanently maintained by the states. A grant may not exceed 50 percent of the total value of the land and the cost of improvements. The remaining amount must be contributed by the state. 
The State Cemetery Grants Program has funded the establishment of 28 veterans’ cemeteries, including three cemeteries currently under development, located in 21 states, Saipan, and Guam. The program has also provided grants to state veterans’ cemeteries for expansion and improvement efforts. While VA strongly encourages states to adopt the eligibility criteria applied to national cemeteries, states have been allowed to establish eligibility criteria for interments that differ from VA-established criteria, but only if their criteria are more restrictive than those established for national cemeteries. In other words, state veterans’ cemeteries cannot be used for the interment of people who are not eligible for burial in a national cemetery. Most states have a residency requirement, and some states restrict eligibility to veterans who were honorably discharged, had wartime service, or both. As the veteran population ages, NCS projects the demand for burial benefits to increase. NCS has a strategic plan for addressing the demand for veterans’ burials up to fiscal year 2000, but the plan does not tie its strategic and performance goals to external factors such as veterans’ mortality rates and preferences for burial options—that is, caskets, in-ground cremains, or columbaria niches. In addition, NCS’ strategic plan does not address long-term burial needs—that is, the demand for benefits during the expected peak years of veteran deaths, when pressure on the system will be greatest. Beyond the year 2000, NCS officials said they will continue using the basic strategies contained in the current 5-year plan. With the aging of the veteran population, veteran deaths continue to increase each year. For example, NCS projects annual veteran deaths will increase about 20 percent between 1995 and 2010, from 513,000 to 615,000, as shown in figure 2. Moreover, NCS projects that veteran deaths will peak at about 620,000 in 2008. 
The demand for veterans’ burial benefits is also expected to increase. For example, NCS projects annual interments will increase about 42 percent between 1995 and 2010, from 73,000 to 104,000. NCS projects that annual interments will peak at about 107,000 in 2008. According to its 5-year strategic plan (1996-2000), one of NCS’ primary goals is to ensure that burial in a national or state veterans’ cemetery is an option for all eligible veterans and their family members. The plan sets forth four specific strategies for achieving this goal. First, NCS plans to establish, when feasible, new national cemeteries. NCS is currently establishing five new national cemeteries, which are in various stages of development, and projects that all will be operational by 2000. A second strategy for addressing veterans’ burial demand is to develop available space for cremated remains. NCS plans to survey national cemeteries to determine what space is available for use as in-ground cremains sites, construct additional columbaria at eight existing cemeteries, and include columbaria at the five new cemeteries. Third, NCS plans to acquire land through purchase or donation. NCS plans to use this land to extend the burial capacity and service period of national cemeteries currently projected to run out of available grave sites. Fourth, NCS plans to encourage states to provide additional burial sites for veterans through participation in the State Cemetery Grants Program. According to the plan, NCS plans to identify and prioritize those states most in need of a veterans’ cemetery; design a marketing strategy for those states; visit a minimum of four of those states annually until all prioritized states have been visited; and participate in the state conferences of at least three veterans’ service organizations (for example, the American Legion and the Veterans of Foreign Wars) each year. 
In addition to the strategic and performance goals, the plan also discusses assumptions, such as veterans’ demographics (the projected increases in veteran deaths and interments), and external factors, such as resource constraints, that could delay achievement of the plan’s performance goals. However, the plan does not tie the strategic and performance goals to its assumptions. For example, while the plan includes some data on demographic trends in the veteran population, it does not explain how these data were used in setting strategic goals, or how they will be used to measure progress in achieving these goals. Neither does the plan tie its strategic and performance goals to external factors—such as preferences for VA, state, or private cemeteries and preferences for casket, in-ground cremains, or columbaria niche burial—that will affect the need for additional VA and state cemetery capacity. NCS tracks actual burial practices in national cemeteries, monitors trends in the private cemetery sector, and in 1992 surveyed veterans to determine their preferences for type of cemetery (national, state, or private) and burial option (casket or cremation burial). Despite NCS plans to ensure that burial in a national or state veterans’ cemetery is an available option, officials acknowledge that large numbers of veterans currently do not have access to a veterans’ cemetery within a reasonable distance of their place of residence. For example, NCS estimates that of the approximately 26 million veterans in 1996, about 9 million (35 percent) did not have reasonable access to a national or state veterans’ cemetery. According to NCS officials, most underserved areas are major metropolitan regions with a high concentration of veterans. 
With the completion of the five new cemeteries, NCS officials estimate that the percentage of veterans who will have reasonable access to a veterans’ cemetery will increase from about 65 percent in fiscal year 1996 to about 77 percent in fiscal year 2000. Although NCS has a 5-year strategic plan for addressing veterans’ burial demand during fiscal years 1996 through 2000, it is unclear how NCS plans to address the demand beyond 2000. For example, NCS has not developed a strategic plan to address veterans’ burial demand during the peak years of veteran deaths, when pressure on the system will be greatest. According to NCS’ Chief of Planning, although its strategic plan does not address long-term burial needs, NCS is always looking for opportunities to acquire land to extend the service period of national cemeteries. For example, NCS is working to acquire land for one of its west coast cemeteries that is not scheduled to run out of casket sites until the year 2011. Also, to help address long-range issues, NCS compiles key information, such as mortality rates, number of projected interments and cemetery closures, locations most in need of veterans’ cemeteries, and cemetery-specific burial layout plans. In addition, the planning chief pointed out that the Government Performance and Results Act requires a strategic plan to cover only a 5-year period. However, the Results Act allows an agency to extend its strategic plan beyond a 5-year period to address future goals. Although NCS’ strategic plan notes that annual veteran deaths are expected to increase about 20 percent between 1995 and 2010, the plan does not indicate how the agency will begin to position itself to handle this increase in demand for burial benefits. A longer planning period would provide the opportunity to develop strategies for obtaining funds, acquiring land, assessing veterans’ preferences, or all three. 
While NCS does not have a formal strategic plan to address veterans’ burial demand beyond the year 2000, NCS officials said they will continue using the basic strategies contained in the current 5-year plan. For example, NCS plans to enhance its relationship with states to establish state veterans’ cemeteries through the State Cemetery Grants Program. According to NCS’ Chief of Planning, NCS will encourage states to locate cemeteries in areas where it does not plan to operate and maintain national cemeteries. Since the State Cemetery Grants Program’s inception in 1978, fewer than half of the states have established veterans’ cemeteries primarily because, according to NCS officials, states must provide at least half of the funds needed to establish, expand, or improve a cemetery, as well as pay for all equipment and annual operating costs. Furthermore, the Director of the State Cemetery Grants Program told us that few states, especially those with large veteran populations, have shown interest in legislation that VA proposed in its 1998 budget submission in order to increase state participation. This legislation would increase the federal share of construction costs from 50 to 100 percent and permit federal funding for up to 100 percent of initial equipment costs. In fact, according to the Director, state veterans’ affairs officials said that they would rather have funding for operating costs than for construction. In addition, VA does not plan to request construction funds for more than the five new cemeteries, which will be completed by the year 2000, because of its commitment to deficit reduction. Officials said that even with the new cemeteries, interment in a national or state veterans’ cemetery will not be “readily accessible” to all eligible veterans and their family members. According to NCS officials, most underserved areas will be major metropolitan areas with high concentrations of veterans, such as Atlanta, Georgia; Detroit, Michigan; and Miami, Florida.
As demand for burial benefits increases, cemeteries become filled, thus reducing the burial options available to veterans and their families. We developed a model to analyze the relative costs of three types of cemeteries. The analysis showed that over 30 years, the traditional casket cemetery would be the most expensive interment option. Our analysis also showed that there would be no significant difference in the costs of columbarium and in-ground cremains cemeteries. Although the development and construction costs are higher for a columbarium cemetery, operating costs are higher for an in-ground cremains cemetery. Table 1 compares the 30-year costs of these three types of cemeteries. (See app. II for a detailed cost comparison of the three types of cemeteries.) A cemetery providing only casket burials would be the most expensive interment option, costing, on average, over twice as much as columbarium or in-ground cremains cemeteries. We estimated that over a 30-year period, the casket cemetery would cost over $50 million, compared with about $21 to $23 million for either of the two cremation cemeteries. The difference in costs is due primarily to the higher land development and operations/maintenance costs of a casket cemetery. Specifically, providing 50,000 grave sites for 30 years would require developing about 115 acres at a cost of $8.4 million, compared with 34 acres for an in-ground cremains cemetery and 14 acres for a columbarium cemetery, costing about $2.5 million and $1 million, respectively. Over 30 years, the total operations and maintenance cost for a casket cemetery is three times as much as that for a columbarium cemetery and over twice as much as that for an in-ground cremains cemetery. As table 1 shows, providing burial services and maintenance activities for a 115-acre casket cemetery would result in higher nonlabor and labor costs. 
For example, it requires about 39 full-time staff to operate and maintain a casket cemetery, compared with about 21 full-time staff for an in-ground cremains cemetery and 14 full-time staff for a columbarium cemetery. Over 30 years, it would cost about the same to plan, design, construct, operate, and maintain a columbarium and an in-ground cremains cemetery with 50,000 burial spaces: $23 and $21 million, respectively. The development and construction cost is higher for a columbarium cemetery, but its operations and maintenance cost is lower than that of an in-ground cremains cemetery. As table 1 shows, over 30 years the development and construction cost for a columbarium cemetery would be, on average, about three times as much as that for an in-ground cremains cemetery. This difference in costs is primarily due to the cost of building the columbarium structure. The operations and maintenance cost of an in-ground cremains cemetery is almost twice as much as that of a columbarium cemetery. This cost difference can be attributed to the fact that columbarium cemeteries have fewer acres to maintain, resulting in lower nonlabor and labor costs. As existing national cemeteries reach their capacity, columbarium burial offers the most efficient option for extending cemetery service periods. We developed a model to analyze the cost of three interment options on the basis of the cost of developing a total of 1 acre of land, composed of parcels of land not contiguous to each other, in a cemetery nearing exhaustion of available casket grave sites. The analysis showed that the average burial cost would be lowest and the service delivery period the longest using columbarium interment. The analysis also showed that the average cost per burial would be about the same for columbarium niches as for in-ground cremains sites. 
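The 30-year totals in table 1 can be cross-checked with simple per-unit arithmetic. The sketch below uses the rounded dollar, acreage, and capacity figures quoted in the text; the per-space and per-acre values it derives are not stated in the report.

```python
# 30-year model: each cemetery type provides 50,000 burial spaces.
SPACES = 50_000

# (30-year total cost $M, acres developed, land development cost $M),
# rounded figures from the text.
scenarios = {
    "casket":             (50, 115, 8.4),
    "in_ground_cremains": (21,  34, 2.5),
    "columbarium":        (23,  14, 1.0),
}

for kind, (total_m, acres, dev_m) in scenarios.items():
    per_space = total_m * 1e6 / SPACES   # average cost per burial space
    per_acre = dev_m * 1e6 / acres       # land development cost per acre
    print(f"{kind}: ${per_space:,.0f} per space, ${per_acre:,.0f} per acre")
```

The derived development cost per acre is roughly $71,000 to $74,000 for all three types, which suggests that the cost gap in table 1 is driven almost entirely by how many acres each interment option consumes, not by differences in per-acre development cost.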
However, columbarium interment would extend the service period by about 50 years, while in-ground cremains interment would extend the service period about 3 years and casket burials, about half a year. Casket burials would be the most expensive per burial and would have the shortest service period. At the end of fiscal year 1996, 57 of VA’s 114 national cemeteries had exhausted their supply of casket grave sites available to first family members, as shown in figure 3. Of these 57 cemeteries, 38 could accommodate casket burial of subsequent family members and interment of cremated remains of both first and subsequent family members. Nineteen could accommodate only subsequent family members—for either casket or cremated remains interment. According to NCS’ Chief of Planning, unless NCS acquires additional land, it projects that 15 cemeteries will totally deplete their inventory of casket grave sites for first family members by 2010, and another 16 cemeteries will do so by 2020. In total, by 2020, NCS projects that 88 of the 119 national cemeteries (74 percent) will no longer be able to accommodate casket burials of first family members. As less burial space is available, columbarium burial offers the most efficient interment option for extending the service period of existing cemeteries. Our analysis of the costs of three interment options, based on the development of 1 remaining acre of land, pieces of which were not contiguous to each other, showed that the average burial cost would be lowest using columbarium interment. For example, the average columbarium interment cost would be about $280, compared with about $345 for in-ground cremains burial and about $655 for casket burial, as shown in figure 4. Our analysis also showed that the service delivery period would be extended the most using the columbarium. 
For example, a total of 1 acre of land could accommodate about 87,000 columbarium niches and could extend the service delivery period for over 52 years, compared with about 3 years for about 4,800 in-ground cremains sites and about 1/2 year for about 870 casket sites, as shown in figure 5. Although NCS officials acknowledge that the columbarium option could extend the service delivery period of existing cemeteries, they said that it has been used to do so at only one national cemetery, which is located on the west coast. Furthermore, at the end of fiscal year 1996, only 9 of the 114 national cemeteries offered interment in a columbarium, while the majority of cemeteries provided casket and in-ground cremains sites. According to NCS officials, NCS has not made greater use of columbaria primarily because of their substantial up-front construction costs. Officials said they generally develop casket and in-ground cremains sites first because they believe the initial costs are less. However, our analysis showed that the total cost per burial would be lower for a columbarium because of its low operations and maintenance costs. Columbaria would be particularly useful in metropolitan areas where interment rates are high; past or projected cremation demand is significant; land is scarce, expensive, or both; and no state veterans’ cemetery exists to compensate for the lack of available national cemetery grave sites. For example, at one midwestern cemetery, NCS plans to add about 8,000 casket sites, but no cremation sites, to its last acres. With the additional casket sites, the cemetery is projected to deplete all burial spaces about the time veteran deaths peak, and no state veterans’ cemetery exists to compensate for the lack of burial spaces. 
However, by incorporating columbaria into 1/2 acre of land, this cemetery could continue to provide a burial option to thousands of additional veterans, who otherwise would have no burial option available to them within a reasonable distance of their homes, and keep the cemetery open well beyond the peak years. While historical data imply that the majority of veterans and eligible dependents prefer a casket burial, NCS national data show that the demand for cremation at national cemeteries is increasing. For example, while about 70 percent of veterans prefer a casket burial, veterans choosing cremation increased from about 20 percent of the veteran population in 1990 to nearly 30 percent in 1996, and NCS officials expect demand for cremation to continue to increase in the future. At cemeteries offering both types of interments, the ratio of casket to cremation interments varies significantly. For example, cremation accounts for over 40 percent of interments at some cemeteries and less than 5 percent at others. In addition, according to cemetery directors, veterans choosing cremation do not strongly prefer either in-ground burial or interment in a columbarium niche. The incidence of cremation also continues to increase in the general population. For example, cremation was chosen for about 14 percent of nationwide burials in 1985 and about 21 percent in 1995. The Cremation Association of North America (CANA) projects that cremations will account for about 40 percent of all burials by 2010. Like other interment options, cremation is an individual’s decision and is subject to influences such as culture, religion, geographic area of the country, and age and generational preferences. According to CANA, people choose cremation primarily because it is perceived as less expensive and simpler than traditional casket burial, it uses less land, and it offers more options for memorialization. 
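The service-period comparison above follows from simple arithmetic, which can be sketched as follows. This is our illustrative sketch, not NCS code: the per-acre site counts (about 871 casket sites, 4,840 in-ground cremains sites, and 87,000 columbarium niches) and the assumed rate of 1,667 first family member interments per year are taken from the cost analysis described later in this report, and the helper names are ours.

```python
# Illustrative sketch of the service-period arithmetic: the number of
# sites that fit in 1 acre, divided by the cost model's assumed rate of
# 1,667 first family member interments per year. Counts and rate are
# from the report; everything else is a hypothetical reconstruction.
SITES_PER_ACRE = {
    "casket": 871,               # casket grave sites per acre
    "in_ground_cremains": 4840,  # in-ground cremains sites per acre
    "columbarium": 87000,        # columbarium niches per acre
}
ANNUAL_FIRST_INTERMENTS = 1667

def service_years(mode: str) -> float:
    """Years until the acre's first family member sites are exhausted."""
    return SITES_PER_ACRE[mode] / ANNUAL_FIRST_INTERMENTS

for mode, sites in SITES_PER_ACRE.items():
    print(f"{mode}: {sites} sites, {service_years(mode):.1f} years of service")
```

At this rate the acre lasts about half a year under casket burial, about 3 years under in-ground cremains burial, and over 52 years under columbarium interment, which is the comparison shown in figure 5.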
Long-range planning is crucial to addressing veterans’ burial needs during the peak years and beyond. Although NCS has a 5-year strategic plan, it does not address veterans’ burial needs beyond the year 2000, when the demand for burial benefits will be greatest. Specifically, while the World War II veteran population is entering its peak years of need, many national cemeteries are depleting their inventory of available casket grave sites. As a result, additional burial sites are needed to help meet future burial demand. In some cases, state veterans’ cemeteries could reduce the negative impact of the loss of available casket spaces from a national cemetery. However, it does not appear that state veterans’ cemeteries will be able to accommodate all veterans seeking interment. Therefore, NCS needs to rely more on extending the service periods of its existing national cemeteries. Columbaria can more efficiently utilize available cemetery land at a lower average burial cost than the other interment options and can also extend the service period of existing national cemeteries. Using columbaria also adds to veterans’ choice of services and recognizes current burial trends. Although cremation will not be the preferred burial option for all veterans, identifying veterans’ burial preferences would enable NCS to better manage limited cemetery resources and more efficiently meet veterans’ burial needs. To better serve the American veteran, we recommend that the Secretary of Veterans Affairs instruct the director of the National Cemetery System to extend its strategic plan to address veterans’ long-term burial demand during the peak years of 2005 to 2010; collect and use information on veterans’ burial preferences to better plan for future burial needs; and identify opportunities to construct columbaria in existing cemeteries, for the purpose of increasing burial capacity and extending the cemeteries’ service periods.
In commenting on a draft of this report, the Director of NCS stated that our recommendations appeared valid and represented the vision and performance of NCS in meeting the burial needs of veterans. He also said that NCS is currently executing many of the practices recommended by our report. For example, the NCS Director concurred with our recommendation that NCS develop plans to address veterans’ long-term burial demand during the peak years and stated that NCS is already performing long-term planning, as evidenced by numerous strategies and activities. We recognize that NCS has developed valuable information from such sources as the Management and Decision Support System and cemetery master plans to help it address long-range issues, but even with this information, NCS is unable to specify the extent to which veterans will have access to a national or state veterans’ cemetery during the peak years. NCS’ estimates of the percentage of veterans who will have access to a veterans’ cemetery stop at the year 2000. NCS needs to develop a strategic plan that links information such as mortality rates and the number of projected interments and cemetery closures, obtained from various sources, to its strategic goals, performance measures, and mitigation plans over the next 15 years. For example, one of NCS’ goals is to ensure that a burial option is available to all eligible veterans. Although NCS’ current strategic plan estimates a 20-percent increase in annual veteran deaths between 1995 and 2010, it does not indicate how NCS will begin to position itself to handle this increase in demand for burial benefits. 
Because of the lead time required to acquire land and develop some types of interment spaces, NCS needs to develop strategies that address such issues as (1) how many burial spaces will be needed at each cemetery to accommodate the projected demand for burial benefits during the peak years; (2) how NCS will acquire the additional burial spaces—for example, by purchasing adjacent land or maximizing existing land by using columbaria; and (3) when and how NCS will obtain funds, acquire land, and assess veteran preferences. In addition, while one of NCS’ strategies for meeting the projected burial demand includes encouraging states to build cemeteries, the Director of the State Cemetery Grants Program told us that few states, especially those with large veteran populations—such as New York, Florida, Texas, Ohio, and Michigan—would be swayed by proposed legislation that would increase the federal share of construction and equipment costs. NCS officials also acknowledged that their ability to persuade states to participate in the program is limited, because the states must take the initiative to request grant funds. We revised our previous recommendation to encourage NCS to extend its strategic plan to address veterans’ long-term burial demand during the peak years of 2005 to 2010. The NCS Director also concurred with our recommendation to collect and use information on veterans’ burial preferences to better plan for future burial needs. While the Director stated that NCS carefully tracks actual burial practices in national cemeteries and monitors trends in the private cemetery sector, and that these indexes offer a reliable method of planning for the future, he said that additional data on veterans’ preferences would assist NCS in its planning efforts. Therefore, he stated that NCS will include questions pertaining to personal burial preferences in the next VA National Survey of Veterans. 
Finally, the Director of NCS concurred with our recommendation to identify opportunities to construct columbaria in existing cemeteries for the purpose of increasing burial capacity and extending the service delivery period of these cemeteries. He asserted that NCS is already accomplishing what our recommendation was intended to achieve in that it (1) plans to add columbaria at eight existing cemeteries and five new cemeteries and (2) annually considers all sites that may warrant the establishment of columbarium units. We acknowledge, as stated in our report, that NCS plans to add columbaria at 8 of the 114 existing national cemeteries and include columbaria in its 5 new cemeteries. However, the intent of our recommendation was to encourage VA to identify opportunities to construct columbaria in cemeteries that are nearing depletion of casket grave sites for first family members or have already run out. This will involve at least 72 cemeteries by 2010. Although NCS acknowledges that columbaria could extend service at a cemetery that would otherwise be closed to veteran use, they have only been used for this purpose at one national cemetery. While the NCS Director stated in his comments that NCS considers the anticipated ratio of casket burial to cremains burial when planning for the future, during our review, NCS officials stated that they primarily use historical usage data. For example, at one cemetery, NCS planned to allocate more than 30 percent of the burial spaces for cremation sites, although the cremation rate for the state in which the cemetery was located was more than 50 percent in 1995, and projected to increase to more than 60 percent in 2000 and to about 80 percent in 2010. 
As our report states, by including other factors in the decision process, such as projected cremation demand, availability and cost of land, and availability of grave sites at state veterans’ cemeteries, officials may identify additional national cemeteries that warrant the establishment of columbaria. NCS also provided technical comments in an attached white paper. Comments 1 through 3 repeat points made in the letter. Comments 4 and 5 question the results of our analysis of the cost of extending the service period of existing cemeteries, since it was based on the maximum number of burial sites available in an acre of land. Specifically, NCS commented that it may not be feasible to devote a single 1-acre plot entirely to columbarium niches because using the “absolute maximum” would not allow space between structures. However, in our analysis we did not envision a single 1-acre plot. Rather, we assumed several parcels of land dispersed around the cemetery that totaled 1 acre of available burial space. Accordingly, we have revised our discussion to clarify this issue. Comment 6 questions our assumption that first family member interments would be evenly spaced over 30 years for all three modes of burial. Specifically, NCS suggests an analysis in which the annual interment rates are assumed to differ for the three alternatives (casket, in-ground cremains, and columbarium burials), reflecting current use patterns. However, our objective was to perform a cost comparison. For a valid cost comparison, the alternatives being compared must be evaluated in terms of the same outcome—in this case, to inter a given number of eligible veterans and their dependents according to a given schedule. The specific assumption we adopted—evenly spaced first family member interments for all alternatives—was previously suggested to us by NCS, and our analysis is similar to the one NCS used in its 1996 study. The type of analysis that NCS is now suggesting is outside the scope of our work. 
NCS offered other technical comments, which we incorporated where appropriate. NCS’ comments are included in their entirety in appendix III. We are sending copies of this report to the Secretary of Veterans Affairs and other interested parties. This work was performed under the direction of Irene Chu, Assistant Director. If you or your staff have questions about this report, please contact Ms. Chu or me on (202) 512-7101. Other major contributors to this report are listed in appendix IV. In this appendix we discuss the methodology, data sources, and principal assumptions that we used to characterize the relative long-term cost of each of three modes of interment: casket, in-ground cremains, and columbarium; project the outlays that would be required to construct and operate a cemetery that offers each of these modes of interment over a period of 30 years or more; and estimate the cost of these three types of interment on the basis of the development of a total of 1 acre of land composed of parcels of land not contiguous to each other in a cemetery nearing depletion of available burial sites. Our analysis builds on a study that the National Cemetery System (NCS) performed at the request of the Chairman, Subcommittee on Compensation, Pension, Insurance and Memorial Affairs, in February 1996. In that study, the Department of Veterans Affairs (VA) presented an analysis of the relative costs of casket and columbarium burial over a 20-year period. 
For the purpose of this report, we have updated and extended the NCS analysis, most notably by adding in-ground cremains burial as a third alternative, as requested; analyzing costs over 30 years or more, thus recognizing that cost differences among the modes of interment will persist far into the future; analyzing the relative long-term costs of the three alternatives in the context of using available space in existing cemeteries, as well as in the context of developing new cemeteries; and using the present value method to evaluate the relative long-term costs of the three alternatives. Simple comparisons of cumulative outlays for the several modes of interment (casket, in-ground cremains, and columbarium) would provide a misleading picture of the relative costs of the respective options because the modes differ in the relative share of total cost that is incurred in the first years. Moreover, a dollar paid by the government today is more costly than a dollar paid at some future date, because it increases the burden of making interest payments on the national debt. It is standard practice among policy analysts to compare different payment streams by calculating the present value (also known as the lump-sum equivalent) of each stream. We developed two models. The first model was used to estimate the long-term cost of alternative burial modes in a new cemetery. The second model was used to estimate the long-term cost of alternative uses of available space in an existing cemetery. Each model consisted of three basic components: simulating the sequence of events whereby a cemetery is opened and burial sites are developed, placed into service, and maintained; attaching estimated costs to each of these events, so as to create a trajectory of costs over the whole time period; and calculating the present values of cost streams associated with each of the options being evaluated.
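The present value comparison just described can be sketched in a few lines. This is an illustrative sketch, not the model itself: the 3.21 percent real discount rate is the one stated later in this appendix (the 6.71 percent nominal Treasury rate less the 3.5 percent projected inflation rate), while the example cost streams are invented purely to show why front-loaded outlays cost more in present value terms.

```python
# Present value (lump-sum equivalent) of a stream of annual outlays.
# The rates below are from the report; the example streams are hypothetical.
NOMINAL_RATE = 0.0671   # 30-year Treasury rate, June 1997
INFLATION = 0.035       # SSA long-term inflation projection
REAL_RATE = NOMINAL_RATE - INFLATION  # 0.0321, the report's discount rate

def present_value(outlays, rate=REAL_RATE):
    """Discount outlays (paid at the end of years 1, 2, ...) to year 0."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(outlays, start=1))

# Two streams with the same undiscounted total of $500:
front_loaded = [300, 100, 100]  # heavy up-front outlay, as with a columbarium
level = [500 / 3] * 3           # the same total, spread evenly
# The front-loaded stream has the higher present value, because a dollar
# paid today is more costly to the government than a dollar paid later.
```

This is why cumulative outlays alone would mislead: the three interment modes incur very different shares of their total cost in the early years.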
Assumptions and Data
We developed the assumptions and specified the data to be collected in consultation with NCS experts. Except as noted below, NCS officials supplied the data. We did not verify all of the data. What follows is, first, a description of the elements of the model for the analysis of the costs of a new cemetery designed for 50,000 burial sites, with burials to take place over a 30-year period. Second, we describe how we modified the data and assumptions for the second model, which analyzes the cost of adding to an existing cemetery. Land acquisition. We assumed that all land acquisition and development of architectural master plans and environmental impact statements would occur in the first year. Development of burial sites. NCS officials told us that burial sites would be developed in three phases, each of which would result in one-third (about 16,700) of the total number of burial sites. The first phase would occur in the second and third years. The second phase would occur in the eleventh through thirteenth years. The third phase would take place in the twenty-first through twenty-third years. Each of the three phases would involve outlays for design, land development, and equipment acquisition (see below). The construction of buildings would occur during the first two phases. First family member interments. Per NCS guidance, we assumed that first family member interments would commence in the fourth year and that they would be evenly spaced over the next 30 years (that is, there would be 1,667 first family member interments per year). Subsequent interments. We used the assumption, supplied by NCS officials, that subsequent interments would initially make up 2 percent of first family member interments and would increase linearly over time, so that in the thirtieth year (that is, the thirty-third year of the period of analysis), subsequent interments would make up 60 percent of first interments.
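The interment assumptions above can be turned into an explicit year-by-year schedule. The following is our hypothetical reconstruction, not NCS code; the linear ramp for subsequent interments (2 percent of first interments in the first burial year, 60 percent in the 30th) follows the assumption just described.

```python
# Hypothetical reconstruction of the model's interment schedule:
# 1,667 first family member interments in each of 30 burial years, with
# subsequent interments rising linearly from 2% to 60% of first interments.
FIRST_PER_YEAR = 1667
BURIAL_YEARS = 30

def subsequent_share(burial_year: int) -> float:
    """Linear ramp: 0.02 in burial year 1, 0.60 in burial year 30."""
    return 0.02 + (0.60 - 0.02) * (burial_year - 1) / (BURIAL_YEARS - 1)

schedule = [
    {
        "burial_year": y,
        "first_interments": FIRST_PER_YEAR,
        "subsequent_interments": round(FIRST_PER_YEAR * subsequent_share(y)),
    }
    for y in range(1, BURIAL_YEARS + 1)
]
```

Note that 30 years of first family member interments at this rate (about 50,000 in total) is what fills the 50,000-site cemetery the models assume.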
Development and construction costs. These include the cost of site acquisition, site development (conducting environmental impact assessments, obtaining architect/engineer design services, and developing land), and construction of buildings (administration and maintenance facilities). Site acquisition. According to NCS officials, land in the vicinity of the Tahoma National Cemetery costs $10,000 per acre. They told us that a cemetery exclusively devoted to casket burial would require 114.8 acres, of which 57.4 acres would be used for grave sites and 57.4 acres for infrastructure (parking lots, driveways, buildings, landscaping, and so on). A cemetery devoted exclusively to in-ground cremains burial would require 34.3 acres (10.3 acres for burial sites and 24.0 acres for infrastructure). An all-columbarium cemetery would require 14.25 acres (0.57 acre for columbaria and 13.68 acres for infrastructure). Site development. The estimated cost for the environmental assessment aspect of site development is $100,000 for a casket cemetery, $17,150 for an in-ground cremains cemetery, and $7,250 for a columbarium cemetery. These estimates reflect NCS’ experiences with similar projects in the past. The architect/engineer design cost category covers such services as carrying out a topographic survey, an archeological exploration, and traffic impact studies. The cost of architect/engineer design services is assumed to be proportional to construction costs (land development plus buildings). The estimated cost of these services for phase 1 is $545,414 for the casket alternative, $246,249 for in-ground cremains sites, and $862,233 for columbaria. For phases 2 and 3, costs would be lower. Land development costs include site preparation (for example, grading; landscaping; and providing irrigation, roads, storm drainage, and utilities) and purchasing site furnishings (for example, benches and flagpoles). The estimated cost of land development is $102,298 per acre for all modes of interment.
Thus, land development costs for the three alternatives are proportional to their respective acreage requirements, discussed above. Under each alternative, one-third of the total acreage would be developed in each of the three phases (years 2 through 3, 12 through 13, and 22 through 23). For a casket cemetery, outlays would amount to $3.91 million in each phase. For an in-ground cremains cemetery, the estimated cost is $1.17 million per phase. For a columbarium cemetery, the estimated cost is $0.49 million per phase. Construction of buildings. Buildings that would be constructed in phase 1 include a public information building, an administration building, a maintenance building, a vehicle storage building, and two committal service shelters. An additional committal service shelter would be constructed in phase 2. The three alternatives have different requirements for the size of the maintenance and vehicle storage buildings. Columbarium niches would be constructed in each phase, giving this mode the highest total construction cost. Operations and maintenance costs. These include (1) the cost of purchasing initial and subsequent equipment; (2) salary and benefits for personnel to handle administration and interment issues (drafting contracts and correspondence; handling public inquiries, ceremonies, and outreach; scheduling burial services; opening/closing grave sites or niches; interring casket or cremated remains; setting headstones or placing markers; and restoring burial sections); (3) the cost of purchasing nonlabor items (fertilizer, seeds, headstones, markers, and grave liners); and (4) the cost of maintenance activities (keeping the grounds and facilities). Equipment. VA provided estimates of the equipment costs for the three modes. The initial costs were $736,674 for caskets, $443,003 for in-ground cremains sites, and $91,664 for columbaria—all purchased in year 3 of the first phase. Subsequent equipment purchases were assumed to be equal and to occur in year 3 of phases 2 and 3.
We estimated their cost at $150,000 for caskets, $90,000 for in-ground cremains sites, and $18,000 for columbaria. Labor associated with administration and interments. We assumed that it would require 7.3 full-time-equivalent (FTE) general schedule (GS) employees, at an annual rate (pay and benefits) of $45,216 each, plus 6.7 FTE wage grade (WG) employees at a rate of $35,085 each, to conduct the 1,667 interments that are projected for each year under all three burial modes. VA said that the GS administrative and interment requirements would be the same for all three modes but that the WG labor associated with each mode would vary. According to NCS assumptions, the WG labor required for casket burials was 6.7 FTEs. We had to develop our own estimate—3 FTEs for in-ground cremains sites and .56 FTE for columbarium niches—because VA had no specified ratio for WG labor for the noncasket modes. We assumed subsequent interments would require a prorated amount of labor. That is, if subsequent interments in a given year are estimated to be 20 percent of first interments, we assumed that labor costs associated with subsequent interments would be equal to 20 percent of the labor costs associated with first interments. Put differently, we assumed that each subsequent interment would require as much labor as each first interment. Nonlabor costs. These costs include the costs of irrigating and purchasing fertilizer, seed, and other supplies. We used VA estimates to derive amounts for this category of costs. The amounts are small and proportional to the acreage developed. For the casket model, the nonlabor costs would be $389,000 in phase 1, increasing by $95,500 in phases 2 and 3 to a total of $580,000 by the 24th year. For in-ground cremains sites, we adjusted the cost in phase 1 by the ratio of acreage to arrive at a cost of $117,000, rising by $28,500 in phases 2 and 3 to a total of $174,000 in the 24th year (with rounding). 
For columbaria, the initial nonlabor cost was $57,000, rising by $14,000 in phases 2 and 3 to a total of $85,000 in years 24 through 33. Outlays for headstones and markers are proportional to the number of first interments in a given year. These costs vary depending on the area of the country in which the headstones and markers are purchased. For this analysis, we used the middle price in the range of prices VA said they pay. For a casket burial, we assumed a headstone cost of $120; for an in-ground cremains burial, we assumed a grave marker cost of $70; and for a columbarium burial, we assumed a niche cover cost of $15. Casket burials require grave liners, at an estimated cost of $240 apiece. Labor associated with maintenance. VA uses the standard of 1 FTE per 10.7 developed acres for casket cemeteries. Using this ratio, under the casket scenario, we estimated that maintenance of developed acreage would require 3.5 WG FTEs during phase 1 (years 4 through 13), 7 FTEs during phase 2 (years 14 through 23), and 10.5 FTEs during phase 3 (years 24 through 33), at the annual pay rates stated above. We adjusted these WG labor requirements for the fewer acres in the other modes. For in-ground cremains burials, we estimated that maintenance of developed grave sites would require 1.1 FTEs during phase 1 and an additional 1.1 FTEs during phases 2 and 3. For columbaria, we estimated that maintenance of developed grave sites would require .4 FTE during phase 1, .9 FTE during phase 2, and 1.3 FTEs during phase 3. Further, there would also be labor costs associated with the maintenance of burial sites that have already been placed in service (that is, in which there has been a first family member interment). VA uses an estimate of 1 FTE per 7,844 developed grave sites in its planning for new cemeteries. Using this ratio, it would require about .2 FTE a year for the 30-year burial period in a casket cemetery. We adjusted this amount to reflect the lesser acreage of the other modes.
For in-ground cremains sites, .04 FTE per year would be required; for columbaria, .002 FTE would be required. The cost differences among the three alternatives are proportional to the differences in the number of burial acres (as opposed to infrastructure acres) that each alternative requires. For each alternative, grave site maintenance costs would increase linearly for each succeeding year, because we assumed that the same number of first family member interments (1,667) would take place each year. We also analyzed the relative long-term cost of each of the three alternatives as it applied to extending the service period of an existing cemetery. For this model, we adopted the same assumptions, and used the same data, as for the model we used to analyze the long-term cost of a new cemetery, with the following modifications: We assumed the existence of an acre of land that had already been acquired—an acre composed of parcels of land that were not contiguous to each other—so that the cost of land acquisition was zero for all three alternatives. Similarly, we assumed that such costs as environmental assessment, architect/engineer design, land development, and construction of administration and maintenance buildings had already been incurred for the casket and in-ground cremains site estimates. We assumed that for columbaria, it would be necessary to incur the cost of constructing a set of niches, including architect/engineer design costs. For each of the three alternatives, we assumed that a total of 1 acre of land, pieces of which were not contiguous to each other, could be devoted to burial sites. That is, we assumed that the cemetery’s infrastructure (for example, roads) was complete and that there were no other obstacles (such as irregular topography) to the full use of the acre for burial sites. Thus, we assumed the theoretical maximum number of interment sites: 871 for caskets; 4,840 for in-ground cremains sites; and 87,000 for columbaria. 
Only costs that are incurred up to the time that the acre is closed to further first family member interments are accounted for. Because, as noted above, each of the three alternatives permits a different number of interment sites per acre, and because we are assuming that first family member interments will take place at a rate of 1,667 per year, the time at which the acre’s first family member interment sites are full will be different under the three alternatives (0.52 years for caskets; 2.9 years for in-ground cremains sites; and 52.2 years for columbaria). This simplifying assumption leads to an understatement of the cost of casket burial relative to that of the other alternatives, all other things equal. Future changes in cost factors. All costs are expressed in 1997 dollars. We assumed that although the costs of labor and materials could rise in the future, the relative prices would remain unchanged. Discount rate. We used a (real) discount rate of 3.21 percent. This rate is based on (1) a (nominal) long-term cost to the government of borrowing 6.71 percent, as represented by the interest rate on 30-year Treasury securities as of June 1997, and (2) a long-term inflation rate projection of 3.5 percent that was prepared by the Social Security Administration (SSA). Period of analysis. As agreed with your office, we analyzed cost data over a period that ends 30 years after the first interments (that is, 33 years), at which time the cemeteries are assumed to be full. Ideally, a cost analysis would consider the entire useful life of the project, given that differences in operating costs among the three modes of interment would persist even if there was no new development of burial sites or new first family member interments. For a cemetery, this time period is indefinite. 
Accordingly, we performed a sensitivity analysis in which the present value of costs for the three modes of interments was evaluated over a period of 53 years (that is, until 20 years had elapsed since the last first family member interments). We found that when costs were evaluated over the longer period, the cost would be $58.4 million for casket burial, $24.1 million for in-ground cremains burial, and $24.8 million for columbarium burial. The differences between costs for the 33-year and 53-year periods reflect differences in operating costs across the three modes of interment, especially the fact that columbaria would require far less costly maintenance than the other two types of interment. We provided information on a cemetery providing only casket interment, another providing only interment of cremated remains in columbarium niches, and a third providing interment of in-ground cremated remains. For each type of cemetery, this appendix provides 30-year undiscounted and present value cost estimates in 1997 dollars for development and construction and operations and maintenance. We also projected the cash outlays that would be required to construct and operate a cemetery that offered each of these modes of interment over a 30-year period (see fig. II.1). Costs were based on actual figures obtained from the most recent NCS construction project—Tahoma National Cemetery. The following tables present detailed data for each type of cemetery we analyzed. Table II.1: Cost Summary for a Cemetery Offering Only Casket Burial [table not reproduced here; its notes state that nonlabor costs include the cost of purchasing such items as grass seed, pest control, grave liners, and headstones or markers. The corresponding tables for the columbarium-only and in-ground cremains-only cemeteries are likewise not reproduced; their notes state that nonlabor costs include the cost of purchasing such items as grass seed, pest control, and niche covers, and such products as grass seed, pest control, and markers, respectively.] Donald C. Snyder, Assistant Director (Economist), (202) 512-7204; Jaqueline Hill Arroyo, Evaluator-in-Charge, (202) 512-6753; Jeffrey Pounds, Evaluator; Timothy J. Carr, Senior Economist. | Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs' (VA) National Cemetery System (NCS), focusing on: (1) NCS' plans for addressing veterans' future burial demands; (2) the relative 30-year costs of three types of cemeteries: casket-only interment, cremated interment in columbarium niches, and in-ground interment of cremated remains; and (3) what NCS can do to extend the service period of existing national cemeteries.
GAO noted that: (1) NCS projects that demand for veterans' burial benefits will increase; (2) NCS has adopted a 5-year strategic plan with the goal of ensuring that burial in a national or state veterans' cemetery is an available option for all veterans and their eligible family members; (3) strategies outlined in NCS' plan include: (a) establishing five new national cemeteries; (b) developing available space for cremated remains; (c) acquiring contiguous land at existing cemeteries; and (d) encouraging states to provide additional burial sites through participation in the State Cemetery Grants Program; (4) the strategic plan does not tie its goals to external factors, such as the mortality rate for veterans and veterans' relative preferences for burial options, that will affect the need for additional cemetery capacity; (5) it is unclear how NCS will address burial demand during the peak years when pressure on it will be greatest, since NCS has not developed a strategic plan for beyond 2000; (6) according to NCS' Chief of Planning, beyond 2000, NCS will continue using the basic strategies outlined in its current 5-year plan; (7) NCS plans to encourage states to establish veterans' cemeteries in areas where it does not plan to operate national cemeteries; (8) fewer than half of the states have established veterans' cemeteries; (9) states also have shown limited interest in a legislative proposal to increase state participation by increasing the share of federal funding; (10) GAO estimated the present value of the costs of three types of cemeteries, each with 50,000 burial sites, over a 30-year period; (11) planning, designing, constructing, and operating a cemetery of casket grave sites and no other burial options would be the most expensive interment option available; (12) the costs for a cemetery that offered only a columbarium and one that offered only in-ground cremains sites would be about the same; (13) while the cost of a casket-only cemetery would be over 
$50 million, the cost of a cremains-only cemetery would be about $21 million; (14) while the majority of veterans and eligible family members prefer a casket burial, cremation is an acceptable interment option for many, and the demand for cremation continues to increase; (15) as annual interments increase, cemeteries will reach their burial capacity, increasing the importance of making the most efficient use of available cemetery space; and (16) GAO's analysis of three interment options showed that columbaria offer the most efficient interment option because they would involve the lowest average burial cost and would significantly extend a cemetery's service period. |
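The sensitivity of the cost comparison to the evaluation period (33 years versus 53 years) comes down to present-value arithmetic on annual operating costs. A minimal sketch, using an assumed discount rate and purely illustrative annual maintenance figures (not the report's actual inputs):

```python
def present_value(annual_cost, years, rate=0.05):
    # Discount a constant annual operating-cost stream back to year 0.
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

# Illustrative figures only (assumptions): casket grave sites are taken to
# cost far more to maintain each year than columbarium niches, so a longer
# evaluation period widens the present-value gap between the two.
casket_33 = present_value(2_000_000, 33)
casket_53 = present_value(2_000_000, 53)
columbarium_33 = present_value(400_000, 33)
columbarium_53 = present_value(400_000, 53)
```

Because the annuity factor grows with the horizon, the same per-year maintenance difference produces a larger total gap over 53 years than over 33, which is the pattern the report describes.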
In February 2012, we reported that the increased seigniorage resulting from replacing $1 notes with $1 coins could potentially offer $4.4 billion in net benefits to the government over 30 years. We determined that seigniorage was the sole source of the net benefits and not lower production costs due to switching to the coin, which lasts much longer than a note. Seigniorage is the financial gain the federal government realizes when it issues notes or coins because both forms of currency usually cost less to produce than their face value. This gain equals the difference between the face value of currency and its costs of production, which reflects a financial transfer to the federal government because it reduces the government's need to raise revenues through borrowing. With less borrowing, the government pays less interest over time, resulting in a financial benefit. The replacement scenario of our 2012 estimate assumed the production of $1 notes would stop immediately, followed by a 4-year transition period during which worn and unfit $1 notes would gradually be removed from circulation. Based on information provided by the Mint, we also assumed that the Mint would convert existing equipment to increase its production capability for $1 coins during the first year and that it would take 4 years for the Mint to produce enough coins to replace the currently outstanding $1 notes. Our assumptions covered a range of factors, but key among these was a replacement ratio of 1.5 coins to 1 note to take into consideration the fact that coins circulate with less frequency than notes and therefore a larger number are required in circulation. Other key assumptions included the expected rate of growth in the demand for currency over 30 years, the costs of producing and processing both coins and notes, and the differential life spans of coins and notes.
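The seigniorage mechanism and the 1.5-to-1 replacement ratio described above can be illustrated with a short sketch. Only the face value and the replacement ratio come from the text; the quantities and unit costs below are placeholders, not GAO's figures.

```python
def seigniorage(face_value_cents, unit_cost_cents, quantity):
    # Financial gain to the government when currency is issued:
    # face value received minus the cost of producing it.
    return (face_value_cents - unit_cost_cents) * quantity

# Replacement scenario: 1.5 coins are needed for each $1 note retired,
# because coins circulate less frequently than notes.
notes_retired = 1_000_000          # placeholder quantity
coins_issued = int(1.5 * notes_retired)

# Net new $1 face value outstanding: the extra half coin per retired
# note. This reduces the government's need to borrow, and the interest
# avoided on that amount is the source of the projected net benefit.
net_new_currency = coins_issued - notes_retired
```

The report's estimate then nets such gains against coin and note production and processing costs and discounts the cash flows over 30 years.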
We projected our analyses over 30 years to be consistent with previous GAO analyses and because that period roughly coincides with the life expectancy of the $1 coin. As shown in figure 1, we found that the net benefit accruing each year varied considerably over the 30 years. More specifically, across the first 10 years of our 30-year analysis, replacing the $1 note with a $1 coin would result in a $531 million net loss or approximately $53 million per year in net loss to the government. The early net loss would be due in part to the up-front costs to the Mint of increasing its coin production during the transition, together with the limited interest expense the government would avoid in the first few years after replacement began. This estimate differs from our 2011 estimate, which found that replacement would result in a net benefit of about $5.5 billion over 30 years (an average of about $184 million per year) because the 2012 estimate takes into account two key actions that occurred since our 2011 report, specifically: In April 2011, the Federal Reserve began using new equipment to process notes, which has increased the expected life of the $1 note to an average of 56 months (or 4.7 years), according to the Federal Reserve, compared with the 40 months we used in our 2011 analysis. The longer note life reduces the costs of circulating a note over 30 years and thus reduces the expected net benefits of replacing the $1 note with a $1 coin. In December 2011, the Treasury Department announced that it would take steps to eliminate the overproduction of dollar coins by relying on the approximately 1.4 billion $1 coins stored with the Federal Reserve as of September 30, 2011, to meet the relatively small transactional demand for dollar coins.
This new policy would reduce the cost associated with producing $1 coins that we estimated in the status quo scenario and, therefore, would reduce the net benefit, which is the difference in the estimated costs between the status quo scenario and the replacement scenario. However, like all estimates, there are uncertainties involved in developing these analyses. In particular, while the up-front costs to the Mint of increasing its coin production during the transition are reasonably certain—in large part because they are closer in time—the longer-term benefits, particularly those occurring in the later years, involve greater uncertainty because of unforeseen circumstances that could occur farther into the future. Nonetheless, looking at a longer time period allows for trends to be seen. Moreover, changes to the inputs and assumptions used in our analysis could significantly change the estimated net benefit. For example, in 2011, we compared our status quo scenario to an alternative scenario in which the growing use of electronic payments—such as making payments with a cell phone—results in a lower demand for cash and lower net benefit. If Americans come to rely more heavily on electronic payments, the demand for cash could grow more slowly than we assumed or even decrease. By reducing the public's demand for $1 currency by 20 percent in this alternative scenario, we found that the net benefit to the government would decrease to about $3.4 billion over 30 years. In another scenario, we reported in 2012 that if interest savings because of seigniorage were not considered, a net loss of approximately $1.8 billion would accrue during the first 10 years for an average cost of $179 million per year—or $2.8 billion net loss over 30 years.
While this scenario suggests that there would be no net benefits from switching to a $1 coin, we believe that the interest savings related to seigniorage, which is a result of issuing currency, cannot be set aside because the interest savings reflects a monetary benefit to the government. Our estimates of the discounted net benefit to the government of replacing the $1 note with a $1 coin differ from the method that the Congressional Budget Office (CBO) would use to calculate the impact on the budget of the same replacement. In the mid-1990s, CBO made such an estimate and noted that its findings for government savings were lower than our estimates at that time because of key differences in the two analyses. Most important, budget scorekeeping conventions do not factor in gains in seigniorage in calculating budget deficits. Thus, the interest expense avoided in future years by reducing borrowing needs, which accounts for our estimate of net benefit to the government, would not be part of a CBO budget-scoring analysis. Additionally, CBO's time horizon for analyzing the budget impact is up to 10 years—a much shorter time horizon than we use in our recent analyses. Two factors merit consideration moving forward. The first factor is the effect of a currency change on the private sector. Our 2011 and 2012 reports considered only the fiscal effect on the government. Because we found no quantitative estimates that could be evaluated or modeled, our estimate did not consider factors such as the broader societal impact of replacing the $1 note with a $1 coin or attempt to quantify the costs to the private sector. Based on our interviews with stakeholders representing a variety of cash-intensive industries, we believe that the costs and benefits to the private sector should be carefully weighed since some costs could be substantial. In 2011 we reported that stakeholders identified potential shorter- and longer-term costs that would likely result from the replacement.
Specifically, shorter-term costs would be those costs involved in adapting to the transition such as modifying vending machines, cash-register drawers, and night-depository equipment to accept $1 coins. Such costs would also include the need to purchase or adapt the processing equipment that businesses may need, such as coin- counting and coin-wrapping machines. Longer-term costs would be those costs that would permanently increase the cost of doing business, such as the increased transportation and storage costs for the heavier and more voluminous coins as compared to notes, and processing costs. These costs would likely be passed on to the customer and the public at large through, for example, higher prices or fees. Most stakeholders we interviewed said, however, that they could not easily quantify the magnitude of these costs, and the majority indicated that they would need 1 to 2 years to make the transition from $1 notes to $1 coins. In contrast to the stakeholders who said that a replacement would mean higher costs for their businesses, stakeholders from the vending machine industry and public transit said that the changeover might have only a minimal impact on them. For example, according to officials from the National Automatic Merchandising Association, an organization representing the food and refreshment vending industry, many of its members have already modified their vending machines to accept all forms of payment, including $1 coins. In addition, according to transit industry officials, the impact on the transit industry would be minimal since transit agencies that receive federal funds were required under the Presidential $1 Coin Act of 2005 to accept and distribute $1 coins. The second factor that merits consideration is public acceptance. Our 2012 estimate assumes that the $1 coin would be widely accepted and used by the public. 
In 2002, we conducted a nationwide public opinion survey, and we found that the public was not using the $1 coin because people were familiar with the $1 note, the $1 coin was not widely available, and people did not want to carry more coins. However, when respondents were told that such a replacement would save the government about half a billion dollars a year (our 2000 estimate), the proportion who said they opposed elimination of the note dropped from 64 percent to 37 percent. Yet, two more recent national-survey results suggest that opposition to eliminating the $1 note persists. For example, according to a Gallup poll conducted in 2006, 79 percent of respondents were opposed to replacing $1 notes with $1 coins, and their opposition decreased only slightly, to 64 percent, when they were asked to assume that a replacement would result in half a billion dollars in government savings each year. We have noted in past reports that efforts to increase the circulation and public acceptance of the $1 coins—such as changes to the color of the $1 coin and new coin designs—have not succeeded, in part, because the $1 note has remained in circulation. Over the last 48 years, Australia, Canada, France, Japan, the Netherlands, New Zealand, Norway, Russia, Spain, and the United Kingdom, among others, have replaced lower-denomination notes with coins. The rationales for replacing notes with coins cited by foreign government officials and experts include the cost savings to governments derived from lower production costs and the decline over time of the purchasing power of currency because of inflation. For example, Canada replaced its $1 and $2 notes with coins in 1987 and 1996, respectively. Canadian officials determined that the conversion to the $1 coin saved the Canadian government $450 million (Canadian) between 1987 and 1991 because it no longer had to regularly replace worn out $1 notes. 
However, Canadian $1 notes did not last as long as $1 notes in the United States currently do. Stopping production of the note and actions to overcome public resistance have been important in Canada and the United Kingdom as the governments transitioned from a note to a coin. While observing that the public was resistant at first, Canadian and United Kingdom officials said that with the combination of stakeholder outreach, public relations efforts, and ending production and issuance of the notes, public dissatisfaction dissipated within a few years. Canada undertook several efforts to prepare the public and businesses for the transition to the coin. For example, the Royal Canadian Mint reached out to stakeholders in the retail business community to ensure that they were aware of the scope of the change and surveyed public opinion about using coins instead of notes and the perceived impact on consumer transactions. The Canadian Mint also proactively worked with large coin usage industries, such as vending and parking enterprises, to facilitate conversion of their equipment, and conducted a public relations campaign to advise the public of the cost savings that would result from the switch. According to Canadian officials, the $1 and $2 coins were the most popular coins in circulation and were heavily used by businesses and the public. In our analysis of replacing the $1 note with a $1 coin, we assumed that the U.S. government would conduct a public awareness campaign to inform the public during the first year of the transition and assigned a value of approximately $7.8 million for that effort. In addition, some countries have used a transition period to gradually introduce new coins or currency. For example, the United Kingdom issued the £1 coin in April 1983 and continued to simultaneously issue the £1 note until December 1984. Similarly, Canada issued the $1 coin in 1987 and ceased issuing the $1 note in 1989. 
In our prior reports, we recommended that Congress proceed with replacing the $1 note with the $1 coin. We continue to believe that the government would receive a financial benefit from making the replacement. However, this finding comes with several caveats. First, the costs are immediate and certain while the benefits are further in the future and more uncertain. The uncertainty comes, in part, from the uncertainty surrounding key assumptions like the future demand for cash. Second, the benefits derive from seigniorage, a transfer from the public, and not a cost-saving change in production. Third, these are benefits to the government and not necessarily to the public at large. In fact, public opinion has consistently been opposed to the $1 coin. Keeping those caveats in mind, many other countries have successfully replaced low denomination notes with coins, even when initially faced with public opposition. Chairman Paul, Ranking Member Clay, and members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions at this time. For further information on this testimony, please contact Lorelei St. James, at (202) 512-2834 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Teresa Spisak (Assistant Director), Lindsay Bach, Amy Abramowitz, Patrick Dudley, and David Hooper. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
| Since coins are more durable than notes and do not need replacement as often, many countries have replaced lower-denomination notes with coins to obtain a financial benefit, among other reasons. Six times over the past 22 years, GAO has reported that replacing the $1 note with a $1 coin would provide a net benefit to the federal government of hundreds of millions of dollars annually. This testimony provides information on what GAO's most recent work in 2011 and 2012 found regarding (1) the net benefit to the government of replacing the $1 note with a $1 coin, (2) stakeholder views on considerations for the private sector and the public in making such a replacement, and (3) the experiences of other countries in replacing small-denomination notes with coins. This testimony is based on previous GAO reports. To perform that work, GAO constructed an economic model to assess the net benefit to the government. GAO also interviewed officials from the Federal Reserve and Treasury Department, currency experts, officials from Canada and the United Kingdom, and representatives of U.S. industries that could be affected by currency changes. GAO reported in February 2012 that replacing $1 notes with $1 coins could potentially provide $4.4 billion in net benefits to the federal government over 30 years. The overall net benefit was due solely to increased seigniorage and not to reduced production costs. Seigniorage is the difference between the cost of producing coins or notes and their face value; it reduces government borrowing and interest costs, resulting in a financial benefit to the government. GAO's estimate takes into account processing and production changes that occurred in 2011, including the Federal Reserve's use of new equipment to determine the quality and authenticity of notes, which has increased the expected life of the note thereby reducing the costs of circulating a note over 30 years. (The $1 note is expected to last 4.7 years and the $1 coin 30 years.)
Like all estimates, there are uncertainties surrounding GAO's estimate, especially since the costs of the replacement occur in the first several years and can be estimated with more certainty than the benefits, which are less certain because they occur further in the future. Moreover, changes to the inputs and assumptions GAO used in the estimate could significantly increase or decrease the results. For example, if the public relies more heavily on electronic payments in the future, the demand for cash could be lower than GAO estimated and, as a result, the net benefit would be lower. In March 2011, GAO identified potential shorter- and longer-term costs to the private sector that could result from the replacement of the $1 note with a $1 coin. Industry stakeholders indicated that they would initially incur costs to modify equipment and add storage and that later their costs to process and transport coins would increase. However, others, such as some transit agencies, have already made the transition to accept $1 coins and would not incur such costs. In addition, for such a replacement to be successful, the $1 coin would have to be widely accepted and used by the public. Nationwide opinion polls over the last decade have indicated lack of public acceptance of the $1 coin. Efforts to increase the circulation and public acceptance of the $1 coins have not succeeded, in part, because the $1 note has remained in circulation. Over the last 48 years, many countries, including Canada and the United Kingdom, have replaced low denomination notes with coins because of expected cost savings, among other reasons. The Canadian government, for example, saved $450 million (Canadian) over 5 years by converting to the $1 coin.
Canada and the United Kingdom found that stopping production of the note combined with stakeholder outreach and public education were important to overcome public resistance, which dissipated within a few years after transitioning to the low denomination coins. GAO has recommended in prior work that Congress replace the $1 note with a $1 coin. GAO continues to believe that replacing the $1 note with a coin is likely to provide a financial benefit to the federal government if the note is eliminated and negative public reaction is effectively managed through stakeholder outreach and public education. |
The Davis-Bacon Act was enacted in 1931, in part, to protect communities and workers from the economic disruption caused by contractors hiring lower-wage workers from outside their local area, thus obtaining federal construction contracts by underbidding competitors who pay local wage rates. Davis-Bacon generally requires employers to pay locally prevailing wages and fringe benefits to laborers and mechanics employed on federally funded construction projects in excess of $2,000. The Recovery Act requires all laborers and mechanics employed by contractors and subcontractors on projects funded directly or assisted by the federal government through the Recovery Act also be paid at least the prevailing wage rate under Davis-Bacon. Our previous work found 40 programs, such as the Weatherization Assistance Program, newly subject to Davis-Bacon requirements as a result of the Recovery Act's prevailing wage provision. Of these, 33 programs existed prior to the Recovery Act but were subject to the Davis-Bacon requirements for the first time, and 7 were newly created programs. In 2009, federally funded construction and rehabilitation, including projects funded through the Recovery Act, totaled about $220 billion. Labor administers the Davis-Bacon Act through its Wage and Hour Division, which conducts voluntary surveys of construction contractors and interested third parties on both federal and nonfederal projects to obtain information on wages paid to workers in each construction job classification by locality. It then uses the data submitted on these survey forms to determine local prevailing wage and fringe benefit rates. In 2002, Labor began conducting simultaneous statewide surveys for all four of its construction types: highway, residential, building, and heavy.
Labor describes highway construction as the construction, alteration, or repair of roads, streets, highways, runways, alleys, trails, parking areas, and other similar projects not incidental to building or heavy construction. Residential construction includes single-family homes and apartment buildings that are not more than four stories. If a structure that houses people is over four stories or if it houses machinery, equipment, or supplies, it is considered building construction. Heavy construction generally includes any project that does not fall into the other three categories—for example, dam and sewer projects. Labor determines which states it will survey each year based on a variety of factors, including the date of a state’s most recent survey, planned federal construction, and complaints or requests from interested parties on current wage determinations. The calculated wage and fringe benefit rates that result from the surveys are posted online in wage determinations and used by contractors working on federal construction projects to prepare bids and pay workers. Both GAO and the Labor OIG have reported concerns with Labor’s wage determination process. In 1996, we found Labor had internal control weaknesses that contributed to lack of confidence in the wage determinations, including limitations in Labor’s verification of wage and fringe benefit data, its computer capabilities, and an appeals process that was difficult for interested parties to access. In 1997, the OIG found much of the data it examined to be inaccurate and potentially biased due to weaknesses in survey methodology. For fiscal year 1997, Congress directed $3.75 million toward improvements to the wage determination process. 
Using five criteria—feasibility/viability, timeliness, accuracy, completeness, and cost—Labor evaluated two options: Reengineering: Apply new technologies and processes to the existing Davis-Bacon survey program to increase participation in and improve the accuracy and timeliness of the surveys. Reinvention: Use existing Bureau of Labor Statistics (BLS) data, specifically data from BLS’s Occupational Employment Statistics survey and National Compensation Survey, as the primary basis for Davis-Bacon wage determinations. In 1999, as Labor was evaluating these options, we again reviewed the wage determination process and found, in response to a directive from a congressional committee and our recommendation, Labor had implemented a program to verify a sample of wage survey data, including verifying data on site using employer payrolls. However, we agreed with the OIG that verification efforts be viewed as temporary steps until more fundamental reforms could be made to Labor’s survey methodology. We also found that reengineering or reinvention had the potential to improve the accuracy and timeliness of the wage determination process. In January 2001, Labor reported to Congress it would pursue reengineering. Labor concluded that reinvention (using BLS data) would have the benefits of accuracy and timeliness, but presented challenges, including difficulty in determining fringe benefits and in producing wage estimates for a broad range of construction job classifications. Reengineering, which included improvements to the wage survey form (including a scannable form and online version) and a computer system to assist with data clarification and analysis, would make it feasible to survey every area of the country for all four construction types no less than every 3 years, Labor concluded. In 2004, the Labor OIG found Labor’s reengineering had not resolved past concerns. 
In a sample of wage survey forms (known as WD-10s) from before and after reengineering, the OIG found errors in almost 100 percent of verified survey forms. The OIG said these errors occurred even with a revised WD-10, the introduction of an online WD-10, and efforts by Labor analysts to review and correct data. Mistakes in survey data included respondents using incorrect peak weeks, miscounts in the number of workers in each job classification, and misreporting of wage rates—for example, reporting one wage rate for a job classification when two or more wage rates existed. In addition, the OIG reported concerns about bias because only contractors with the personnel to complete WD-10s may respond and some may not participate to avoid involvement with the government. The OIG also found that higher participation by either unions or nonunion contractors could potentially weight the wage and benefit rates in their favor. Finally, the OIG noted there had been little improvement since its 1997 review in the time required to issue wage determinations. The current survey process, which conducts statewide surveys for all construction types, consists of five basic phases (see fig. 1). Prior to the start of a survey, Labor identifies the state, construction types, and survey time frame—the time period in which a construction project needs to be active to meet survey criteria—and requests that CIRPC provide a report on active construction projects for the identified time frame, construction type, and geographical area. F.W. Dodge Reports for those projects are then ordered and reviewed to ensure they meet the basic criteria of the survey. Once a survey is scheduled, Labor usually conducts pre-survey briefings for interested parties to clarify survey procedures and provide information on how data should be submitted. Labor then sends surveys to general contractors identified through the Dodge Reports and relevant interested parties in the area to be surveyed. (See app. 
II for a copy of the wage survey.) It also requests information from federal agencies on construction projects that meet survey criteria. A follow-up letter is sent to general contractors who do not respond. Subcontractors, identified by the general contractors, are also sent an initial letter with a survey and a follow-up letter if they do not respond. Completed wage survey forms are returned by either contractors or interested parties and are reviewed, under a contract with Labor, by CIRPC, which matches submitted information with its construction project and forwards it to the appropriate Labor regional office. The regional offices clarify missing, ambiguous, or inconsistent information to the extent possible, and pull random samples of wage survey forms to verify by phone or on site. Officials request that supporting payroll documentation be sent to the regional office. For on-site verification, Labor contracts with a private accounting firm whose auditors review payroll records. Any discrepancies between the wage survey form and the contractor's payroll records are reviewed and corrected in the survey data by Labor regional staff. Contractors selected for verification, who are not able or willing to provide payroll records, can still be included in the survey in most cases. See appendix III for a more detailed description of the wage determination process. Labor uses several procedures to calculate wage rates and determine if it has sufficient information from collected and verified surveys to issue a wage determination—a compilation of prevailing wage rates for multiple job classifications in a given area. In determining a prevailing wage for a specific job classification, Labor considers sufficient data to be the receipt of data on at least three workers from two different employers in its designated area who have that job. Then, in accordance with its regulations, Labor uses a “50-percent rule” to calculate the prevailing wage.
The 50-percent rule states the prevailing wage is the wage paid to the majority (over 50 percent) of workers employed in a specific job classification on similar projects in the area. If the same rate is not paid to a majority (over 50 percent) of workers in a job classification, the prevailing wage is the average wage rate weighted by the number of employees for which that rate was reported. In cases where the prevailing rate is also a collectively bargained, or union, rate, the rate is determined to be “union-prevailing.” According to Labor’s policy, union- prevailing wage rates in wage determinations can be updated when there is a new collective bargaining agreement (CBA) without Labor conducting a new survey. Nonunion-prevailing wage rates are not updated until a new survey is conducted. To issue a wage determination for a construction type in a given area, Labor must, according to its procedures, also have sufficient data to determine prevailing wages for at least 50 percent of key job classifications. Key job classifications are those determined necessary for one or more of the four construction types. By statute, Labor must issue wage determinations based on similar projects in the “civil subdivision of the state” in which the federal work is to be performed. Labor’s regulations state the civil subdivision will be the county, unless there are insufficient wage data. When data from a county are insufficient to issue a wage rate for a job classification, a group of counties is created by combining a rural county’s data with data from one or more contiguous rural counties. A metropolitan county’s data are combined with data from other counties in the state within the metropolitan statistical area (MSA). 
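The 50-percent rule and the data-sufficiency test described above are mechanical enough to sketch in code. This is an illustrative reading of the procedure, not Labor's actual implementation:

```python
from collections import Counter

def sufficient_data(reports):
    # reports: (employer, workers_at_that_employer) pairs for one job
    # classification. Labor requires data on at least three workers
    # from at least two different employers.
    total_workers = sum(n for _, n in reports)
    employers = {employer for employer, _ in reports}
    return total_workers >= 3 and len(employers) >= 2

def prevailing_wage(worker_wages):
    # worker_wages: one wage rate per reported worker in a classification.
    n = len(worker_wages)
    rate, count = Counter(worker_wages).most_common(1)[0]
    if count > n / 2:
        return rate  # a single rate paid to a majority of workers prevails
    # No majority: average rate weighted by the number of workers
    # reported at each rate (i.e., a simple mean over all workers).
    return sum(worker_wages) / n
```

For example, prevailing_wage([20.0, 20.0, 20.0, 18.0, 25.0]) returns 20.0, since three of five workers share that rate, while prevailing_wage([20.0, 18.0]) falls back to the weighted average of 19.0.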
If data are still insufficient to issue a wage rate, a supergroup is created by combining a rural county’s data with data from additional contiguous rural counties, or a metropolitan county’s data are combined with county data from other MSAs or the consolidated MSA counties. Finally, if this supergroup still does not provide sufficient wage data to issue a wage rate for a job classification, a statewide rate is created by combining data for all rural counties or all metropolitan counties in the state. Counties are combined based on whether they are metropolitan or rural, and cannot be mixed. Once wage determinations are issued, an interested party may seek reconsideration and review through an appeals process. See figure 2 for an example of how wage data from Miami-Dade County, Florida, are combined, as needed, with data from other counties to create group, supergroup, and state wage rates. Labor has taken several steps over the last few years to address issues with its Davis-Bacon wage surveys, including completing a number of open surveys and changing how it collects and processes some survey data in its efforts to improve timeliness and accuracy. However, these efforts may not achieve Labor’s desired results. We found some surveys initiated under the new process are behind schedule and some published wage rates are based on outdated data. In 2007, Labor officials decided not to initiate any new surveys in order to finalize and publish results from 22 open surveys, which accumulated after Labor began conducting statewide surveys in 2002. Regional office officials said it was difficult and time-consuming to clarify and verify data in these surveys because contractors often did not have easy access to records for survey data which, in some cases, had been submitted several years earlier. As of September 1, 2010, results from 20 of the 22 surveys were published and results from the remaining 2 were in the process of being published. 
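Taken together, the sufficiency standard (data on at least three workers from two different employers) and the county-to-group-to-supergroup-to-statewide escalation described above amount to a sufficiency-driven geographic fallback. The sketch below illustrates that reading; the level names, data structure, and sample survey data are hypothetical, and the real process separates metropolitan and rural counties as noted above.

```python
def is_sufficient(observations):
    """Labor's sufficiency standard: data on at least three workers
    from at least two different employers. `observations` is a list of
    (employer, worker_count) tuples (illustrative representation)."""
    workers = sum(n for _, n in observations)
    employers = len({emp for emp, _ in observations})
    return workers >= 3 and employers >= 2

def rate_level(data_by_level):
    """data_by_level: dict of level name -> cumulative observations,
    ordered from county outward. Returns the first geographic level
    with sufficient data, or None if even statewide data fall short."""
    for level, observations in data_by_level.items():
        if is_sufficient(observations):
            return level
    return None

# County data alone (2 workers, 1 employer) are insufficient; the group
# adds a worker but still has one employer; the supergroup finally meets
# the three-worker/two-employer standard, so the rate is issued there.
survey = {
    "county":     [("A", 2)],
    "group":      [("A", 2), ("A", 1)],
    "supergroup": [("A", 2), ("A", 1), ("B", 2)],
    "statewide":  [("A", 2), ("A", 1), ("B", 2), ("C", 4)],
}
print(rate_level(survey))  # → supergroup
```

This fallback explains the pattern discussed later in this report: when county-level responses are sparse, published rates are increasingly calculated from group, supergroup, or statewide data rather than from the county the regulations designate as the normal civil subdivision.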
Officials said once results from all 22 surveys are published, they will be able to focus on more recent surveys, which will reduce delays in processing and increase accuracy because more recently collected information is easier and less time-consuming to clarify and verify with contractors. Labor also changed how it collects survey data for its four construction types after it conducted an informal review in 2009. Labor officials said they had been using a “one size fits all” approach to surveys and were not accounting for differences in types of construction activity, the demographic characteristics of a given state, and available sources of wage data. To address these differences, Labor began surveying some of its four construction types separately instead of surveying all construction types simultaneously in a given state. Labor also began using certified payrolls as the primary data source for highway surveys. Labor officials said most highway construction has a federal component and certified payrolls provide accurate and reliable wage data. Officials also said using certified payrolls eliminates the need for on-site verification of reported wage data, although Labor continues to survey interested parties. Officials estimate these efforts will reduce processing time for highway surveys by more than 80 percent, or from about 42 months to 8 months. Labor adjusted its survey processes for residential, building, and heavy construction types as well. For surveys of residential construction, Labor plans to phone contractors and unions and visit contractor associations to increase a historically low response. Officials said these collection methods will be possible because of the small number of residential projects compared to other construction types. Labor began conducting a new residential survey in 2010. 
For building and heavy construction, Labor started a pilot with five surveys in 2009, adjusting survey time frames—the time period in which a construction project has to be active for it to meet survey criteria—to better manage the quantity of data received. Labor found its previous 1-year survey time frame produced, in some cases, too many or too few responses for building and heavy surveys. Instead, by adjusting the survey time frame to account for the number of projects in a particular region (with shorter time frames for areas in which there are many active projects), Labor expects to reduce the time needed to process surveys and determine prevailing wages. Overall, Labor estimates these changes will reduce processing time for building and heavy surveys by approximately 54 percent, or from about 37 months to 17 months. Labor also revised its approach to processing data for all surveys. Labor’s regional offices began reviewing and analyzing survey forms when they are received rather than waiting until a survey closes. Labor officials said this processing of data in “real time” will improve timeliness and accuracy because survey respondents will be better able to recall the submitted information when contacted by regional office staff for clarification and verification. While it is too early to fully assess the effects of Labor’s 2009 changes, our review found timeliness is still an issue and improvements expected from processing changes may not be fully realized. Of the 16 surveys started under Labor’s new processes at the time of our review, we were unable to analyze the timeliness of 4—3 highway surveys and 1 building and heavy survey—because of unclear dates in Labor’s data. A senior Labor official said regional offices differed as to when they recorded dates for key survey activities, and we found some recorded dates were out of sequence.
During the course of our review, the senior Labor official said regional offices will consistently enter key dates for future surveys, which will allow Labor to better assess whether new processes are improving timeliness. Of the remaining 12 surveys for which we were able to assess timeliness, 8 were highway surveys for which Labor requested certified payrolls. Of those 8, we found 6 were behind schedule, 1 was on schedule, and 1 had not started as of September 1, 2010 (see fig. 3). A senior Labor official said staff did not immediately start processing all certified payrolls—requested for all federal projects within a specific 1-year period—when they were received because of regional office workloads. As a result, some certified payroll data were months old before Labor surveyed interested parties. For example, as of September 1, 2010, certified payroll data for the Florida 2009 highway survey were 8 months old, though Labor had not yet surveyed interested parties. Moreover, processing certified payrolls may be labor-intensive and time-consuming. A senior Labor official said the agency cannot predict how many certified payrolls will be submitted by state departments of transportation and often receives boxes of documents for each survey. Some regional office officials said extracting information from certified payrolls is difficult because of inconsistent formats and frequently requires clarification with contractors. To address these potential delays, a senior Labor official said they are considering collecting certified payrolls monthly from states with upcoming surveys, and processing the payrolls as they are received. The remaining 4 surveys were building and heavy surveys and all were behind schedule as of September 1, 2010 (see fig. 4). In conducting a “universe” or “census” survey of all active construction projects within a designated time frame and area, Labor accepts data from a variety of sources, including contractors and interested parties.
As a result, the number of returned survey forms and the time required to clarify data can vary widely. For example, for 14 surveys conducted under past processes, the number of survey forms received for each ranged from less than 2,000 to over 8,000, and the average processing time for data clarification and analysis ranged from 10 months to more than 40. After the 2009 changes, Labor estimates survey data clarification and analysis will take about 1 to 7 months, depending on construction type. Some of the anticipated time savings, particularly for building and heavy surveys, is based on managing fewer forms because of its focus on the number of projects in a particular region rather than a 1-year time frame. However, by accepting data submitted by contractors and interested parties on any relevant project as part of its universal survey approach, Labor is limited in its ability to predict how many forms will be returned and the time needed to process them. The more time required, the more likely wage rates will be outdated when published in wage determinations. In addition, Labor cannot entirely control when it receives survey forms. Though Labor officials said processing survey forms as they are received will improve timeliness, some regional office officials told us this “real time” processing approach has a limited effect because the bulk of the forms are returned on the last day of a survey. Additionally, officials in two of the three regional offices we visited said this new approach is not substantially different from their previous procedure. Since our site visits, a senior Labor official said analysts at regional offices have noticed a difference between processing forms in “real time” and their previous procedure, and that increased use of online submissions is expected to help reduce last-minute survey returns. 
To address such challenges, OMB guidance suggests agencies consider the benefits and costs of conducting a sample survey instead of a census survey. According to OMB, a sample can be used to ensure data quality in a way that is often more efficient and economical than a census. The fact that Labor is behind schedule on surveys even with the new 2009 processes may affect the agency’s ability to update the many published nonunion-prevailing wage rates, which are several years old. Labor’s fiscal year 2010 performance goal was for 90 percent of published wage rates for building, heavy, and highway construction types to be no more than 3 years old. Our analysis of published rates for these three construction types found 61 percent were 3 years old or less as of November 12, 2010. However, this figure is somewhat misleading because it includes both union-prevailing and nonunion-prevailing wage rates, which differ in how they are updated. Union-prevailing rates, which constitute almost two-thirds of the over 650,000 published building, heavy, and highway rates, may be updated when new CBAs are negotiated, and we found almost 75 percent of those rates were 3 years old or less as of November 12, 2010. However, 36 percent of nonunion-prevailing rates, which are not updated until Labor conducts a new survey, were 3 years old or less, and almost 46 percent were 10 or more years old. One regional office official and two stakeholders we interviewed said Labor, in some cases, has had to update nonunion-prevailing rates without a new survey because they no longer complied with the federal minimum wage. Moreover, wage rates at the time of publication may reflect wage data from several years prior due to processing delays. For example, of the 20 open surveys for which Labor had published results as of September 1, 2010, 9 published in 2009 or 2010 were based on data 5 or more years old at the time of publication and, of those, 3 were based on data 7 or more years old.
Though these survey results were only recently published, the age of the wage data they contain means those states will likely need to be resurveyed soon. Several of the union and contractor association officials we interviewed said the age of the Davis-Bacon nonunion-prevailing rates means they often do not reflect actual prevailing wages. As a result, they said it is more difficult for both union and nonunion contractors to successfully bid on federal projects because they cannot recruit workers with artificially low wages but risk losing contracts if their bids reflect more realistic wages. Labor officials said the only way to correct the age disparity between union- and nonunion-prevailing rates is to conduct surveys more frequently; however, some regional office officials said the goal to survey each area every 3 years is not feasible with current processes. Those who said it is feasible cited the need for adequate technology and staffing, which they said is not in place in all regional offices. Although Labor has made recent changes to data collection and processing, some critical problems with its survey methodology have not been addressed. Our review identified persisting shortcomings in the representativeness of survey results and the sufficiency of data gathered for Labor’s county-focused wage determinations. OMB guidance states that agencies need to consider the potential impact of response rate and nonresponse on the quality of information obtained from a survey, and suggests agencies consult with trained survey methodologists when designing surveys to address this issue. Rather than conducting a formal evaluation of the wage survey process and consulting with experts in survey design and methodology, a senior Labor official said the agency based changes on an informal review that drew on staff experiences. 
While our prior work has shown it is reasonable and desirable to obtain input from knowledgeable staff, technical guidance from experts is considered critical to ensure the validity and reliability of survey results. Labor cannot determine whether its Davis-Bacon survey results are representative of prevailing wage rates because it does not currently calculate response rates or conduct a nonresponse analysis. According to OMB, response rate calculation and nonresponse analysis are important because a low response rate may mean survey results are misleading or inaccurate if those who respond to a survey differ substantially and systematically from those who do not respond. A Labor official said that when the agency started conducting statewide surveys in 2002, it stopped calculating overall response rates because of the large volume of data received and challenges in tracking who submitted specific information. In addition, the official said Labor could not collect enough data to meet its then-standard of data on at least six workers from three different employers for each job classification, so it changed the standard to its current three workers from two employers. This standard can be met using data from a single county, multiple counties within a state, or statewide. Also, aside from a second letter sent automatically to survey nonrespondents, Labor does not currently have a program to systematically follow up with or analyze all nonrespondents. Labor’s own procedures manual recognizes nonresponse as a potential source of survey bias and indicates there is a higher risk nonrespondents will be nonunion contractors because they may have greater difficulty in compiling wage information or be more cautious about reporting wage data. Despite this guidance, regional office officials said they spend the bulk of their time clarifying data received. 
Of Labor’s published wage rates as of November 12, 2010, about 63 percent were union-prevailing; in contrast, about 14 percent of construction workers nationwide were represented by unions in 2010, according to BLS figures. Several of the stakeholders we interviewed said the fact that Labor does not ensure the representativeness of the survey responses reduces the accuracy of the published wage rates. In addition, some regional office officials said statistical sampling may make wage rates more accurate, although they cautioned that some contractors or interested parties may not support a change to sampling if it meant they would be excluded from participating in the survey. During the course of our review, a senior official said Labor is taking steps to again calculate response rates, beginning with updates to the survey database and changes to the survey form, which will more clearly identify who submitted wage information. However, because Labor has not yet fully implemented these changes, it is unclear if they will lead to improving the quality of the survey. Although its regulations state the county will normally be the civil subdivision for which a prevailing wage is determined, Labor is often unable to issue wage rates for job classifications at the county level because it does not collect enough data to meet its current sufficiency standard of wage information on at least three workers from two employers. In the results from the four surveys we reviewed—Florida 2005, Maryland 2005, Tennessee 2006, and West Texas Metropolitan 2006—Labor issued about 11 percent of wage rates for key job classifications using data from a single county (see fig. 5). About 22 percent of the wage rates were issued at the group level (combined data from a group of counties within the same state) and about 20 percent at the supergroup level (combined data from other groups of counties within the same state). 
Almost 40 percent of the wage rates were issued at the statewide level incorporating data from either all metropolitan or all rural counties in the state. The remaining 7 percent were issued for combined counties for which the geographic calculation level was not available. (For more information on how the geographic level of issued wage rates varied by construction type and by metropolitan and rural rates, see app. I.) In 1997, Labor’s OIG reported that issuing rates by county may cause wage decisions to be based on an inadequate number of responses. In our review of the four surveys, we found one-quarter of the final wage rates for key job classifications were based on wages reported for six or fewer workers (see fig. 6). (For more information on how the number of workers used to determine rates varied by construction type and by metropolitan and rural rates, see app. I.) In the surveys we reviewed, we also found Labor sometimes determined prevailing wages based on small amounts of data even in metropolitan areas. For example, in the 2005 survey of building construction in Florida, the prevailing wage rate for a forklift operator in Miami-Dade County was based on wages reported for five workers statewide. The statutory requirement to issue Davis-Bacon prevailing wages based on a “civil subdivision of the state” also limits Labor’s options to address inadequate data. For example, Labor is not able to augment its survey data with data from other sources because those sources may draw from other geographic areas, such as MSAs, which are not the same as civil subdivisions. Officials from Labor’s survey contractor, CIRPC, said one way to improve accuracy is to survey areas other than counties. CIRPC officials said the current wage survey uses arbitrary geographic divisions, in contrast to other groupings, such as the economic areas used by the Bureau of Economic Analysis, which are based on relevant regional markets that frequently cross county and state lines. 
These groupings, they said, are more reflective of area wage rates. Some stakeholders said the focus on county-level wage rates results in the publication of illogical rates. One contractor association representative said metropolitan statistical areas would be more appropriate in New York, for example, because there is a larger difference in wages between upstate and downstate New York than between the counties containing the cities of Rochester, Syracuse, and Buffalo. Another contractor association representative said the geographic divisions used by Labor for prevailing wages are illogical for projects not confined to a single county, offering the example of a contractor paving a road that crossed a county line and who was forced to pay workers different wage rates based on which side of the line they worked. In our interviews with stakeholders about additional issues with Labor’s wage determination process, they provided several reasons why contractors have little or no incentive to participate in the Davis-Bacon wage survey. First, 19 of 29 stakeholders said contractors may not have the time or resources to respond. An employee for one contractor said she had returned the wage survey but might not have had she known it was voluntary because her company was short-staffed. Other stakeholders said contractors might not see the survey as a priority. Second, 16 stakeholders said contractors either may not understand the purpose of the survey or do not see the point in responding because they believe the prevailing wages issued by Labor are inaccurate. Third, 10 stakeholders said contractors may be reluctant to provide information to the government because they view it as proprietary or fear that doing so will subject them to audits. Finally, 8 stakeholders said contractors who do not work on public projects may not understand the survey is soliciting wage data from private as well as public projects so they do not think they need to respond. 
For instance, representatives from one state contractor association said some contractors believe the wage survey only serves to perpetuate established rates because wage surveys sent by Labor may have the names of projects subject to Davis-Bacon already entered on the form. Officials we interviewed in Labor regional offices echoed many of these concerns. They said contractors either think their survey responses will not make a difference in the determination of prevailing wages or are unaware they are being asked to submit information on private projects. A contributing factor, one official said, is that the survey announcement letter may not clearly communicate it is soliciting information on both public and private construction. In our review of the contractor announcement letter, we found it states that requested information will be used to set prevailing wages and asks the contractor to fill out the wage survey for the construction project listed on the form and any additional projects that fit survey criteria. But the letter does not specifically state that Labor is soliciting data for both public and private projects. (See app. IV for copies of the survey announcement letters sent to contractors and interested parties.) Additionally, some regional office officials said larger contractors may be more likely to respond because they have more resources, including administrative personnel, to complete the survey form. They said contractors also may not respond because they find the form complicated or do not understand its importance. Yet if contractors call the regional office and Labor staff have an opportunity to explain the reason for the survey and answer questions, many of those callers seem more receptive to participating, some regional office officials said. 
A lack of survey participation by those on private construction projects could result in Labor having to use data from federal projects, which are already paying Davis-Bacon wages, to set prevailing wages for building and residential construction. Per its regulations, Labor uses federal project data in all highway and heavy surveys, but it only uses federal project data in building and residential surveys when it lacks sufficient data from nonfederal projects. In the results from the four surveys we reviewed, almost one-quarter of the building wage rates and over two-thirds of the residential rates for the 16 key job classifications, such as carpenter and common laborer, included federal data. (For more information on how the percentage of federal data varied by metropolitan and rural rates, see app. I.) While 19 of the 27 contractors and interested parties we interviewed said the wage survey form, which Labor officials said was last updated in 2004, is generally easy to understand, some identified challenges in completing specific sections. For example, five stakeholders said it is difficult to know which job classification applies to their workers. Representatives from one national contractor association said they had previously informed Labor the survey form does not reflect nonunion industry practices and contractors may not track data in a way that makes it easy to fill out the form. As a result, they said most nonunion contractors opt not to return the wage survey rather than attempt to break down their data to fit its format. Other state contractor association representatives said workers on some construction sites today perform tasks across multiple job classifications; for example, a carpenter may also perform some tasks of a laborer. Yet the survey form asks contractors to provide wages for a worker by a single job classification.
In addition, officials from one state local union said, to assist contractor participation in the survey, they created and distributed their own spreadsheet for contractors to fill out because they thought it would be more easily understood than Labor’s wage survey form. Labor reported to Congress in 2006 that use of the scannable survey form resulted in submission of more complete data, but our analysis of reports for four state surveys found most verified forms still had errors. During on-site verification, Labor’s contracted accounting firm compares clarified wage survey data to a sample of contractor payroll records and reports any discrepancies. These auditor reports show mistakes occurred most often in the number of employees reported in each job classification, listed hourly and fringe benefit wage rates, and project dollar value, some of which were also issues in the 2004 Labor OIG report. A senior Labor official said one reason contractors make errors on the form may be because they fill it out from memory rather than consulting their payroll records. Officials said they expect such errors to decrease under the new survey processes as Labor analysts clarify contractor-submitted data sooner. Some of these errors may be due to the fact that Labor did not pretest its current survey form with respondents. Officials said they are planning another update to address portions of the form that consistently confuse respondents. These include not having a place to note an “interested party,” rather than a “contractor” or “subcontractor,” is filling out the survey form, as well as improvements to the section on job classifications and fringe benefits. Labor officials said they have solicited input on potential revisions from CIRPC; their on-site verification contractor; the U.S. Census Bureau, which is contracted to mail out the survey forms for Labor; and their regional offices.
During our interviews, a Labor official said the agency would like to solicit input on proposed changes from survey respondents, but could not provide specifics. Although part of Labor’s on-site verification process is to ask contractors questions about using the current form, Labor needs feedback on proposed changes to assess whether they will accomplish the goals of eliminating confusion and reducing errors. OMB guidance states that careful questionnaire design and pretesting can reduce measurement error and provide insights into how alternative wording can affect survey respondents’ answers. Pretesting the new survey form with respondents to ensure changes achieve the desired results will be particularly important given that a Labor official said changing the form is a major undertaking. Labor officials did not have a specific time frame for implementing the new form because they said they are waiting for upgrades to the wage survey data system and their first priority is improving the online version of the form. Planned improvements to the online version include allowing respondents to save information rather than having to complete a survey before exiting. Seven stakeholders we interviewed agreed the ability to fill out the form online was important, but four of the seven were unaware it was already an option. Labor’s Davis-Bacon prevailing wage rates are publicly reported online at Wage Determinations Online for use by contractors and others to prepare bids for and pay workers on federal construction projects. While 6 of 27 stakeholders we interviewed said the general contractor provided the necessary wage information or they found the online wage determinations relatively easy to use, others reported problems. 
For example, while OMB and Labor guidance on data quality states that “influential” financial information provided by the agency should include a high level of transparency on data and methods, 15 stakeholders said there is a lack of transparency in the wage determinations because key information is not available or hard to find. In addition, both union and nonunion stakeholders said Labor’s wage determination Web site should more clearly present information on the number of workers and wage rates used to calculate prevailing wages for each job classification. Labor currently makes some of this information available in a report known as a WD-22. The printed WD-22 provides, for each job classification, information on the final prevailing wage and fringe benefit rates, the total number of workers reported, and the method of rate calculation—for example, whether the rate was based on a majority or an average (see fig. 7). A WD-22 is created for each state survey by construction type, but this information is not available on Labor’s wage determination Web site. A senior Labor official said the WD-22 information is currently available upon request; though, the agency is considering posting it online along with other information used to determine wage rates. In the listing above, the "SU" designation means that rates listed under the identifier do not reflect collectively bargained wage and fringe benefit rates. Other designations indicate unions whose rates have been determined to be prevailing. Labor also changes the date at the top of a wage determination each calendar year in a “roll-over” process. Officials said the date is changed to inform users the posted wage rates are valid for the current year, but the wage rates contained in the determination are not necessarily updated. In the Florida example (see fig. 
8), the date at the top of the wage determination is October 8, 2010, but wage rates associated with the “SU,” or survey, designator on the lower half of the page are from May 22, 2009, the publication date of the survey used to set those rates. A senior Labor official was not aware of users confusing the roll-over date on the wage determination with the survey publication date. However, OMB guidance states that when disseminating information products to users, key variables should be defined and the time period covered by the information and the date last updated should be provided. Not clearly explaining each of these dates within the wage determination reduces the transparency of when the last survey was conducted for an area, especially if many years have passed. Additionally, if the wage determination only contains union-prevailing rates, it does not contain any information about when the area was last surveyed. Finally, 9 of 27 stakeholders said missing wage rates are also a challenge. Specific job classifications may be missing from a wage determination if Labor received insufficient survey data. If job classifications are missing, contractors do not know what to bid on federal projects because they do not know what they will have to pay some workers, workers do not know what pay they will receive, and federal contracting agencies cannot accurately estimate costs. When a wage rate for a job classification is missing from the wage determination, it must be requested from Labor through a conformance process. While federal projects have contracting officers who typically request the conformance on behalf of the contractor, eight stakeholders said the contracting officers may not be familiar with the prevailing wages or the conformance process.
Representatives from one national contractor association said the difficulty of bidding on projects when wage rates are missing, and then having to file a conformance request in order to know what to pay, can deter smaller contractors who might otherwise be interested in federal work. A Labor official said the rates issued via conformance requests—an average of over 3,000 per year were filed in fiscal years 2007, 2008, and 2009—are only good for the specific project on which they are issued and many are repeated requests for job classifications for workers who operate specific pieces of highway construction equipment. The best way to reduce conformance requests, the official said, is to conduct surveys that report wage rates for all job classifications. The pre-survey briefing is one of Labor’s primary outreach efforts to inform stakeholders about an upcoming survey. These briefings are conducted by regional office staff either before or at the start of a survey. A headquarters Labor official said regional offices notify state contractor associations and work through the Building & Construction Trades Department to notify unions about pre-survey briefings and ask them to pass the information along to their members. While the official said there is no required number of pre-survey briefings, regional office officials said they ranged from one briefing for two states to five briefings within one state for recent surveys depending on a state’s size and characteristics. Officials said they generally hold separate briefings for unions and nonunion contractors/contractor associations. The presentation includes information on how wage and fringe benefit data are obtained and compiled, sufficiency requirements for issuing rates and wage determinations, and the process for filing conformances and wage determination appeals. A headquarters official said they are currently revising the presentation’s information on how to fill out the survey form.
Stakeholder awareness of the pre-survey briefings was mixed. In three states surveyed for building and heavy construction in either 2009 or 2010—Arizona, North Carolina, and West Virginia—all the union representatives we interviewed said they were aware of the pre-survey briefing and representatives from four of the six state contractor associations we interviewed said they were aware a briefing had been conducted. Of the 12 contractors we interviewed in Florida and New York who were last surveyed in 2005 and 2006, respectively, none were aware that a briefing had been conducted prior to the survey. Several regional office officials said the pre-survey briefings for unions generally have greater attendance than those for contractors. While one stakeholder said copies of the slides were provided at the briefing, a Labor headquarters official said the information is not available online for those who are unable to attend in person. Seven of 27 stakeholders indicated that alternative approaches, such as webinars or audioconferences, might be helpful ways to reach additional contractors. CIRPC officials said more outreach by Labor could improve the accuracy of the surveys because contractors would better understand why and how the surveys are conducted, thereby encouraging more to participate. They said they previously recommended that Labor wage analysts call contractors prior to survey distribution to make them aware of the survey and to assure them their submitted data would be protected. OMB guidance states that sending a letter in advance of a survey to alert respondents can improve response rates. A senior Labor official said they are conducting pre-survey briefings instead of calling respondents in advance. For more than a decade, reviews of the Davis-Bacon wage survey have highlighted methodological problems in the determination of wages paid to workers on federally funded construction projects. 
In response to those criticisms, Labor has improved its process, most recently seeking out new data sources for some construction types and adjusting the data collection and processing time frames. Yet without clear tracking of key survey dates and the time spent in various processing activities, Labor cannot assess if its changes are improving survey timeliness and thus the accuracy of published wage rates. Additionally, these efforts do not effectively address some key issues with how data are collected. Because Labor has not conducted checks over the past several years on the representativeness of the data it receives, it cannot have high confidence its results accurately reflect prevailing wages, no matter how diligently its staff work to clarify and verify submitted data. If the resultant prevailing wage rates are too high, they potentially cost the federal government and taxpayers more for publicly funded construction projects or, if too low, they cost workers in compensation. While Labor officials rightly used experience and corporate knowledge in designing recent changes to survey methodology, they did not enlist objective survey expertise to ensure methods were sound and in accordance with best practices. Survey methodology that does not follow best practices lowers confidence in the process and puts participation by private contractors at risk. Labor’s regulatory goal to issue wage rates at the county level may also limit its ability to improve survey representativeness and timeliness. Labor often must combine data from multiple counties to meet its own relatively low sufficiency standards to publish wage rates for specific job classifications which, in the end, may reflect the wages for as few as three employees from two contractors for an entire state. 
The statutory requirement to issue prevailing wages by “civil subdivision of the state” limits Labor’s ability to account for relevant regional markets that cross county or state boundaries or to tap into data based on other geographic groupings. Use of other data sources to augment Davis-Bacon survey data could shorten the time needed to publish wage rates and reduce the number of conformances that contractors must file for missing wage rates. Given the voluntary nature of the survey, participants who take the time to respond should have confidence their information will be considered in determining prevailing wages. They should also be able to understand how their information is used. Increased transparency in how the wage rates are calculated and improved clarity in published wage determinations would provide stakeholders assurance the wage rates are accurate and encourage greater participation of the construction employer community. To improve the quality of Labor’s Davis-Bacon wage survey data, Congress may wish to consider amending the language of the Davis-Bacon Act to allow Labor to use wage data from geographic groupings other than civil subdivisions of states, such as metropolitan statistical areas or Bureau of Economic Analysis’ economic areas. To improve the quality and timeliness of Labor’s Davis-Bacon wage surveys, we recommend that the Secretary of Labor direct the Wage and Hour Division to enlist the National Academies, or another independent statistical organization, to evaluate and provide objective advice on the survey, including its methods and design; the potential for conducting a sample survey instead of a census survey; the collection, processing, tracking, and analysis of data; and promotion of survey awareness. 
To improve the transparency of wage determinations while maintaining the confidentiality of specific survey respondents, we recommend that the Secretary of Labor direct the Wage and Hour Division to publicly provide additional information on the data used to calculate its Davis-Bacon wage rates, such as the number and wages of workers included in each wage rate calculation, and to clearly communicate the meaning of various dates and codes used in wage determinations in the same place the prevailing wage rates are posted. We provided a draft of this report to Labor for review and comment. The agency provided written comments, which are reproduced in appendix VI. Labor agreed with our recommendation to improve the transparency of the wage determinations and indicated it is taking steps to do so. However, the agency said our recommendation to obtain objective expert advice on its survey design and methodology may be premature because additional changes are currently being implemented or will be implemented based on a 2004 review of the program by McGraw-Hill Construction Analytics. The McGraw-Hill review was a process evaluation that assessed many aspects of the wage survey; however, Labor officials did not indicate during our interviews that the results of that evaluation were serving as the foundation for their recent changes nor was the evaluation referred to in documentation Labor provided regarding its recent changes. Moreover, the McGraw-Hill report did not address certain issues related to the survey’s design and methodology. Therefore, we continue to believe that Labor should have an independent statistical organization provide advice on survey methods for the following reasons: Labor cites examples of improvements to its processes and information technology systems so that surveys can be completed and published in a more timely manner. 
We also cited many of these data collection and processing changes in our report along with the agency’s expected reduction in processing times for highway, building, and heavy surveys. The survey timelines, which we used to assess whether surveys conducted under new processes were on schedule, were provided to us by the agency and included reductions in and elimination of various survey steps. Yet according to those agency timelines, many of the surveys were behind schedule. Labor commented it has reduced the time to publish survey results for building and heavy construction from several years to an average of 2 years. However, we believe it may face challenges staying on schedule if it cannot more accurately predict how many survey forms it will receive and the time required to process them. Possibilities to better predict the number of survey responses, such as statistical sampling rather than the current census survey, could be explored with survey experts. Labor also noted, as we did in our report, that it is again working to calculate response rates and we believe this is a step in the right direction. However, only calculating response rates will not ensure that the data Labor is using to calculate prevailing wages are truly representative of the wages being paid in a particular area. If a response rate is low—some wage rates are calculated on as few as three workers—then Labor must also analyze nonrespondents to ensure that those who received a survey but did not respond do not significantly differ from those who responded. Survey expertise could assist with this critical data quality check to help ensure prevailing wages are representative of wages actually paid to workers. Labor commented that the current survey form was not recently redesigned, but is a scannable version of the form that was last updated in 2004. We adjusted our report language accordingly. 
The agency also noted that errors on wage survey forms typically result from errors in the information provided by survey respondents rather than errors made by Wage and Hour Division employees. We agree; however, we believe the fact that respondents continue to make some of the same errors in completing the wage survey form that were identified by the Labor OIG in 2004 is a concern. Labor did not pretest the current form with survey respondents to ensure clarity, which could partially explain why contractors and interested parties made errors. A professional survey methodologist could develop a pretesting plan to address issues that affect the quality of the survey data, such as respondent comprehension, retrieval, judgment, and response formulation. We believe it is critical for Labor to obtain expert methodological advice because this would allow the agency to make course corrections before time and money are spent implementing new procedures that may increase the speed of processing data, but not sufficiently address its quality. While Labor indicated the cost of contracting for an expert review is a concern, not ensuring the quality and representativeness of the data can be costly in other ways: the federal government could pay more for construction than it needs to or workers may earn less than they should. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Secretary of Labor, relevant congressional committees, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix VII.

Our review examined (1) the extent to which the Department of Labor (Labor) has addressed concerns regarding the quality of the Davis-Bacon wage determination process and (2) the additional issues identified by stakeholders regarding the wage determination process. To address these objectives, we reviewed key documents, including past GAO and Department of Labor Office of Inspector General (OIG) reviews of the program, agency documents on recent changes to the wage survey process, and relevant federal laws and regulations; interviewed agency officials and representatives from organizations to which the agency contracts some aspects of the survey process; analyzed (1) data from Labor’s Automated Survey Data System (ASDS), Wage Determination Generation System (WDGS), and the Davis-Bacon survey schedule Web site (http://www.dol.gov/whd/programs/dbra/schedule.htm); (2) reports produced by Labor’s contracted accounting firm for on-site verification of submitted payroll records; and (3) Labor’s conformance logs for fiscal years 2007 through 2009; conducted site visits to three of Labor’s five regional offices that conduct Davis-Bacon wage surveys, as well as to the Construction Industry Research and Policy Center (CIRPC), which is contracted to assist Labor with the wage survey process; interviewed approximately 30 stakeholders, including representatives from academia, contractor associations, and unions, as well as individual contractors, and performed a content analysis of their comments; and attended a Labor prevailing wage conference. We conducted this performance audit from September 2009 through March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To evaluate how Labor has addressed past concerns with the quality of the Davis-Bacon wage determination process, we reviewed past reports, reviewed key agency documents, and interviewed Labor officials. We reviewed two Labor OIG reports and their associated recommendations, as well as our own previous work. In addition, we reviewed agency correspondence with Congress and the Office of Management and Budget (OMB) describing Labor’s changes to the wage determination process based on past program audits, the effectiveness of those changes, and planned future changes. To assess recent changes made to the wage survey process and their expected outcomes, we interviewed officials and reviewed agency documents, such as the Davis-Bacon manual of operations and Labor’s revised timelines for building, heavy, and highway surveys starting in 2009. Using Labor’s revised timelines, we calculated the expected reduction in the amount of time from the start of each survey to publication of wage rates. To assess whether Labor’s surveys under the new processes were on schedule, we reviewed an ASDS Individual Time Tracking Report by Activity/Survey for October 1, 2009, through September 1, 2010, that provided the number of staff hours logged in each survey activity for the pilot building and heavy surveys and highway surveys started under the new processes. We then compared the last activity in which staff hours had been logged for each survey with its expected activity based on the date the regional office entered the survey into ASDS and Labor’s new timelines. We could not calculate the exact number of days surveys were ahead of or behind schedule because Labor did not have a report that reliably recorded the date a survey moved from one activity to the next. 
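The schedule comparison described above can be sketched as follows; this is a simplified illustration of the check we performed, and the activity names, planned durations, and dates are hypothetical stand-ins, not Labor’s actual timelines:

```python
from datetime import date

# Hypothetical survey timeline: (activity, planned duration in days), in order.
TIMELINE = [("plan survey", 30), ("mail forms", 60),
            ("clarify data", 90), ("analyze data", 60)]

def schedule_status(start, as_of, last_logged_activity):
    """Compare the last activity in which staff hours were logged
    against the activity the survey should have reached, given its
    start date and the planned timeline."""
    elapsed = (as_of - start).days
    expected = TIMELINE[-1][0]  # default: final activity
    cumulative = 0
    for activity, days in TIMELINE:
        cumulative += days
        if elapsed <= cumulative:
            expected = activity
            break
    order = [a for a, _ in TIMELINE]
    actual_i = order.index(last_logged_activity)
    expected_i = order.index(expected)
    if actual_i < expected_i:
        return "behind schedule"
    if actual_i > expected_i:
        return "ahead of schedule"
    return "on schedule"

# A survey entered into the database 100 days ago that is still in
# "mail forms" should already have reached "clarify data".
print(schedule_status(date(2010, 1, 1), date(2010, 4, 11), "mail forms"))
```

Because the underlying data recorded only which activity had hours logged, not the date each activity began, a check like this can classify a survey as ahead of or behind schedule but cannot say by how many days.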
Additionally, for one state building and heavy survey and three state highway surveys, we could not calculate the actual timelines because the dates in the data provided by Labor were out of sequence. Labor officials provided inconsistent guidance on which activity in their timelines reflected the actual start of a survey; however, for various reasons, we used the date the survey was recorded as being entered into the database for our analysis of whether the surveys were on schedule. During our review, a senior Labor official indicated the appropriate survey start date was the date the survey was entered into ASDS by regional office officials. Toward the end of our review, the official indicated the correct start date was the date surveys were first mailed to contractors or interested parties because each region had its own method for when it entered surveys into ASDS. For example, some regions entered surveys when they planned them while others entered surveys when they ordered Dodge Reports. We believe using the date surveys were first mailed as the start date would exclude certain key activities on Labor’s survey timeline, such as ordering, receiving, and cleaning the Dodge Report data for building and heavy surveys and inputting interested party lists for highway surveys. Nonetheless, we conducted an additional timeliness analysis using alternative start dates based on Labor’s concerns. Given that Labor officials were concerned the regional offices may enter building and heavy surveys into ASDS before actually starting them, we used the date the Dodge data were requested, which is the second step in the new process. For the building and heavy surveys we reviewed, none of them changed status based on the alternative start date. In other words, all were still behind schedule. For highway surveys, we used the date surveys were first mailed to interested parties as the start date for the alternative analysis. 
For the highway surveys we reviewed, only one changed status from behind schedule to ahead of schedule. Therefore, based on the limited changes to our findings from using alternative start dates, as well as the fact that the alternative start dates exclude parts of the survey process on which Labor had been working to improve timeliness and for which staff had logged hours, we decided to conduct our analysis using the original date provided by Labor (the date the regional offices entered the survey into ASDS). To assess the adequacy of Labor’s current wage survey methodology we compared it with survey guidance published by OMB and Labor. We used data from ASDS to evaluate the geographic level at which rates were issued and the number of workers used to issue rates. For both analyses, we used data from four surveys—Florida 2005, Maryland 2005, Tennessee 2006, and West Texas Metropolitan 2006—that were issued in 2009 or 2010. We selected these surveys because they were recently published and represented geographic diversity, to the extent possible, in terms of the Labor regional offices that conducted the surveys. The data from the surveys we reviewed included the following construction types: Florida—building, heavy, highway, and residential; Maryland—building, heavy, and residential; Tennessee—building, heavy, highway, and residential; and West Texas Metropolitan—building and residential. The survey results included metropolitan and rural rates for all construction types with the exception of the Maryland heavy construction type and the West Texas survey, which only included metropolitan rates. To evaluate the geographic level at which wage rates were issued, we analyzed, for each survey in our review, the “calculation basis” field on Labor’s WD-22 form, which indicates whether the wage rate for each job classification was determined based on county-level data, multi-county data, or statewide data. 
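The tally of geographic levels described above can be sketched as follows; the records shown are hypothetical stand-ins for the WD-22 “calculation basis” data, not actual survey results:

```python
from collections import Counter

# Hypothetical WD-22 records: one per key job classification wage rate.
rates = [
    {"classification": "carpenter", "calc_basis": "county"},
    {"classification": "electrician", "calc_basis": "multi-county"},
    {"classification": "plumber", "calc_basis": "statewide"},
    {"classification": "laborer-common", "calc_basis": "county"},
]

# Count how many rates were issued at each geographic level.
counts = Counter(r["calc_basis"] for r in rates)
total = sum(counts.values())
for level, n in counts.most_common():
    print(f"{level}: {n} of {total} rates ({100 * n / total:.0f}%)")
```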
We were unable to determine the geographic level for rates that had been combined in the final WD-22 so we reported them separately. Regional office officials said they may combine rates from counties with the exact same wage and fringe benefit data in the final WD-22. However, the rates being combined may have been calculated at different geographic levels—for example, one county’s rates may have been calculated at the group level while another county’s rates may have been calculated at the supergroup level. Because the geographic level at which rates for each combined county were calculated is not reported on the WD-22, we reported the percentage of these rates separately. We analyzed geographic levels for key job classifications only because nonkey job classifications cannot be issued at the supergroup or state level. Key job classifications are those determined by Labor to be necessary for one or more of the four construction types, as follows:

Building Construction: bricklayer, boilermaker, carpenter, cement mason, electrician, heat and frost insulators/asbestos workers/pipe insulators, iron worker, laborer-common, painter, pipefitter, plumber, power equipment operator, roofer, sheet metal worker, tile setter, and truck driver.

Heavy Construction and Highway Construction: carpenter, cement mason, electrician, iron worker, laborer-common, painter, power equipment operator, and truck driver.

Residential Construction: bricklayer, carpenter, cement mason, electrician, iron worker, laborer-common, painter, plumber, power equipment operator, roofer, sheet metal worker, and truck driver.

Table 1 provides the percentage of wage rates issued at each geographic level by construction type and metropolitan or rural designation for the four surveys we reviewed. We also used WD-22 data to determine the number of workers used to calculate wage rates for all key job classifications for the four surveys in our review.
Using the “total number reported” column in WD-22 reports, we calculated the number of workers whose wage rates were included in each wage rate calculation for key job classifications. We reported the data by quartiles with the exception of the “3 workers” category, which we broke out separately because it is the minimum number of workers for which Labor must receive data in order to issue a wage rate for a job classification. Table 2 provides the percentage of key job classification rates issued by number of workers, construction type, and metropolitan or rural designation for the four surveys we reviewed. Finally, we used WD-22 data to determine the percentage of wage rates that included federal data. We calculated this percentage for the building and residential construction types for the surveys in our review because Labor uses federal data for these construction types only when it has insufficient survey data, whereas federal data are used in all highway and heavy surveys. Table 3 provides the percentage of key job classification wage rates using federal data by construction type and metropolitan or rural designation for the four surveys we reviewed. To determine the age of wage rates, we used WDGS data on published wage rates provided by Labor officials on November 12, 2010. We analyzed the age of wage rates for building, heavy, and highway construction because Labor considered only those construction types in its fiscal year 2010 performance goal. We analyzed the age of wage rates in two ways: first, combining nonunion- and union-prevailing wage rates together, as Labor does, and then separately to identify any trends by type of rate. 
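The worker-count breakdown described above can be sketched as follows; the worker counts are hypothetical, and the fixed ranges stand in for the quartile breaks we computed from the actual data, with the 3-worker minimum broken out separately:

```python
from collections import Counter

# Hypothetical "total number reported" values from WD-22 records,
# one per key job classification wage rate.
workers_per_rate = [3, 3, 5, 8, 14, 3, 27, 6, 3, 102]

def bin_label(n):
    """Break out the 3-worker minimum separately (Labor must receive
    data on at least 3 workers to issue a rate), then group the rest
    into illustrative ranges."""
    if n == 3:
        return "3 workers"
    if n <= 10:
        return "4-10 workers"
    if n <= 50:
        return "11-50 workers"
    return "more than 50 workers"

bins = Counter(bin_label(n) for n in workers_per_rate)
total_rates = len(workers_per_rate)
for label in ["3 workers", "4-10 workers", "11-50 workers",
              "more than 50 workers"]:
    share = 100 * bins.get(label, 0) / total_rates
    print(f"{label}: {share:.0f}%")
```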
To determine the age of data used to calculate prevailing wage rates for the 22 open surveys that accumulated since Labor began conducting statewide surveys, we analyzed survey time frames and cutoff dates from Labor’s Davis-Bacon and Related Acts survey schedule Web site (http://www.dol.gov/whd/programs/dbra/schedule.htm) and interviewed Labor officials. To assess the number of wage survey forms, or WD-10s, that had errors and the types of errors that most commonly occurred, we analyzed on-site verification reports prepared by Labor’s contracted accounting firm for the four states in our review. We analyzed the verification reports to determine what percentage of wage survey forms that were verified had errors and what type of errors occurred. To identify and categorize the errors, we recorded if the accounting firm marked an error in the following fields: project value, construction type, additional trade/classification, employee classification, work performed, paid under collective bargaining agreement (CBA), number of employees, peak week, hourly rate, fringe benefit, health and welfare, pension, holiday and vacation, apprentice training, and other. We counted a wage survey form as having multiple errors if it had an error in more than one category. To determine the average number of conformance requests filed for missing classifications in fiscal years 2007 through 2009, we used the “tracking number” field in Labor’s conformance request log. We counted the number of requests with distinct tracking numbers, excluding entries that did not have tracking numbers, and then calculated the average over the 3-year period. To assess the reliability of the data we used in our analyses, we performed the following steps: (1) reviewed pertinent system and process documentation, (2) interviewed agency officials knowledgeable about the data and system during each regional office site visit, and (3) performed electronic testing of required data fields. 
We found the data we reviewed to be reliable for our purposes. To obtain information from staff who clarify and analyze survey information, we conducted site visits to three of the five Labor regional offices that process Davis-Bacon wage surveys—Northeast region (Philadelphia, Pennsylvania); Southeast region (Atlanta, Georgia); and Southwest region (Dallas, Texas)—as well as CIRPC at the University of Tennessee. At each regional office, we interviewed the director of enforcement, the regional wage specialist, the senior wage analyst, and wage analysts. At CIRPC, we interviewed the associate directors, the senior wage analyst equivalent, and wage analysts. Also, to gain a thorough understanding of how wage analysts process survey data and document decisions, we interviewed staff at each regional office about ASDS. We selected our site visit locations because Labor headquarters officials said these regional offices were currently conducting surveys using new processes. Additionally, we visited CIRPC to determine how contractors are selected for survey participation and on-site verification, and how CIRPC provides support to the regional offices in implementing the new survey processes. To determine what additional issues stakeholders may have with the wage determination process, we initially explored surveying contractors and union officials in states where Labor had recently conducted a wage survey. We believed it was important for us to survey contractors who had recently received a wage survey from Labor so they could recall their experience of responding to the wage survey or their reasons for not responding. However, Labor officials had concerns about us surveying contractors in states where Labor had completed wage survey data collection, but was still in the process of contacting contractors for data clarification and verification.
Labor officials believed contractors might get confused if they received requests for information from more than one agency and were concerned our activities might affect their efforts. We agreed with these concerns. Therefore, instead of surveying contractors, we opted to conduct semi-structured interviews with a wide variety of Davis-Bacon stakeholders. Also, in order to solicit opinions directly from contractors but not interfere with Labor’s ongoing efforts, we interviewed a small number of individual contractors in states that had been surveyed less recently but where the results of those wage surveys had been published. Given that it had been a few years since Labor sent wage survey forms to these contractors, we believed we would obtain better information through personal interviews than a survey. We conducted semi-structured interviews with approximately 30 representatives from academia, contractor associations, unions, and individual contractors. Our semi-structured interview protocol allowed us to ask questions of numerous organizations and individuals, offering each interviewee the opportunity to respond to the same general set of questions, but also allowed for flexibility in asking follow-up questions and, in limited circumstances, for the omission of questions when appropriate. For example, we did not ask representatives from academia about filling out the survey form or attending pre-survey briefings because they would typically not be involved in these activities. In our findings, we noted cases in which we did not ask all stakeholders a particular question. To select representatives from academia, we conducted a literature review to identify studies that reviewed or evaluated the Davis-Bacon wage survey process. To obtain opinions from both unionized and nonunionized contractors, we interviewed representatives from the national organizations of the Associated Builders and Contractors, Inc. 
(ABC) and the Associated General Contractors of America (AGC). To obtain views from construction unions, we interviewed representatives from the AFL-CIO and the International Brotherhood of Electrical Workers (IBEW). We selected IBEW because it has one of the largest memberships among construction industry unions and electricians are considered a key class for all four of Labor’s construction types. To obtain a state-level perspective from contractors’ associations and unions, we interviewed representatives from state ABC and AGC chapters, as well as IBEW locals in Arizona, North Carolina, and West Virginia. We chose these three states because they had been surveyed in 2009 or 2010 by different Labor regional offices. In addition, because Arizona, North Carolina, and West Virginia have low to medium levels of workers represented by unions, according to the Bureau of Labor Statistics (BLS), we interviewed representatives from ABC and AGC in New York, the state with the highest level of unionization. We also interviewed individual contractors in New York and Florida. We chose New York and Florida because they had been surveyed fairly recently and represented diversity in geography and the percentage of all workers represented by unions. To select contractors, we requested Labor data including the lists of contacts who had been sent wage survey forms and who had returned them. Then, to the extent possible, we matched the data using the contact identification field to determine which contacts had responded or not responded. In each state, we identified the counties with the highest number of respondents because there were fewer respondents than nonrespondents. We selected certain ZIP codes within each selected county based on the highest concentration of respondents, as well as site visit logistics. We then ordered the list of respondents and nonrespondents by ZIP code and called contractors asking them to meet with us.
If we were unable to reach a contractor or if a contractor declined, we moved to the next contractor on the list and continued until we had a mix of respondents and nonrespondents who agreed to be interviewed. We conducted a content analysis on the information gathered through the stakeholder interviews. Interview responses and comments were categorized by an analyst to identify common themes. A pretest of the themes was reviewed by the engagement’s methodologist before all comments were categorized. The categorization of the comments was then independently checked, and agreed upon, by another analyst for verification purposes. While we selected our stakeholders to include a wide variety of positions, the opinions expressed are specific to those we interviewed and are not generalizable. We attended Davis-Bacon-related sessions of Labor’s November 2010 prevailing wage conference in Cleveland, Ohio, to obtain additional stakeholder perspectives on the wage determination process and use of published wage determinations through observation of Labor’s presentations and question and answer sessions.

II: Wage Survey Form (WD-10)

The Davis-Bacon Act requires that workers employed on federal construction contracts valued in excess of $2,000 be paid, at a minimum, wages and fringe benefits that the Secretary of Labor determines to be prevailing for corresponding classes of workers employed on public and private projects that are similar in character to the contract work in the civil subdivision of the state where the construction takes place. To determine the prevailing wages and fringe benefits in various areas throughout the United States, Labor’s Wage and Hour Division periodically surveys wages and fringe benefits paid to workers in four basic types of construction: building, residential, highway, and heavy. Labor collects data through statewide surveys, except in large states, such as Texas and California.
Labor’s regulations state that the county will normally be the civil subdivision at which a prevailing wage is determined, although Labor may consider wages paid on similar construction in surrounding counties if it is determined there has not been sufficient similar construction activity within the given area in the past year. Data from projects in metropolitan counties are considered separately from those in rural counties. If similar construction in surrounding counties, or in the state, is not sufficient, Labor may consider wages paid on projects completed more than 1 year prior to the start of a survey. Wage rates are issued for a series of job classifications in each of the four basic types of construction, so each wage determination requires the calculation of prevailing wages for many different trades, such as electrician, plumber, and carpenter. Labor’s wage determination process consists of five basic stages:

1. Planning and scheduling surveys to collect data on wages and fringe benefits in similar job classifications on comparable construction projects.
2. Conducting surveys of employers and interested parties, such as representatives of unions or contractor associations.
3. Clarifying and analyzing respondents’ data.
4. Issuing the wage determinations.
5. Reconsideration and review of wage determinations through an appeals process.

The process described here is based on Labor regulations, procedures manuals and documents, and statements by officials; GAO did not verify whether all procedures were followed in all cases. Heavy construction is a catch-all grouping that includes projects not properly classified under the other three types of construction; for example, dredging and sewer projects. Labor attempts to survey the complete “universe” of relevant construction contractors active within a particular area during a specific period of time.
Labor schedules surveys by identifying those areas and construction types most in need of a survey, based on criteria that include age of the most recent survey; volume of federal construction in the area; requests or complaints from interested parties, such as state and county agencies, unions, and contractor associations; and evidence that wage rates in a region have changed. Labor uses two management tools, the Regional Survey Planning Report and the Uniform Survey Planning Procedure, to help prioritize planned surveys. The Regional Survey Planning Report is provided by CIRPC at the University of Tennessee and contains information about construction activity nationwide, including the number and value of active projects, the number and value of federally owned projects, the date of the most recent survey in each county, and whether the existing wage determinations for each county are union-prevailing, nonunion-prevailing, or a combination of both. Labor uses the Uniform Survey Planning Procedure to weigh the need for surveys by area and construction type. Once Labor designates an area and construction type (i.e., building, residential, highway, or heavy) for a survey, it proposes a survey time frame, or reference period during which the construction projects considered in the survey must be “active.” Generally, the preliminary time frame is the preceding 12-month period, the survey start date is approximately 3 months after the survey is assigned, and the survey cutoff date is 4 to 6 months from the start date, depending on the size of the survey. However, the survey time frame, start date, and cutoff date may be shortened or lengthened based on individual circumstances of the survey. Once these parameters are established, Labor enters the survey information into ASDS. To identify projects that meet the established survey criteria (the designated area, construction type, and survey time frame), Labor uses F.W. 
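Labor's actual formula under the Uniform Survey Planning Procedure is not described in the text, so the following is only an illustrative sketch: a hypothetical priority score that combines the four criteria named above (age of the most recent survey, volume of federal construction, requests or complaints, and evidence of wage changes), with made-up weights.

```python
# Hypothetical survey-prioritization score combining the criteria Labor weighs.
# The weights and scale below are illustrative assumptions, not Labor's method.
def survey_priority(years_since_survey, federal_construction_value,
                    complaints, wage_change_evidence,
                    w_age=1.0, w_value=0.5, w_complaints=2.0, w_change=3.0):
    return (w_age * years_since_survey
            + w_value * federal_construction_value  # e.g., in $100 millions
            + w_complaints * complaints             # requests from interested parties
            + w_change * (1 if wage_change_evidence else 0))

# An area with an old survey, complaints, and evidence of wage changes
# outranks a recently surveyed area with the same construction volume.
old_area = survey_priority(10, 2.0, complaints=3, wage_change_evidence=True)
new_area = survey_priority(1, 2.0, complaints=0, wage_change_evidence=False)
assert old_area > new_area
```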
Dodge data produced in reports known as Dodge Reports. Labor supplements these data with information provided by contractors listed in the Dodge Reports, by industry associations, and from regional office files to find additional relevant construction projects. Analysts at CIRPC screen the data to ensure projects selected meet the criteria before the survey begins. Projects must be of the correct construction type, be in the correct geographic area, fall within the survey time frame, and have a value of at least $2,000. CIRPC also checks for duplicate project information to minimize contacts to a contractor working on multiple projects that meet survey criteria. Labor notifies contractors and interested parties—including contractor associations, unions, government agencies, and Members of Congress—of upcoming surveys by posting survey information on its Web site, sending letters, and conducting pre-survey briefings. Contractor and interested party records are sent to the U.S. Census Bureau, which distributes the notification letters encouraging participation in the survey. Labor’s regional offices arrange pre-survey briefings with interested parties prior to or at the start of a survey to clarify survey procedures and provide information on how to complete and submit wage survey forms, known as WD-10s. Data requested on the WD-10 form include a description of the project and its location; the contractor’s name and address; the project value and start and end dates; the wage rate and fringe benefits paid to each worker on the project; and the number of workers employed in each job classification during the week of peak activity for that classification. The peak week for each job classification is the week when the most workers were employed in that particular classification. For an example of how Labor collects peak week data on a WD-10, see appendix II. The Census Bureau conducts four mailings throughout a survey. 
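The "peak week" concept above can be made concrete with a small sketch: for each job classification, find the week ending date with the highest worker count. The data layout is an assumption for illustration, not the format of Labor's ASDS or the WD-10 form.

```python
# Finding the "peak week" for each job classification: the week in which the
# most workers were employed in that classification. Data layout is assumed.
def peak_weeks(weekly_counts):
    """weekly_counts: {classification: {week_ending_date: worker_count}}
    Returns {classification: (peak_week_ending_date, worker_count)}."""
    return {
        cls: max(weeks.items(), key=lambda kv: kv[1])
        for cls, weeks in weekly_counts.items()
    }

counts = {
    "electrician": {"2010-06-05": 4, "2010-06-12": 7, "2010-06-19": 5},
    "carpenter":   {"2010-06-05": 3, "2010-06-12": 2, "2010-06-19": 6},
}
print(peak_weeks(counts))
# -> {'electrician': ('2010-06-12', 7), 'carpenter': ('2010-06-19', 6)}
```

Note that, as the text says, the peak week is determined separately for each classification, so different trades on the same project can report different weeks.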
The first mailing includes letters and WD-10 wage survey forms to general contractors and interested parties. (For examples of survey announcement letters sent to contractors and interested parties, see app. IV.) General contractors listed on the Dodge Reports receive WD-10 forms with project names identified through the Dodge Reports, as well as additional blank forms for other projects. General contractors not listed on the Dodge Reports and interested parties receive a limited number of blank WD-10 forms, but additional forms are available upon request. In addition, all general contractors receive forms to provide information on subcontractors who worked on projects being surveyed. Members of Congress receive one blank WD-10 form and are not contacted again unless a survey is extended. The second mailing is only to general contractors who do not respond to the first mailing and includes the WD-10 forms with project names from the Dodge Report and subcontractor list forms provided in the first mailing. The third mailing is to all reported subcontractors and newly reported general contractors and includes WD-10 forms with project names and blank WD-10 forms. The fourth and final mailing is to all subcontractors who do not respond and newly reported subcontractors and only includes WD-10 forms. Survey respondents may submit paper WD-10 forms or complete forms electronically on Labor’s Web site. Census scans returned paper WD-10 forms into Labor’s ASDS. WD-10 forms submitted electronically are loaded directly into ASDS. Any additional information submitted must be entered into ASDS manually. CIRPC reviews the completed WD-10s, matches submitted information with the associated project, and forwards the WD-10s to Labor regional offices. Labor’s wage analysts begin to review and analyze the data as they receive the completed WD-10s.
Wage analysts’ first step in the review process is to determine whether the project reported on the WD-10 form is within the scope of the survey, or “usable.” Since the WD-10 forms may provide more information about a project than the Dodge Report, wage analysts review the data to determine whether the project meets the four basic survey criteria (correct construction type, geographic area, time frame, and project value). If a project does not meet the four criteria, it is determined unusable and any associated WD-10 forms are excluded from the survey. Once Labor has determined a project and WD-10 form are usable, wage analysts call contractors to clarify any information that is unclear or incomplete. Wage analysts record information about the clarification call in ASDS, including the date and name of the person contacted and any information that resulted in changes to the WD-10 form. Wage analysts review each section of the WD-10 forms and clarify the information, as necessary. Specifically, the analysts verify contractor and subcontractor information; project name, description, and location; whether the project received federal or state funding; start and end dates and value of the project; type of construction (i.e., building, residential, highway, or heavy); employee job classifications; the peak week ending date; the number of employees reported; the basic hourly rate; fringe benefits rates; and whether the wages were paid under a collective bargaining agreement (CBA), among other data. In addition to contractors, interested parties may also submit WD-10 forms for a project. However, Labor clarifies submitted data with the relevant contractor, regardless of the source, and excludes information provided by an interested party if it duplicates data provided by the contractor unless data are submitted on specific job classifications that were not included by the contractor. Labor also verifies rates paid under a CBA, or union rates, to ensure they are accurately reported.
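The four usability criteria above might be expressed as a single filter. The field names are assumptions, as is the interpretation of "falls within the survey time frame" as overlap with the reference period; the $2,000 threshold comes from the Davis-Bacon Act itself.

```python
# The four "usability" checks applied to each reported project: correct
# construction type, correct geographic area, within the survey time frame,
# and a project value of at least $2,000. Field names are illustrative.
from datetime import date

def is_usable(project, survey_type, survey_counties, start, end, min_value=2000):
    return (project["construction_type"] == survey_type       # correct type
            and project["county"] in survey_counties          # correct area
            and project["active_start"] <= end
            and project["active_end"] >= start                # active in time frame
            and project["value"] >= min_value)                # at least $2,000

project = {"construction_type": "building", "county": "Maricopa",
           "active_start": date(2009, 8, 1), "active_end": date(2010, 2, 1),
           "value": 150000}
print(is_usable(project, "building", {"Maricopa"},
                date(2009, 6, 1), date(2010, 5, 31)))  # -> True
```

A project failing any one of the four checks would be marked unusable and its WD-10 forms excluded, as the text describes.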
Similarly, because of variations in industry practices across the country, known as “area practice,” wage analysts may call contractors to clarify the type of work employees in certain job classifications are actually performing. This is necessary because, for a given prevailing wage, the scope of work covered by the job classification must reflect the actual prevailing area practice. An area practice issue exists when the same work is performed by employees in more than one classification in a given location. For example, a worker under the general electrician classification may perform tasks in addition to general electrical work, such as alarm installation and low voltage wiring. If there is another specialty classification installing alarms in the same location, it may indicate an area practice issue. In some geographic areas, particular work may be performed frequently and widely enough by a specialty classification such that the traditional practice by the general classification may be replaced by the practice of the specialty classification. Labor conducts several processes to verify data submitted in a survey. For data submitted by interested parties and contractors, Labor’s regional offices verify a random sample of data. To verify reported data, regional offices contact selected contractors and third parties to request payroll documentation, though data provided without documentation may still be used. In addition to remote verification of randomly selected contractors, on-site verification of a weighted sample of contractors is conducted. The on-site verification selection is designed to include those contractors with the biggest impact on the prevailing wage rate for each job classification. Once the weighted sample of contractors has been selected, an independent auditing firm contracted by Labor arranges an appointment with each contractor to meet and review supporting records. 
The auditing firm prepares and submits a report documenting the differences between the submitted and verified information, including differences in project and wage data. Wage analysts in Labor’s regional offices update information in ASDS that may have changed as a result of these verification processes. In addition to wage data collected on WD-10 forms, Labor uses certified payroll data from projects that receive federal funding and meet survey criteria. For surveys of highway and heavy construction projects, Labor always uses certified payroll data; such data are included in building and residential surveys only if the submitted WD-10 forms do not provide enough information to make a wage determination. In addition, for highway surveys only, Labor sometimes adopts rates published by state departments of transportation if a state has conducted its own prevailing wage survey and data collected separately by Labor support the prevailing wage rates established by the state. Labor also updates union-prevailing wage rates when unions submit updated CBAs to Labor headquarters. Once all verified and corrected data have been entered into ASDS, Labor calculates the prevailing wage rate for each job classification in a survey. If a majority of workers (more than 50 percent) in a job classification are paid the same rate, that rate is determined to be the prevailing wage. If no single rate is paid to a majority of workers in a job classification, the prevailing wage is the average wage rate weighted by the number of employees for which each rate was reported. Prevailing fringe benefits are determined only if a majority of the workers in a job classification receive fringe benefits. Once that condition is met, the prevailing fringe benefit is calculated for each job classification similarly to the way the prevailing wage rate is calculated.
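The majority-or-weighted-average rule can be stated precisely in a short sketch, assuming the survey data for one classification are reduced to (rate, worker count) pairs:

```python
# The prevailing wage rule: if more than 50 percent of workers in a job
# classification are paid the same rate, that rate prevails; otherwise the
# prevailing rate is the average weighted by the number of workers at each rate.
from collections import Counter

def prevailing_wage(rates):
    """rates: list of (hourly_rate, worker_count) pairs for one classification."""
    total = sum(n for _, n in rates)
    by_rate = Counter()
    for rate, n in rates:
        by_rate[rate] += n
    rate, n = by_rate.most_common(1)[0]
    if n > total / 2:                                         # majority rule
        return rate
    return sum(r * n for r, n in by_rate.items()) / total     # weighted average

print(prevailing_wage([(30.0, 6), (25.0, 2), (22.5, 2)]))  # -> 30.0 (majority)
print(prevailing_wage([(30.0, 4), (25.0, 4), (20.0, 2)]))  # -> 26.0 (weighted avg)
```

Per the text, the same calculation would then be repeated for fringe benefits, but only after checking that a majority of workers in the classification receive any fringe benefits at all.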
The prevailing rates resulting from the calculations will be either “union-prevailing”—if a majority of workers is paid under a CBA—or “nonunion-prevailing” rates. A prevailing wage rate for a job classification is only issued if there are sufficient data to make a determination. For data to be sufficient, Labor must receive wage information on at least three employees from at least two contractors for that job classification. If Labor receives sufficient data based on information collected at the county level for a job classification, a prevailing wage rate is determined using data from a single county. If data are insufficient at the county level, Labor includes data from federal projects in that county. If data are still insufficient, Labor includes data from contiguous counties, combined in “groups” or “supergroups” of counties, until data are sufficient to make a prevailing wage determination. Expansion to include other counties, if necessary, may continue until data from all counties in the state are combined. However, Labor’s regulations require wage data from projects in metropolitan and rural counties be separated when determining prevailing wages. For metropolitan counties, data are combined with data from one or more counties within the metropolitan statistical area, while data from rural counties are combined with data from other rural counties. Once the prevailing wage rates have been calculated, the regional offices transmit survey results to headquarters for final review. Labor headquarters issues wage determinations after reviewing recommended wage rates submitted by the regional offices. The prevailing wage rates are transmitted electronically to the WDGS for publication online at www.wdol.gov, where they are publicly available. Labor sometimes modifies wage determinations to keep them current or correct errors. Generally, modifications affect a limited number of job classifications within a wage determination. 
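The sufficiency test and the stepwise geographic expansion described above can be sketched as follows. The expansion stages are a simplified model of the county, county-plus-federal-projects, county-group, and statewide progression; the metropolitan/rural separation Labor's regulations require is omitted for brevity.

```python
# The data-sufficiency rule: wage information on at least three workers from
# at least two contractors. If insufficient, Labor widens the geographic pool
# stage by stage until the test passes (simplified sketch).
def is_sufficient(observations):
    """observations: list of (contractor_id, worker_count) pairs."""
    workers = sum(n for _, n in observations)
    contractors = len({c for c, _ in observations})
    return workers >= 3 and contractors >= 2

def widen_until_sufficient(stages):
    """stages: ordered list of (label, observations); data accumulate as the
    pool widens from a single county toward the whole state."""
    pooled = []
    for label, obs in stages:
        pooled += obs
        if is_sufficient(pooled):
            return label, pooled
    return None, pooled  # insufficient even statewide

stages = [
    ("county",         [("A", 2)]),   # 2 workers, 1 contractor: insufficient
    ("county+federal", []),           # nothing added: still insufficient
    ("county group",   [("B", 1)]),   # 3 workers, 2 contractors: sufficient
    ("statewide",      [("C", 5)]),
]
print(widen_until_sufficient(stages)[0])  # -> 'county group'
```

This mirrors the pattern GAO reported in its summary: because single-county data are often insufficient, most published rates end up being issued at the multi-county or state level.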
If a prevailing wage rate is not provided for a specific job classification in a wage determination, a contractor may request a rate for that classification, known as a conformance, through the contracting agency overseeing the specific project. The rate determined in the conformance process only applies to workers in that classification for the contract in question. Any interested party may request reconsideration and review of Labor’s wage determinations. The regional offices accept initial inquiries after a wage determination has been issued. Any interested party may request reconsideration from headquarters in writing and include any relevant information, such as wage payment data or project descriptions, to assist with the review. Labor’s regulations state that the Wage and Hour Division Administrator will generally respond within 30 days of receipt of the request. If the interested party’s request for reconsideration is denied, the interested party may file an appeal with Labor’s Administrative Review Board, which consists of three members appointed by the Secretary of Labor. All decisions by the Administrative Review Board are final. Any new wage determination resulting from such an appeal must be issued prior to the award of the contract in question, or before the start of construction if there is no award. In addition to the contact named above, the following staff made key contributions to this report: Gretta L. Goodwin, Assistant Director and Amy Anderson, analyst-in-charge, managed all aspects of this assignment; and Brenna Guarneros, analyst, made significant contributions to all phases of the work. In addition, John J. 
Barrett, analyst, made significant contributions to design and data collection; Christopher Zbrozek, intern, assisted in data collection and analysis; Walter Vance, Melinda Cordero, and Carl Barden provided assistance in designing the study and conducting data analysis; Susan Aschoff assisted in message and report development; Mimi Nguyen created the report’s graphics; Alexander Galuten provided legal advice; Erin Godtland, Barbara Steel-Lowney, and Yunsian Tai referenced the report; and Roshni Dave, Ronald Fecso, Kim Frankena, Mark Gaffigan, Charles A. Jeszeck, David Marroni, Mary Mohiyuddin, Stuart Ryba, David Wise, and William Woods provided guidance.

Procedures for determining Davis-Bacon prevailing wage rates, which must be paid to workers on certain federally funded construction projects, and their vulnerability to the use of inaccurate data have long been an issue for Congress, employers, and workers. In this report, GAO examined (1) the extent to which the Department of Labor (Labor) has addressed concerns regarding the quality of the Davis-Bacon wage determination process, and (2) additional issues identified by stakeholders regarding the wage determination process. GAO interviewed Labor officials, representatives from contractor associations and unions, contractors, and researchers; conducted site visits to three Labor regional offices; and analyzed data from Labor's wage survey database. Recent efforts to improve the Davis-Bacon wage survey have not addressed key issues with timeliness, representativeness, and the utility of using the county as the basis for the wage calculation. Labor has made some data collection and processing changes; however, we found some surveys initiated under the new processes were behind Labor's processing schedule. Labor did not consult survey design experts, and some criticisms of the survey and wage determination process have not been addressed, including the representativeness and sufficiency of the data collected.
For example, Labor cannot determine whether its wage determinations accurately reflect prevailing wages because it does not currently calculate response rates or analyze survey nonrespondents. And, while Labor is required by law to issue wage rates by the "civil subdivision of the state," the goal to issue them at the county level is often not met because of insufficient survey response. In the published results for the four surveys in our review, Labor issued about 11 percent of wage rates for key job classifications (types of workers needed for one or more of Labor's construction types) using data from a single county. The rest were issued at the multi-county or state level. Over one-quarter of the wage rates were based on six or fewer workers. Little incentive to participate in Labor's Davis-Bacon wage surveys and a lack of transparency in the survey process remain key issues for stakeholders. Stakeholders said contractors may not participate because they lack resources, may not understand the purpose of the survey, or may not see the point in responding because they believe the prevailing wages issued by Labor are inaccurate. While most stakeholders said the survey form was generally easy to understand, some identified challenges with completing specific sections. Our review of reports by Labor's contracted auditor for four published surveys found most survey forms verified against payroll data had errors in areas such as number of employees and hourly and fringe benefit rates. Both contractor association and union officials said addressing a lack of transparency in how the published wage rates are set could result in a better understanding of the process and greater participation in the survey. GAO suggests Congress consider amending its requirement that Labor issue wage rates by civil subdivision to allow more flexibility. 
To improve the quality and timeliness of the Davis-Bacon wage surveys, GAO recommends Labor obtain objective expert advice on its survey design and methodology. GAO also recommends Labor take steps to improve the transparency of its wage determinations. Labor agreed with the second recommendation, but said obtaining expert survey advice may be premature given ongoing changes. We believe obtaining expert advice is critical for improving the quality of wage determinations. |
Operating barriers continue to limit competition and contribute to higher airfares in several key markets in the upper Midwest and East. In some cases, these barriers have grown worse. As a result, our October 1996 report recommended that DOT take actions that we originally suggested in 1990 and highlighted areas for potential congressional action. The report specifically addressed the effects of slots, perimeter rules, exclusive-use gate leases, and marketing strategies developed by the established airlines since airline deregulation. To reduce congestion, FAA has since 1969 limited the number of takeoffs and landings that can occur at O’Hare, National, LaGuardia, and Kennedy. By allowing new airlines to form and established airlines to enter new markets, deregulation increased the demand for access to these airports. Such increased demand complicated FAA’s efforts to allocate takeoff and landing slots equitably among the airlines. To minimize the government’s role in the allocation of slots, DOT amended its rules in 1985 to allow airlines to buy and sell them to one another. Under this “Buy/Sell Rule,” DOT grandfathered slots to the holders of record as of December 16, 1985. Emphasizing that it still owned the slots, however, DOT randomly assigned each slot a priority number and reserved the right to withdraw slots from the incumbents at any time. In August 1990, we reported that a few established carriers had built upon the favorable positions they inherited as a result of grandfathering to such an extent that they could limit access to routes beginning or ending at any of the slot-controlled airports. In October 1996, we reported that this level of control over slots by a few established airlines had increased even further (see app. I). As a result, little new entry has occurred at these airports, which are crucial to establishing new service in the heavily traveled eastern and midwestern markets. 
Recognizing the need for new entry at the slot-controlled airports, the Congress in 1994 created an exemption provision to allow for entry at O’Hare, LaGuardia, and Kennedy in cases where DOT “finds it to be in the public interest and the circumstances to be exceptional.” However, the exemption authority, which in effect allows DOT to issue new slots, has resulted in little new entry because DOT has interpreted the “exceptional circumstances” criterion very narrowly. DOT has only approved applications to provide service in markets not receiving nonstop service, even if the new service would result in substantial competitive benefits. We found no congressional guidance, however, to support this interpretation. As a result, we suggested in our October 1996 report that the Congress may wish to revise the extraordinary circumstance provision so that consideration of competitive benefits is a key criterion. Nevertheless, we indicated that action by the Congress would be needed only if DOT did not act. In our 1990 report, we had suggested several options to DOT aimed at promoting entry at the slot-controlled airports. These options included keeping the Buy/Sell Rule but periodically withdrawing a portion of slots that were grandfathered to the major incumbents and reallocating them by lottery. Because DOT had not acted on any of our suggestions and the situation had continued to worsen, we recommended in our October 1996 report that DOT hold periodic slot lotteries. At LaGuardia and National airports, perimeter rules prohibit incoming and outgoing flights that exceed 1,500 and 1,250 miles, respectively. The perimeter rules are designed to promote Kennedy and Dulles airports as the long-haul airports for the New York and Washington metropolitan areas. 
However, the rules limit the ability of airlines based in the West to compete because those airlines are not allowed to serve LaGuardia and National—airports that are generally preferred by more lucrative business travelers—from markets where they are strongest. For example, the rules keep the second largest airline started after deregulation—America West—from serving those airports from its hub in Phoenix. By contrast, because of their proximity to LaGuardia and National, each of the seven largest established carriers is able to serve those airports from its principal hubs. While the limit at LaGuardia was established by the Port Authority of New York & New Jersey, National’s perimeter rule is federal law. Thus, we suggested that the Congress consider granting DOT the authority to allow exemptions to the perimeter rule at National when proposed service will substantially increase competition. We did not recommend that the rule be abolished because removing it could have unintended negative consequences, such as reducing the amount of service to smaller communities in the Northeast and Southeast. This could happen if major slot holders at National shift their service from smaller communities to take advantage of more profitable, longer-haul routes. As a result, we concluded that a more prudent course for increasing competition at National would be to examine proposed new services on a case by case basis. Opportunities for establishing new or expanded service also continue to be limited at other airports by restrictive gate leases. These leases permit an airline exclusive rights to use most of an airport’s gates over a long period of time, commonly 20 years. Such long-term, exclusive-use gate leases prevent nonincumbents from securing necessary airport facilities on equal terms with incumbent airlines.
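The perimeter limits described above amount to a simple distance check: nonstop flights longer than 1,500 miles are barred at LaGuardia and longer than 1,250 miles at National. The Phoenix mileages below are approximate great-circle figures, not from the testimony.

```python
# The two perimeter rules as an eligibility check. Airports without a
# perimeter rule (e.g., Kennedy, Dulles) have no distance limit.
PERIMETER_MILES = {"LGA": 1500, "DCA": 1250}

def within_perimeter(airport, distance_miles):
    limit = PERIMETER_MILES.get(airport)
    return limit is None or distance_miles <= limit

# Phoenix is roughly 2,100 miles from LaGuardia and 1,980 from National,
# which is why America West's Phoenix hub could serve neither airport.
print(within_perimeter("LGA", 2100))  # -> False
print(within_perimeter("DCA", 1980))  # -> False
```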
To gain access to an airport in which most gates are exclusively leased, a nonincumbent must sublet gates from the incumbent airlines—often at non-preferred times and at a higher cost than the incumbent. Since our 1990 report, some airports, such as Los Angeles International, have attempted to regain more control of their facilities by signing less restrictive, shorter-term leases once the exclusive-use leases expired. Nevertheless, our 1996 report identified several airports in which entry was limited because most of the gates were under long-term, exclusive use leases with one airline. Although the development, maintenance, and expansion of airport facilities is essentially a local responsibility, most airports are operated under federal restrictions that are tied to the receipt of federal grant money from FAA. In our 1990 report, we suggested that one way to alleviate the barrier created by exclusive-use gate leases would be for FAA to add a grant restriction that ensures that some gates at an airport would be available to nonincumbents. Because many airports have taken steps since then to sign less restrictive gate leases, we concluded in our 1996 report that such a broad grant restriction was not necessary. However, to address the remaining problem areas, we recommended that when disbursing airport improvement grant monies, FAA give priority to those airports that do not lease the vast majority of their gates to one airline under long-term, exclusive-use terms. Figure 1 shows the six gate-constrained airports that we identified and the four slot-controlled airports. All of them are located in the East or upper Midwest, and as a result, affect competition throughout those regions. In 1995, these 10 airports accounted for approximately 22 percent of the nation’s 517 million scheduled passenger enplanements. 
Even where airport access is not a problem, airlines sometimes choose not to enter new markets because certain marketing strategies of incumbent airlines make it extremely difficult for them to attract traffic. Taken together, these strategies have created strong loyalties among passengers and travel agents and have made it much more difficult for competing airlines to enter new markets. In particular, they deter new as well as established airlines from entering those markets where an established airline is dominant. Two strategies in particular—booking incentives for travel agents and frequent flier plans—are targeted at business flyers, who represent the most profitable segment of the industry, and encourage them to use the dominant carrier in each market. Because about 90 percent of business travel is booked through travel agencies, airlines strive to influence the agencies’ booking patterns by offering special bonus commissions as a reward for booking a targeted proportion of passengers on their airline. Our discussions with representatives of the nation’s largest travel agencies confirmed the importance of these booking incentives. For example, a senior travel agency executive told us that when one established airline attempted to enter a number of markets dominated by another established airline, the nonincumbent complained that the travel agency was not booking passengers on its flights in those markets. The travel agency, according to the executive, told the nonincumbent that it could not support it in those markets because the agency had an incentive agreement with the incumbent airline involving those markets. As a result, the nonincumbent later pulled out of those markets. Similarly, frequent flier programs solidify the dominant carrier’s position in a market. Since their inception in the early 1980s, these programs have become an increasingly effective tool to encourage customers’ loyalty to a particular airline. 
The travel agencies with whom we spoke noted that business travelers often request to fly only on the airline with which they have a frequent flier account. As such, entry by new and established airlines alike into a market dominated by one carrier is very difficult, particularly since a potential entrant must announce its schedule and fares well in advance of beginning service, thus giving the incumbent an opportunity to adjust its marketing strategies. In many cases, we found that airlines have chosen not to enter, or quickly exit, markets where they do not believe they can overcome the combined effect of booking incentives and frequent flier programs and attract a sufficient amount of business traffic. In our 1996 report, we found that the effect of these marketing strategies tends to be the greatest—and fares the highest—in markets where the dominant carrier’s position is protected by operating barriers. Overall, fares were 31 percent higher in 1995 at the 10 airports affected by the operating barriers than at the other 33 airports that comprise FAA’s large hub classification. Moreover, the highest fares were at Charlotte, Cincinnati, Pittsburgh, and Minneapolis—markets where a single airline accounts for over 75 percent of passengers and operating barriers persist. However, we also noted that the marketing strategies produced consumer benefits, such as free frequent flier trips, and concluded that short of an outright ban, few policy options existed that would mitigate the marketing strategies’ negative impact on new entry. In its January 1997 response to our report, DOT stated that it shared our concerns that barriers to entry limit competition in the airline industry. The agency indicated that it would include competitive benefits as a factor when determining whether to grant slots to new entrants under the exceptional circumstances criterion. 
While this is a positive step, additional action will likely be needed because the number of new slots that DOT can grant is very limited. Recognizing this, DOT committed to giving careful consideration to our recommendation that it hold periodic slot lotteries. DOT also agreed with our position that action may be needed at some airports to ensure that nonincumbents are able to obtain competitive access to gates. However, DOT did not concur with our recommendation that FAA make an airport’s efforts to have gates available to nonincumbents a factor in its decisions on awarding federal grants to airports. According to DOT, the number of airports that we identified as presenting gate access problems is sufficiently small that the agency would prefer to address those problems on a case by case basis. The agency emphasized that in cases where incumbent airlines are alleged to have used their contractual arrangements with local airport authorities to block new entry, the agency will investigate to determine whether the behavior constituted an unfair or deceptive practice or an unfair method of competition. If so, the agency noted, it will take appropriate action. Finally, DOT expressed concern about potentially overly aggressive attempts by some established carriers to thwart new entry. According to DOT officials, since our report, several smaller carriers have complained to DOT that larger carriers are employing anticompetitive practices, such as predatory pricing—the practice of setting fares below marginal cost in an effort to drive competitors out of markets. According to DOT officials, the agency has expressed its concern to the established carriers involved and has notified them that it is investigating the allegations. Mr. Chairman, this concludes our prepared statement. We would be glad to respond to any questions that you or any member of the Subcommittee may have. 
Airline Deregulation: Barriers to Entry Continue to Limit Competition in Several Key Domestic Markets (GAO/RCED-97-4, Oct. 18, 1996).
Changes in Airfares, Service, and Safety Since Airline Deregulation (GAO/T-RCED-96-126, Apr. 25, 1996).
Airline Deregulation: Changes in Airfares, Service, and Safety at Small, Medium-Sized, and Large Communities (GAO/RCED-96-79, Apr. 19, 1996).
Airline Competition: Essential Air Service Slots at O'Hare International Airport (GAO/RCED-94-118FS, Mar. 4, 1994).
Airline Competition: Higher Fares and Less Competition Continue at Concentrated Airports (GAO/RCED-93-171, July 15, 1993).
Airline Competition: Options for Addressing Financial and Competition Problems, Testimony Before the National Commission to Ensure a Strong Competitive Airline Industry (GAO/T-RCED-93-52, June 1, 1993).
Computer Reservation Systems: Action Needed to Better Monitor the CRS Industry and Eliminate CRS Biases (GAO/RCED-92-130, Mar. 20, 1992).
Airline Competition: Effects of Airline Market Concentration and Barriers to Entry on Airfares (GAO/RCED-91-101, Apr. 26, 1991).
Airline Competition: Weak Financial Structure Threatens Competition (GAO/RCED-91-110, Apr. 15, 1991).
Airline Competition: Fares and Concentration at Small-City Airports (GAO/RCED-91-51, Jan. 18, 1991).
Airline Deregulation: Trends in Airfares at Airports in Small and Medium-Sized Communities (GAO/RCED-91-13, Nov. 8, 1990).
Airline Competition: Industry Operating and Marketing Practices Limit Market Entry (GAO/RCED-90-147, Aug. 29, 1990).
Airline Competition: Higher Fares and Reduced Competition at Concentrated Airports (GAO/RCED-90-102, July 11, 1990).
Airline Deregulation: Barriers to Competition in the Airline Industry (GAO/T-RCED-89-65, Sept. 20, 1989).
Airline Competition: DOT's Implementation of Airline Regulatory Authority (GAO/RCED-89-93, June 28, 1989).
Airline Service: Changes at Major Montana Airports Since Deregulation (GAO/RCED-89-141FS, May 24, 1989).
Airline Competition: Fare and Service Changes at St. Louis Since the TWA-Ozark Merger (GAO/RCED-88-217BR, Sept. 21, 1988).
Competition in the Airline Computerized Reservation Systems (GAO/T-RCED-88-62, Sept. 14, 1988).
Airline Competition: Impact of Computerized Reservation Systems (GAO/RCED-86-74, May 9, 1986).
Airline Takeoff and Landing Slots: Department of Transportation's Slot Allocation Rule (GAO/RCED-86-92, Jan. 31, 1986).
Deregulation: Increased Competition Is Making Airlines More Efficient and Responsive to Consumers (GAO/RCED-86-26, Nov. 6, 1985).

GAO discussed competition in the domestic airline industry, focusing on: (1) barriers to entry in the airline industry; and (2) the Department of Transportation's (DOT) response to the recommendations in GAO's October 1996 report.
GAO noted that: (1) in its October 1996 report, GAO stated that little progress has been achieved in lowering the barriers to entry since GAO first reported on these barriers in 1990; (2) as a result, the full benefits of airline deregulation have yet to be realized; (3) in particular, operating limits in the form of slot controls, restrictive gate leasing arrangements, and perimeter rules continue to block entry at key airports in the East and upper Midwest; (4) several marketing strategies give advantages to the established carriers; (5) these strategies, taken together, continue to deter new as well as established airlines from entering those markets where an established airline is dominant; (6) these strategies' effect tends to be greatest, and airfares the highest, in markets where the dominant carrier's position is protected by operating barriers; (7) GAO recommended that DOT take actions that GAO had previously suggested in 1990 to lower the operating barriers; (8) moreover, GAO suggested that, absent action by DOT, Congress may wish to consider revising the legislative criteria that govern DOT's granting of slots to new entrants and consider granting DOT the authority to allow exemptions to National Airport's perimeter rule to increase competition; (9) DOT concurred with GAO's recent findings and expressed concern about "overly aggressive" attempts by established airlines to thwart new entry; (10) to make it easier for new entrants to obtain slots, DOT indicated that it would revise its restrictive interpretation of the legislative criteria governing the granting of new slots; (11) while this is a positive step, additional action will likely be needed because the number of new slots that DOT can grant is very limited; (12) in its report, GAO also recommended that DOT create a pool of available slots by periodically withdrawing a small percentage from the major incumbents at each airport and distribute those slots in a fashion that increases competition; (13)
DOT indicated that it is still considering this action; (14) DOT did not agree with GAO's recommendation that the Federal Aviation Administration consider an airport's efforts to make gates available to nonincumbents when making federal airport grant decisions; (15) DOT said that it would rather address this issue on a case-by-case basis as problems are brought to its attention; and (16) in light of the lack of progress over the past 7 years, however, GAO believes that its recommendations, combined with GAO's suggestions for potential congressional action, offer prudent steps to promote competition in regions that have not experienced the benefits of airline deregulation.
Throughout this century, railroads have been a primary mode of transportation for many products, especially for such bulk commodities as coal and grain. Yet, by the 1970s American freight railroads were in a serious financial decline. The Congress responded by passing landmark legislation in 1976 and 1980 that reduced rail regulation and encouraged a greater reliance on competition to set rates. Railroads also continued a series of combinations to reduce costs, increase efficiencies, and improve their financial health. In 1995, the Congress abolished the Interstate Commerce Commission (ICC)—the federal agency responsible for overseeing rates, competition, and service in the rail industry—and replaced it with the Surface Transportation Board (the Board). Rail shippers and others have expressed concern about the lack of competition in the railroad industry, the extent to which railroads are using their market power to set rates, and the quality of service provided, especially for those shippers with fewer alternatives to rail transportation to move their goods to market. They have also questioned whether the Board is adequately protecting shippers against unreasonable rates and service. By the 1970s, America’s railroads were in serious financial trouble. In a 1978 report to the Congress, the U.S. Department of Transportation (DOT) indicated that in 1976, 11 of 36 Class I railroads studied were earning negative rates of return on investment, and at least 3 railroads were in reorganization under the bankruptcy laws. Some of the railroads’ problems were due to federal regulation of rates that reduced management control and the flexibility railroads needed to react to changing market conditions. Prior to 1976, almost all rail rates were subject to ICC oversight to ensure they were reasonable. 
The Congress sought to improve the financial health of the rail industry by reducing railroad rate regulation and encouraging a greater reliance on competition to set reasonable rail rates. The Congress did so by passing two landmark pieces of legislation—the Railroad Revitalization and Regulatory Reform Act of 1976 (4R Act) and the Staggers Rail Act of 1980. The 4R Act limited the ICC’s authority to regulate rates to those instances where there was an absence of effective competition—that is, where a railroad is “market dominant.” Furthermore, the Staggers Rail Act made it federal policy to rely, where possible, on competition and the demand for rail services (called differential pricing) to establish reasonable rates. Among other things, this act also allowed railroads to market their services more effectively by negotiating transportation contracts (generally offering reduced rates in return for guaranteed volumes) containing confidential terms and conditions; limited collective rate setting to those railroads actually involved in a joint movement of goods; and permitted railroads to change their rates without challenge in accordance with a rail cost adjustment factor. Furthermore, both the 4R Act and the Staggers Rail Act required the ICC (now the Board) to exempt certain railroad transportation from economic regulation. The Staggers Rail Act required ICC to exempt railroad transportation from regulation upon finding that the regulation was not necessary to carry out the rail transportation policy and either (1) the transaction was of limited scope or (2) regulation was not needed to protect shippers from an abuse of market power. During the 1980s, railroads used their increased freedoms to improve their financial health and competitiveness. The railroad industry has continued to consolidate in the last 2 decades, a condition that has been occurring since the 19th century. 
In 1976, there were 30 independent Class I railroad systems (comprised of 63 Class I railroads); by early 1999, there were 9 railroad systems (comprised of 9 Class I railroads), and half of that reduction was due to consolidations. (See fig. 1.1.) The nine remaining Class I railroad systems are the Burlington Northern and Santa Fe Railway Co.; Consolidated Rail Corporation (Conrail); CSX Transportation, Inc.; Grand Trunk Western Railroad, Inc.; Illinois Central Railroad Co.; Kansas City Southern Railway Co.; Norfolk Southern Railroad Co.; Soo Line Railroad Co.; and Union Pacific Railroad Co. In 1998, the Board approved the division of Conrail's assets between CSX Transportation, Inc., and Norfolk Southern Corporation. Conrail is expected to be formally absorbed by CSX Transportation and Norfolk Southern in 1999, leaving a total of eight Class I railroad systems. Railroads consolidated to reduce costs and increase efficiencies, making them more competitive. For example, one of the justifications for the 1995 Burlington Northern-Santa Fe merger was to provide shippers with more efficient and cost-effective "single line" service. Both the Board and the railroads involved expected reduced costs and improved transit times because the railroad on which a shipment originated would no longer have to transfer the shipment to another railroad for routing to its final destination. Cost reductions and increased efficiencies were also expected from, among other things, rerouting of traffic over shorter routes, more efficient use of equipment, and increased traffic densities. Consolidations were also justified as providing competitive benefits—both within the rail industry and between railroads and other transportation modes. For example, the Board in its 1996 approval of the Union Pacific/Southern Pacific merger expected the merger would intensify rail competition in the West between Burlington Northern and Santa Fe Railway and the combined Union Pacific/Southern Pacific.
The acquisition of Conrail by Norfolk Southern and CSX Transportation is expected to yield benefits—both by diverting substantial amounts of highway freight traffic to railroads and by introducing new railroad-to-railroad competition in those areas previously served only by Conrail. As Class I railroads consolidated, non-Class I railroads increased their importance in providing service. For example, in 1980, Kansas was served by seven Class I railroads (see fig. 1.2); in 1997, this number was three. Between 1991 and 1996, Class I railroads reduced their mileage operated in the state by about 1,400 miles while non-Class I carriers increased their mileage by about 1,700 miles (175 percent greater than in 1991). (App. I shows how Class I and non-Class I rail mileage changed in Montana, North Dakota, and West Virginia from 1980 to 1997.) In 1995, the Congress passed the ICC Termination Act of 1995, which abolished the ICC. The act transferred many of ICC’s core rail functions and certain nonrail functions to the Board, a decisionally independent adjudicatory agency that is administratively housed in DOT. Among other things, the Board approves market entry and exit of railroads; approves railroad mergers and consolidations; determines the adequacy of a railroad’s revenues on an annual basis; adjudicates complaints concerning rail rates on traffic over which a railroad has market dominance; adjudicates complaints alleging that carriers have failed to provide service upon reasonable request; and exempts railroad transportation from economic regulation under certain circumstances. The ICC Termination Act made several significant changes to railroad regulation. For example, the act eliminated the requirement for railroad tariff filings. 
However, the act did not alter railroads' authority to engage in demand-based differential pricing or to negotiate transportation service contracts containing confidential terms and conditions that are beyond the Board's authority while in effect. Several of the Board's functions are particularly relevant to this report: the (1) responsibility for determining the adequacy of a railroad's revenues, (2) jurisdiction over rail rate complaints, and (3) jurisdiction over complaints alleging that carriers have failed to provide service upon reasonable request. First, the Board is required to determine the adequacy of railroad revenues on an annual basis. In addition, the Board is required to make an adequate and continuing effort to assist railroads in attaining adequate revenues—that is, revenues that under honest, economical, and efficient management cover total operating expenses plus a reasonable and economic profit on capital employed in the business. Second, the Board is also responsible for protecting shippers without feasible transportation alternatives from unreasonably high rail rates. Where the Board concludes that a challenged rate is unreasonable, it may order the railroad to pay reparations on past shipments and prescribe maximum rates for future shipments. The Board does not have authority over rail rates for car movements made under contracts or for movements that it has exempted from economic regulation. Only about 18 percent of the tonnage moved in 1997 was subject to rate reasonableness regulation by the Board. The remainder was either moved under contract (70 percent), according to the Association of American Railroads (AAR), or was exempt from economic regulation (12 percent). Furthermore, rates on rail traffic priced below the 180-percent revenue-to-variable cost threshold are not subject to regulation by the Board. According to the Board, over 70 percent of all rail traffic in 1997 was priced below this threshold.
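As a rough illustration of the jurisdictional screen described above, a movement's revenue-to-variable-cost (R/VC) ratio can be computed and compared against the 180-percent threshold. The dollar figures below are hypothetical, and a ratio at or above the threshold establishes only that the Board may review the rate, not that the rate is unreasonable.

```python
# Revenue-to-variable-cost (R/VC) jurisdictional screen. A movement
# priced below 180 percent of variable cost is outside the Board's
# rate jurisdiction; at or above the threshold, a shipper must still
# demonstrate market dominance before a rate can be found unreasonable.
# Dollar figures in the demo are hypothetical.

def revenue_to_variable_cost_ratio(revenue, variable_cost):
    """R/VC ratio expressed as a percentage."""
    return 100.0 * revenue / variable_cost

def within_board_jurisdiction(revenue, variable_cost, threshold=180.0):
    """True if the movement's R/VC ratio meets or exceeds the threshold."""
    return revenue_to_variable_cost_ratio(revenue, variable_cost) >= threshold

# Hypothetical movement: $1,500 in revenue against $700 in variable cost
print(round(revenue_to_variable_cost_ratio(1500, 700), 1))  # roughly 214 percent
print(within_board_jurisdiction(1500, 700))
```

On these hypothetical figures the ratio exceeds 180 percent, so the movement would fall within the Board's rate jurisdiction; a movement earning $1,000 on the same variable cost would not.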
Third, the Board has the authority to adjudicate service complaints filed by shippers. The Board’s process for handling formal service complaints, like its rate complaint process, is an administrative litigation process, in which parties to the dispute file pleadings, disclose and receive information from each other, and present evidence. If the Board decides a case in favor of the complainant, it can require the carrier to provide the shipper with monetary compensation or to adopt or stop a practice. Moreover, the Board is authorized to impose “competitive access” remedies, under which shippers can obtain access to an alternative carrier. However, to obtain permanent relief, the complaining shipper must demonstrate that the rail carrier currently providing the service (called the incumbent carrier) has engaged in anticompetitive conduct—that is, the carrier has used its market power to extract unreasonable terms or, because of its monopoly position, has disregarded the shipper’s needs by not providing adequate service. As discussed in chapter 5, the Board also has other procedures for providing temporary relief from service inadequacies without a showing of anticompetitive conduct where the carrier is not providing adequate service. The Board may also address service deficiencies through emergency service orders. The Board may issue an emergency service order if it determines that a failure of traffic movement has created an emergency situation that has a substantial impact on shippers or railroad service in a region or that a railroad cannot transport traffic in a manner that properly serves the public. Through emergency service orders, the Board may, among other things, permit the operation of one rail carrier over another carrier’s line to improve the flow of traffic. The Board may also direct a rail carrier to operate the lines of a carrier that has ceased operations. These arrangements may not exceed 270 days. 
Since 1990, the ICC and the Board have issued eight emergency service orders; prior to its termination, the ICC, in five of these instances, directed a carrier to operate the lines of another railroad. Senators Conrad Burns, Byron Dorgan, Pat Roberts, and John D. Rockefeller, IV, expressed concern that the continued consolidation within the rail industry has allowed railroads to charge unreasonably high rates and provide poor service. The Senators asked us to report on (1) how the environment within which rail rates are set has changed since 1990; (2) how rates for users of rail transportation have changed since 1990; (3) how railroad service quality has changed since 1990; and (4) what actions, if any, the Board and others have taken (or propose to take) to address rail rate and service quality issues. The requesters also asked us to identify difficulties and barriers for shippers, including small shippers, in obtaining relief from unreasonable rates from the Board. We addressed this latter topic and actions that the Board and others have taken to address rail rate issues in our companion report on issues associated with the Board’s rate relief process. To identify how the environment within which rail rates have been set has changed since 1990, we reviewed (1) legislation regarding the economic regulation of railroads, (2) regulations and decisions issued by ICC or the Board regarding rail rate and service issues, and (3) literature available in professional journals and trade publications. We also used reports we have issued on various aspects of the railroad industry and the Staggers Rail Act of 1980 and reviewed selected position papers prepared by railroad and shipper trade associations. 
To identify the economic and financial status of railroads in the 1990s, we collected information available from various AAR surveys of Class I railroads on the percent of railroad tonnage moved under contract and collected financial information from ICC’s Transport Statistics in the United States, the Board’s Statistics of Class I Freight Railroads in the United States, and AAR’s Railroad Facts. We also obtained information on the amount of intercity freight tonnage transported in the United States annually by transportation mode from Transportation In America, published by the Eno Transportation Foundation, Inc. To identify structural changes in the railroad industry since 1990, we reviewed information from AAR on Class I status, information on railroad industry combinations, and reviewed ICC’s and the Board’s decisions in selected railroad merger cases. To identify how railroad rates have changed since 1990, we obtained data from the Board’s Carload Waybill Sample for the years 1990 through 1996 (latest data available at the time of our review). The Carload Waybill Sample is a sample of railroad waybills (in general, documents prepared from bills of lading authorizing railroads to move shipments and collect freight charges) submitted by railroads annually. We used these data to obtain information on rail rates for specific commodities in specific markets by shipment size and length of haul. According to Board officials, revenues derived from the Carload Waybill Sample are not adjusted for such things as year-end rebates and refunds that may be provided by railroads to shippers that exceed certain volume commitments. Some railroad movements contained in the Carload Waybill Sample are governed by contracts between shippers and railroads. To avoid disclosure of confidential business information, the Board disguises the revenues associated with these movements prior to making this information available to the public. 
Using our statutory authority to obtain agency records, we obtained a version of the Carload Waybill Sample that did not disguise revenues associated with railroad movements made under contract. Therefore, the rate analysis presented in this report presents a truer picture of rail rate trends than analyses that may be based solely on publicly available information. The specific commodities selected for analysis were coal, grain (wheat and corn), chemicals (potassium and sodium compounds and plastic materials or synthetic fibers, resins, and rubber), and transportation equipment (finished motor vehicles and motor vehicle parts and accessories). These commodities represented about 45 percent of total industry revenue in 1996 and, in some cases, had a significant portion of their rail traffic transported where the ratio of revenue to variable costs equaled or exceeded 180 percent. Since much of the information contained in the Carload Waybill Sample is confidential, rail rates and other data contained in this report that were derived from this data base have been aggregated at a level sufficient to protect this confidentiality. We used rate indexes and average rates on selected corridors to measure rate changes over time. A rate index attempts to measure price changes over time by holding constant the underlying collection of items that are consumed (in the context of this report items shipped). This approach differs from comparing average rates in each year because over time higher- or lower-priced items can constitute different shares of the items consumed. Comparing average rates can confuse changes in prices with changes in the composition of the goods consumed. In the context of railroad transportation, rail rates and revenues per ton-mile are influenced, among other things, by average length of haul. Therefore, comparing average rates over time can be influenced by changes in the mix of long-haul and short-haul traffic. 
Our rate indexes attempted to control for the distance factor by defining the underlying traffic collection to be commodity flows occurring in 1996 between pairs of Census regions. To examine the rate trends on specific traffic corridors, we first chose a level of geographic aggregation for corridor endpoints. For grain, chemical, and transportation equipment traffic, we defined endpoints to be regional economic areas defined by the Department of Commerce’s Bureau of Economic Analysis. For coal traffic, we used economic areas to define destinations and used coal supply regions—developed by the Bureau of Mines and used by the Department of Energy—to define origins. An economic area is a collection of counties in and about a metropolitan area (or other center of economic activity); there are 172 economic areas in the United States and each of the 3,141 counties in the country is contained in an economic area. For each selected commodity and each corridor, we determined the average shipment distance over the 1990 through 1996 time period. We placed each corridor in one of three distance-related categories: 0-500 miles, 501-1,000 miles, and more than 1,000 miles. We then determined, for each selected commodity, the aggregate tonnage over the 1990 through 1996 time period and selected the top five corridors (based on tons shipped) within each distance category for further examination, including changes in revenues and variable costs per ton-mile over the time period. To assess how railroad service quality has changed since 1990, we (1) reviewed literature on how railroad service is (or can be) measured; (2) reviewed railroad and shipper statements on the quality of rail service in recent years; and (3) interviewed Class I railroads, shipper associations, and several individual shippers. 
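The fixed-weight index construction described above can be sketched in a few lines: the 1996 traffic mix serves as a constant market basket that is repriced at each year's rates. The corridors, tonnages, and rates below are hypothetical.

```python
# Fixed-weight (Laspeyres-style) rate index: reprice a constant 1996
# traffic mix at each year's rates, so the index reflects rate changes
# rather than shifts between long-haul and short-haul traffic.
# Corridors, tonnages, and rates are hypothetical.

base_tons = {  # 1996 tonnage by (corridor, commodity) -- the fixed basket
    ("corridor_a", "coal"): 500.0,
    ("corridor_b", "coal"): 200.0,
}

rates = {  # revenue per ton by year (hypothetical)
    1990: {("corridor_a", "coal"): 10.0, ("corridor_b", "coal"): 20.0},
    1996: {("corridor_a", "coal"): 9.0, ("corridor_b", "coal"): 21.0},
}

def rate_index(year, base_year=1990):
    """Cost of shipping the fixed 1996 tonnage at `year` rates,
    expressed relative to base-year rates (base year = 100)."""
    def cost(yr):
        return sum(rates[yr][flow] * tons for flow, tons in base_tons.items())
    return 100.0 * cost(year) / cost(base_year)

print(round(rate_index(1996), 1))  # below 100: rates fell on the fixed mix
```

Note how the fixed basket matters: if the low-rate corridor's share of traffic had grown, a simple average of rates paid would overstate the decline, while the index isolates the pure price change.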
To obtain a wider perspective on shippers’ views about the quality of service they have received and how it might be improved, we sent a questionnaire to members of 11 commodity associations that ship using rail in the United States and to those shippers that had filed rate complaints before the Board. The member organizations represent shippers of the four commodities that comprised the largest volume of rail shipments—coal, chemicals, plastics, and bulk grain. For coal, chemicals, and plastics, we surveyed all members of the associations, and this report provides the views of the 87 coal shippers and 99 chemicals and plastics shippers that responded to our survey. Because we used statistical sampling techniques to obtain the views of members of one grain association, the National Grain and Feed Association, the statistics we provide relating to the views of grain shippers and of all shippers responding to our survey are presented as estimates. The report provides estimates of the views of 523 grain shippers. In all cases, these estimated 709 coal, chemicals, plastics, and grain shippers indicated that they had shipped goods by rail in at least 1 year since 1990. Some estimates presented in this report do not represent the views of 709 shippers because some shippers did not answer all the questions. For more information on how we conducted our survey, as well as responses to individual questions, see our companion report on current issues associated with the Board’s rate relief process (GAO/RCED-99-46). We also determined the number of formal service complaints that were being adjudicated by ICC on January 1, 1990, and the number that have been filed with the ICC/Board from January 1, 1990, through December 31, 1998. To do this, we asked the Board to identify all formal service complaints between these two dates. 
In order to test the completeness of the Board’s identification of service complaints, we reviewed selected cases that the Board did not consider to be service-related. We found one service complaint not contained on the Board’s original list of complaints. We discussed this complaint with Board officials, who agreed that it should be considered a formal service complaint. We did not review the merits, or appropriateness, of any ICC/Board decisions associated with these complaints. To determine actions the Board and others have taken or have proposed to take to address service issues, we interviewed officials from the Board, DOT, and U.S. Department of Agriculture (USDA); industry association officials; and officials from Class I railroads and reviewed the documents that they provided. We also reviewed statutes and regulations pertaining to service issues, recent Board decisions on service issues, and emergency and directed service orders issued by the ICC or the Board since 1990. We interviewed officials from the Board, DOT, and USDA about their recent and planned efforts to address the needs of agricultural shippers and obtained and reviewed relevant agency agreements and reports. We interviewed Class I railroad and AAR executives about, and obtained and reviewed documentation on, their 1998 meetings with shippers; efforts to develop and disseminate measures of service; agreements with grain and feed shippers and small railroads; and efforts to improve customer service. We also attended the railroad/shipper meetings held in Chicago in August 1998 and in Atlanta in October 1998. The organizations we contacted during our review are listed in appendix III. Our work was conducted from June 1998 through March 1999 in accordance with generally accepted government auditing standards. In commenting on a draft of this report, the Board noted that our map of Class I freight railroads in the United States in 1997 (fig. 
1.1) did not include trackage rights of Class I railroads over other Class I railroads, including about 4,000 miles of Burlington Northern and Santa Fe trackage rights over Union Pacific. The Board also noted that it has an informal process for handling railroad service complaints and that this process can be used to resolve service problems quickly and inexpensively. In response to these issues, we modified the note to figure 1.1 to indicate that Class I trackage rights over other Class I railroads are not shown on the map, including the 4,000 miles of Burlington Northern and Santa Fe trackage rights over Union Pacific. We also added language better recognizing the Board’s informal service complaint process. Railroads’ rate setting since 1990 has increasingly been influenced by ongoing industry and economic changes such as continued rail industry consolidation, which has concentrated the industry into fewer and bigger railroads, and the need for investment capital to address infrastructure constraints. Rail rates are also a function of market competition. Using differential pricing, railroads continued to set rates in the 1990s according to the demand for their services. Overall railroad financial health has improved during the 1990s, and railroads increased their share of the freight transportation market. However, many Class I railroads continued to earn less than what it costs them to raise capital (called the revenue adequacy standard). Ongoing industry and economic changes have influenced how railroads have set their rates. Since 1990, there has been considerable change in the rail industry and the economic environment in which it operates.
Not only has the rail industry continued to consolidate, potentially increasing market control by the largest firms, but capacity constraints have led to an increased need for capital; industry growth has raised the specter that productivity gains may moderate; and domestic and worldwide economic changes have caused fluctuations in the demand for rail transportation. Many of these changes are expected to continue into the future. Other actions are also expected to influence the rate-setting environment, including ongoing actions to deregulate the electricity generating industry. The 1990s have seen significant consolidation within the railroad industry. For the most part, this consolidation has concentrated the rail industry in fewer and larger companies and potentially increased market control by these firms. The number of independent Class I railroad systems has decreased from 13 in 1990 to 9 in early 1999. These firms control a significant portion of industry revenues as well as traffic. In 1990, the five largest railroads accounted for about 74 percent of total rail industry operating revenue. In 1997, this percentage had increased to about 94 percent. In fact, the two largest Class I railroads (Union Pacific and Burlington Northern and Santa Fe Railway) accounted for about 55 percent of total industry operating revenue. An analysis of ton-miles of revenue freight transported shows similar results. In 1990, the five largest railroads accounted for about three-fourths of total revenue ton-miles transported by the railroad industry. In 1997, the five largest railroads accounted for about 95 percent of revenue ton-miles transported. Again, the two largest Class I railroads accounted for just under two-thirds of all revenue ton-miles transported in 1997. Some shipper groups and others have expressed concerns about industry consolidation. 
For example, the Railroad-Shipper Transportation Advisory Council, created by the ICC Termination Act, reported in 1998 that, because of rail industry consolidation, some shippers have developed fears that the railroad that serves them not only dictates the terms of their relationship but also determines whether they remain economically viable. The Consumers United For Rail Equity, representing various shipper and industry trade associations, has also expressed concerns that dwindling competitive rail options resulting from industry consolidation have increased the number of shippers that consider themselves captive to railroads. Finally, the Alliance for Rail Competition, also representing various shipper and industry trade associations, has expressed concern that deteriorating rail service and the potential for monopoly rate abuse by railroads have resulted from the creation of fewer and bigger railroads. This organization believes increased competition in the railroad industry, rather than regulation, would better protect shippers against abuses. The Board plays a role in rail industry consolidation. Not only does the Board approve proposed mergers and acquisitions when it finds them in the public interest, but it also monitors them once they have been approved. As part of the review and approval process, the Board has the authority to attach conditions to a merger or acquisition. In general, these conditions are designed to protect the public against any harm that might otherwise be experienced as the result of one railroad taking over another, to protect against the potential loss of competition, and to protect affected shippers from losing essential service provided by another rail carrier. According to the Board, merger conditions are routinely imposed to ensure that any shipper that was capable of being served by more than one railroad before a merger will continue to have more than one railroad available after the merger.
These conditions typically involve granting another railroad either rights to operate on the combining railroads’ track or some form of switching rights to gain access to affected customers of the combining railroads. These conditions have been imposed in all large mergers occurring during the 1990s. Board officials have acknowledged, however, that staff and resource limitations force them to be less proactive in monitoring mergers to ensure that imposed conditions are working properly to preserve pre-merger competition. The rate-setting environment has also been increasingly affected by railroads’ infrastructure needs. Railroads have increased their market share and the amount of tonnage they carry each year. However, even with the increased demand for rail transportation, real rail rates have declined, necessitating that railroads seek ways to continue to reduce costs. Two ways such costs have been cut are through reductions in miles of road operated and in employment levels. (See figs. 2.1 and 2.2.) From 1990 to 1997, the miles of road operated by Class I railroads decreased about 15 percent (from about 119,800 miles to about 102,000 miles), and Class I employment decreased by about 18 percent (from 216,000 employees to 178,000 employees). Although reductions in miles of road operated and employment have helped to reduce costs, they have also created capacity constraints and a need for investment capital to address these constraints as the rail market has grown in recent years. Obtaining this capital has become a concern of the rail industry, particularly given falling rates and revenue trends. Some of the railroad officials we spoke with acknowledged this concern and were unsure about how this problem would be addressed. For example, officials of one Class I railroad told us that, in the future, their company would have a difficult time meeting increased market demand because of a lack of equipment and inadequate track and rail facility infrastructure.
The officials suggested that additional capital investment would be needed to address choke points—that is, sections of track and facilities that have more traffic than they can handle. However, making such investments would be difficult given falling rail rates. Officials at two other Class I railroads also expressed concern about market growth and capacity constraints and said that additional investment would be needed. The officials also agreed that this would be difficult, at best, given rail rate trends and the need to price their services to be competitive. The rate-setting environment has also been influenced by productivity gains. In particular, productivity gains have helped railroads reduce costs, which in turn has allowed railroads to reduce rates in order to be competitive. The productivity gains achieved in the 1980s have largely continued into the 1990s. (See fig. 2.3.) We looked at three measures of productivity—net ton-miles per train-hour, revenue ton-miles per gallon of fuel consumed, and revenue ton-miles per employee-hour worked. In general, each of these measures, except net ton-miles per train-hour, has increased since 1990. Net ton-miles per train-hour has fluctuated since 1990, and in 1996, was about 2 percent lower than it was in 1990. Revenue ton-miles per employee-hour worked, in particular, has shown dramatic increases since the late 1980s. Using an index based on 1980 (1980 equals 100), revenue ton-miles per employee-hour worked more than doubled from 1986 through 1996—rising from an index value of 151 to an index value of 344. According to railroad officials, most of the productivity gains achieved have been shared with customers through rate reductions. Although productivity gains have played a significant role in past rate making, there is some question as to whether these gains can continue to be achieved. One recent study suggests that the prospects for continued productivity improvements may be diminishing.
This was attributed to the expectation that, because industry consolidation has permitted significant reduction in miles of road operated and employment levels, the next round of industry consolidation and mergers (network rationalization) might yield only modest productivity benefits. If so, there may be fewer opportunities for the rail industry to rely on productivity gains to achieve cost reductions and therefore rate reductions. In fact, future productivity gains may be reduced because what was once redundant track and facilities (and therefore eliminated to reduce costs) might have to be brought back into service to meet market growth. Doing so could dampen productivity improvement. The rate-setting environment has been affected by domestic and world economic changes. This is especially true for rail commodities that are exported. For railroads, volatility in world grain markets can affect the volume of grain transported by rail. Over the last 10 years, the volume of export grain transported by rail has ranged from a low of about 28 million tons in 1994 to a high of about 56 million tons in 1988. Other rail commodities can also show fluctuations over time. From 1992 through 1996, the nation’s coal exports ranged from a low of about 71 million tons in 1994 to a high of about 103 million tons in 1992. The volatility in commodity markets can affect railroad rates because it affects the demand for rail transportation. As demand changes, railroads adjust rates to attract or retain business. For example, officials at one Class I railroad told us that it has a wide range of pricing policies for chemicals that allow it to react to changes in world chemicals markets. Officials from the same railroad said that export demand can play a particularly strong role for grain. Although grain rates can be affected by decreases in demand, there is more of an impact when exports are strong and their railroad is trying to keep business away from a competitor.
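The demand-responsive rate setting the railroad officials describe, in which a rate is cut to match a competing transportation option but held firm where demand is strong, can be sketched in a few lines of Python. This is only an illustrative model of the behavior described in this chapter; the function name and all dollar figures are invented assumptions, not data from our analysis.

```python
# Illustrative sketch of demand-responsive rate setting (hypothetical):
# a railroad quotes a rate no lower than its variable cost per ton and
# no higher than the cheapest competing option available to the shipper.
# All names and figures here are invented for illustration.

def quote_rate(variable_cost, competing_rates, target_markup=1.8):
    """Return a per-ton rate bounded below by cost and above by competition."""
    desired = variable_cost * target_markup                # demand-based target
    ceiling = min(competing_rates, default=float("inf"))   # best alternative
    return max(variable_cost, min(desired, ceiling))

# A shipper with barge competition is quoted close to the competing rate...
print(quote_rate(10.0, [12.0]))   # capped at 12.0 by the barge option
# ...while a shipper with no effective alternative pays the full markup.
print(quote_rate(10.0, []))       # 18.0
```

Under this sketch, two shippers with identical costs to serve can pay quite different rates, which is the essence of the differential pricing discussed later in this chapter.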
The rate-setting environment has also been affected by legislative and/or regulatory actions. In 1990, the Clean Air Act was amended to, among other things, reduce sulfur dioxide emissions by electric generating plants. The act spurred the demand for low-sulfur coal for use in generating electricity. This increased the demand for western coal, especially from the Powder River Basin area of Wyoming and Montana, whose coal is known for its low sulfur content. In 1996, Wyoming produced more coal than any other state in the nation (about 278 million tons, or about 63 percent more than the next highest state, West Virginia). About 85 percent of this coal moved by rail. Although demand for Powder River Basin coal has increased substantially, our analysis shows that inflation-adjusted Powder River Basin rail rates on both long-distance (over 1,000 miles) and medium-distance (501 to 1,000 miles) routes have generally decreased since 1990. Ongoing efforts to deregulate the electricity generation industry can be expected to affect future rail rates. Electricity generation is heavily dependent on coal as a fuel source. A recent Energy Information Administration study found that over 87 percent of all coal consumed in the United States was for electricity generation by utilities. Moreover, railroads are the largest carrier of coal, and transportation is a major component of the price of coal delivered to electric power generators. The study suggested that as the electricity generating industry becomes more competitive there will be pressure for the industry to reduce its costs, including the price it pays for coal and the transportation of coal. These cost reductions may have significant impacts on the railroad industry and future rail rates. In reducing the economic regulation of railroads through the 4R Act and Staggers Rail Act, the Congress expected that rates determined by market competition would, in general, benefit both railroads and shippers.
In many instances, railroads faced competition from other railroads or modes of transportation, and the new congressionally set rail transportation policy recognized the broader nature of this competition by permitting railroads the flexibility to set their rates in response to rates and services available to shippers from other transportation options. In particular, railroad rates set in response to truck, barge, or railroad competition would typically be different (lower) than rates based primarily on a railroad’s full cost to provide service. Differential pricing, then, is a means by which railroads set rates reflecting the demand characteristics of shippers, with the result that shippers with similar cost characteristics (such as the number of railcars to be shipped or lengths of haul to destination) can pay quite different rates. Although rail rates set using demand-based differential pricing reflect the demand characteristics of shippers and market competition, such rates are also linked to railroad costs. Generally, the nature of a railroad’s fixed costs (e.g., physical plant such as rail, bridges, and signaling) is such that these costs are (1) incurred before any traffic moves and (2) insensitive to the level of rail traffic. Fixed costs are also largely unattributable to any particular shipper. For a railroad to be profitable, it must recover all of its costs—fixed as well as variable costs. Differential pricing is a pricing mechanism in which a railroad’s fixed costs can be recovered collectively from all shippers but not necessarily proportionately from each shipper. Under differential pricing, shippers without effective alternatives to a railroad’s transportation generally pay proportionately greater shares of the railroad’s fixed costs, while shippers with more alternatives pay proportionately less. Differential pricing was envisioned as benefiting both railroads and shippers.
Railroads were expected to benefit from gaining the pricing flexibility to retain or attract shippers that would otherwise choose other transportation modes. In this way, railroads were expected to benefit from a larger and more diversified traffic base than under the previous regulatory scheme. Those shippers with competitive alternatives were expected to benefit from lower rail rates. Shippers without competitive alternatives were also expected to benefit. In theory, these shippers would pay less than if competitive traffic were diverted to an alternative transportation mode, thus leaving those shippers without alternatives to bear the unattributable costs previously assigned to the diverted traffic. The Congress expected that the transition to differential pricing and a more market-oriented system would not affect all shippers equally because, in general, transportation characteristics and market conditions vary among commodities. In practice, these expectations have been met. Data from the Board show that in 1990 about one-third of all rail traffic (as measured by revenues) was transported at rates generating revenues exceeding 180 percent of variable costs. By 1996, this percentage had decreased to 29 percent. That means that about 70 percent was transported at rates generating revenues that were less than 180 percent of variable costs. In addition, in 1996, the percent of commodity revenue for shipments transported at rates generating revenues exceeding 180 percent of variable costs fluctuated widely by commodity—ranging from a low of near 0 percent for fresh fish and tobacco products to a high of about 73 percent for crude petroleum and gasoline. 
Among the commodities included in our analysis of rail rates (coal, grain, chemicals, and transportation equipment), the percent of commodity revenue for shipments transported at rates generating revenues exceeding 180 percent of variable costs ranged from about 23 percent for farm products (grain) to about 54 percent for chemicals. One important factor that has played a role in how railroads set their rates has been the financial health of the railroad industry. During the 1990s, railroad financial health generally improved compared with the 1980s. Not only were returns on investment and equity higher, but railroads were able to increase their market share. However, most railroads have been determined by the Board to be “revenue inadequate”—that is, their earnings were less than the railroad industry’s cost of capital. Revenue adequacy determinations have been controversial, and some shippers have questioned the meaningfulness of the current method of determining revenue adequacy. Not being able to earn the cost of capital can affect a railroad’s ability to attract and/or retain capital and remain financially viable. In general, railroad financial health improved in the 1990s. For example, railroad returns on investment and returns on equity—both measures of profitability—were higher during the 1990s than they were in the 1980s. From 1990 through 1997, returns on investment averaged 8.5 percent per year while returns on equity averaged 10.7 percent per year. (See fig. 2.4.) This was about 61 percent and 24 percent greater, respectively, than the 5.3 percent and 8.7 percent returns on investment and equity achieved during the 1980s. The operating ratio, which shows how much of a railroad’s operating revenues are taken up by operating expenses, also showed improvement. From 1990 through 1997, railroad operating expenses accounted for, on average, about 87 percent of operating revenues annually—about 1 percentage point less than the average from 1980 through 1988.
According to a Board official, every 1-percentage-point change in the operating ratio can be significant to the railroad industry. However, not all aspects of financial health improved. For example, railroads’ ability to meet their short-term and long-term obligations was either about the same as, or worse than, it was during the 1980s. The current ratio, which compares the dollar value of current assets (such as cash) to the dollar value of current liabilities (such as short-term debt), averaged about 64 percent from 1990 through 1997. (See fig. 2.5.) In contrast, this ratio averaged about 113 percent from 1980 through 1988. Maintaining a current ratio of less than 100 percent may jeopardize a firm’s ability to pay its short-term debts when they come due. A firm’s ability to pay its long-term debt is generally measured by the fixed charge coverage ratio, which compares the income available to pay fixed charges with the interest expense that must be paid on debt outstanding. Since 1990, the fixed charge coverage ratio for the railroad industry was only marginally better than it was during the 1980s. From 1990 through 1997, the fixed charge coverage ratio averaged about 4.7—that is, the income available to pay fixed charges was about 4.7 times the interest to be paid. From 1980 through 1988, the ratio averaged about 4.6. Railroads have also increased their market share during the 1990s. (See fig. 2.6.) In 1990, railroads transported almost 38 percent of intercity revenue freight ton-miles. By 1997, the market share had increased to 39 percent. This increase came despite a general slowdown in the growth of intercity freight traffic handled by railroads in this decade. From 1990 through 1997, the amount of intercity freight tonnage handled by railroads grew, on average, about 2 percent annually. This compares with about a 3-percent average annual growth in the 1982 through 1989 period.
The market share change may be a reflection of railroads’ increased use of contracts to tailor their rates and service to meet customer needs. According to AAR, in 1997 about 70 percent of all railroad tonnage moved under contract—up 10 percentage points from 1988. However, contracts are more prevalent for the shipment of some commodities than others. AAR statistics show that, in 1997, over 90 percent of all coal tonnage, but only about 26 percent of grain tonnage, moved under contract. In fact, the percentage of grain tonnage moved under contract has decreased over time. In 1994, about 50 percent of grain tonnage moved under contract compared with 26 percent in 1997. According to an AAR official, this decrease was primarily attributable to (1) an increased use by railroads of noncontract car reservation/guarantee programs to supply grain cars to shippers and (2) a 1988 regulatory change that increased the amount of public information about grain contracts. Under car reservation/guarantee programs, for a fee, shippers can obtain a set number of railcars for delivery at a future date(s). Although railroad financial health has improved, most Class I railroads are still not earning revenues adequate to meet the industry cost of capital. From 1995 through 1997, in any one year no more than three of nine Class I railroads were determined by the ICC/Board to be revenue adequate. From 1990 through 1994, in any one year no more than two of 12 Class I railroads were determined to be revenue adequate. The returns on investment of the remaining railroads have been below the railroad industry’s cost of capital. The degree to which Class I railroads did not earn the industry’s cost of capital has fluctuated since 1990. (See table 2.1.) This appears to reflect fluctuations in average return on investment more than a change in the cost of capital. The cost of capital has generally remained between 11.4 percent and 12.2 percent from 1990 through 1997.
In contrast, return on investment has ranged from just over 1 percent to just under 9.5 percent. As we reported in 1990, revenue inadequacy affects the ability of a railroad to attract and/or retain capital. Insufficient profit not only makes it difficult for railroads to cover costs, maintain operations, and remain financially viable, but may also induce investors to place their funds elsewhere. Revenue adequacy determinations for the railroad industry have been controversial. According to Board officials, controversy over revenue adequacy determinations is not new, and these issues have been addressed at length by the Board’s predecessor. However, in recent years, shippers and others have again questioned the meaningfulness of the current method of determining revenue adequacy, particularly given railroads’ ability to attract capital for mergers and acquisitions. For example, in 1996, Union Pacific was expected to spend about $1.6 billion to acquire Southern Pacific Railroad. Nevertheless, in this same year, the Board determined Union Pacific to be revenue-inadequate. Similarly, in 1998, CSX Transportation estimated that it would incur over $4 billion in acquisition costs in the joint CSX Transportation/Norfolk Southern acquisition of Conrail. In 1997, CSX Transportation was determined by the Board to be revenue-inadequate. In April 1998, the Board began a proceeding to address issues related to railroad access and competition. As part of this proceeding, the Board called upon both railroads and shippers to mutually agree on an independent panel of disinterested experts to review how revenue adequacy is determined and to develop recommendations as to how, if at all, this determination should be changed.
According to the Board, as of February 1999, although railroad representatives were satisfied with the neutral panel approach, shipper representatives opposed it and suggested instead that the Board initiate a rulemaking proceeding to address revenue adequacy issues. In commenting on a draft of this report, Board officials said that we should better explain that the Board, in its merger decisions, has taken actions to ensure that no shipper has become captive to a single railroad. The Board also said we should better recognize that controversy over revenue adequacy determinations is not new and that these issues have been addressed at length by the Board’s predecessor. To address these concerns, we have modified the report to acknowledge that the Board imposes merger conditions to ensure that any shipper that was capable of being served by more than one railroad before a merger would continue to have more than one railroad available after the merger. We also added language to better recognize that revenue adequacy determinations have been controversial for some time and that these issues had been dealt with by the Board’s predecessor. Since 1990, railroad rates have generally fallen both overall as well as for specific commodities. However, rail rates have not decreased proportionately for all shippers and users of rail transportation. Some shippers, like those transporting coal, have experienced larger rate decreases than other shippers. In addition, in other cases, such as long-distance wheat shipments from Montana and North Dakota to west coast destinations for export, real rail rates have stayed about the same as, or were slightly higher than, they were in 1990. We also found that revenues were 180 percent or more of variable costs for a number of routes, including short-distance movements of coal and long-distance movements of wheat from northern plains states such as Montana and North Dakota. 
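The revenue-to-variable-cost comparisons used throughout this chapter reduce to simple arithmetic. The short Python sketch below, using invented dollar figures rather than Waybill data, computes the ratio and illustrates one caution in interpreting it: a railroad that passes a cost reduction fully through to a shipper can still see its ratio rise. The 180-percent figure is the benchmark this report uses to classify potentially captive traffic.

```python
# Revenue-to-variable-cost (R/VC) ratio, expressed as a percentage.
# All dollar figures below are invented for illustration.

def rvc_ratio(revenue, variable_cost):
    """Revenue as a percentage of variable cost."""
    return 100.0 * revenue / variable_cost

before = rvc_ratio(200.0, 100.0)   # 200%, above the 180% benchmark

# Variable cost falls by $50 and the railroad passes the entire
# savings to the shipper as a $50 rate cut...
after = rvc_ratio(150.0, 50.0)     # 300%, yet the ratio has risen

print(before, after)
```

The example shows why a rising ratio alone does not prove that a railroad extracted higher rates from a shipper; the shipper in this sketch is paying $50 less than before.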
The degree of competition on a route may have played a role both in how rates changed and in how high or low a revenue-to-variable-cost ratio may be for a specific commodity or route. While the revenue-to-variable-cost ratio is often used as a proxy for market dominance, use of the ratio for this purpose may lead to misinterpretations. For example, even when railroads pass all cost reductions along to shippers in terms of reduced rates, the ratio can increase. Conversely, the ratio can decrease if railroads pass all cost increases along to shippers in the form of higher rates. In general, real (inflation-adjusted) rail rates have decreased since 1990. In fact, real rail rates have been falling since the early 1980s. In February 1998, the Board found that the average inflation-adjusted Class I railroad rate had decreased by about 46 percent from 1982 through 1996. The Board found that rates in all major commodity groups decreased, including coal and farm products, which, as bulk commodities, have historically been shipped by rail. However, the decreases were not uniform. (See table 3.1.) Also, in general, the average annual rate of decrease in rail rates was somewhat lower in the 1990s (about 4 percent annually) compared with what it was from 1982 through 1989 (4.6 percent annually). The average annual rate of decrease in rail rates for farm products (which include grains such as corn and wheat) was about 7 percent in the 1980s, compared with only about 1 percent in the 1990s. In contrast, the average annual rate of decrease for coal was just over 3 percent in the 1980s, compared with almost 8 percent in the 1990s. Our analysis of overall real rail rates showed similar results, with certain exceptions. Using the Board’s Carload Waybill Sample—a database of actual rail rates provided to the Board annually by individual railroads—we constructed rate indexes for coal, grain, certain chemicals, and transportation equipment for the period from 1990 through 1996.
(See fig. 3.1.) As the figure illustrates, in general, rail rates for most of these commodities decreased over time. The exceptions were wheat, corn, and chemicals (potassium and sodium; plastics and resins). Wheat in particular showed general rate increases from 1992 through 1994—from about 2.1 cents per ton-mile to about 2.5 cents per ton-mile—before falling back to about 2.4 cents per ton-mile in 1996. Corn also showed increases from 1990 through 1995—from about 1.8 cents per ton-mile to just under 2.1 cents per ton-mile—before decreasing in 1996 to about 1.9 cents per ton-mile. There may be a variety of reasons behind the rate changes shown in figure 3.1. As we reported in 1990, railroads reduced rates to become more competitive. In addition, railroads have made extensive use of contracts to do business. Finally, rail rates reflect the specific characteristics of each commodity and the demand for rail transportation. According to USDA, transportation of wheat is dominated by railroads—in 1996 railroads transported about 57 percent of all wheat in the nation—and exports greatly affect the demand for rail transportation. Since 1990, the demand for rail transportation of wheat for export has fluctuated from a high of about 25 million tons in 1993 to a low of about 15 million tons in 1994. (See fig. 3.2.) In contrast, transportation of corn is more dependent on trucks—in 1996, trucks transported about 41 percent of corn production compared with about 38 percent for rail—and corn is primarily used for domestic poultry and cattle feed, domestic processing into ethanol, and other purposes. Also, significant amounts of corn are grown in areas accessible to navigable waterways, and much of the corn exported is transported by barge to such ports as New Orleans. As shown in figure 3.2, since 1990 the rail transportation of domestic corn has fluctuated from about 58 million tons in 1995 to about 45 million tons in 1991.
These commodity characteristics may at least partially account for the overall difference in prices between wheat and corn—2 to 2.5 cents per ton-mile for wheat and less than 2 cents per ton-mile for corn. Our analysis of rail rates since 1990 for coal, grain (corn and wheat), chemicals, and transportation equipment in selected transportation markets/corridors generally showed that real rail rates have fallen. However, not all rates have fallen, and rail rates were sensitive to competition—both intermodal (competition between railroads, trucks, and other transportation modes) and intramodal (rail to rail). For example, we found that real rail rates for corn shipments from the Midwest, where there is barge competition, to the Gulf Coast were significantly less than rail rates for corn shipments on similar-distance routes that appeared to offer little nonrailroad competition. We also found that rates in markets/corridors that are considered to have less railroad-to-railroad competition, such as the plains states of North Dakota and Montana, were generally higher than rail rates on similar-distance corridors that might offer more railroad options. Finally, we found that the relationship of shipment size (number of railcars) to rates varied by commodity. Typically, as shipment size increases, rates charged per ton decrease, reflecting increased efficiencies in train operations. For coal and some other commodities we reviewed, we generally found that the size of shipments remained relatively constant from 1990 through 1996. However, at the same time rates were generally falling. This implies that factors other than shipment size accounted for the rate decreases. We also found that on at least one northern plains wheat corridor we reviewed, railroad rates generally did not decrease even as average shipment size increased. In general, real rail rates for coal shipments have fallen since 1990.
This was true for overall rates and for the specific long-, medium-, and short-distance transportation corridors/markets. The rates on medium-distance routes (between 501 and 1,000 miles) provide a good illustration of the changes we found in coal rates. (See fig. 3.3.) As figure 3.3 shows, real rail rates for both the eastern (Central Appalachia) and western (Powder River Basin) coal routes that we looked at generally decreased since 1990. On the eastern medium-distance coal routes, rates generally decreased one-half to 1 cent per ton-mile. On the western medium-distance coal routes, rates generally decreased between two-thirds of a cent and one cent per ton-mile. The only real exception to the rate decreases was a slight increase in real rail rates from 1994 through 1996 on a route from Central Appalachia to Orlando. However, the rate in 1996 was still about seven-tenths of a cent less than the rate in 1990. There may be a number of reasons why rail rates for the transportation of coal have fallen. Although changes in shipment size may affect rail rates, in general we did not find any significant changes in shipment sizes from the 1990 through 1996 period for the routes/corridors we reviewed. On the medium-distance routes, shipment size for the eastern coal routes generally remained between 80 and 90 railcars over the entire period, except for the Central Appalachia to Norfolk, Virginia, route where shipment size generally stayed between 40 and 50 railcars. Shipment size on the medium-distance western coal routes generally remained between 100 and 115 railcars. Shipment size on western long-distance routes (over 1,000 miles) also generally remained in the 100 to 120 railcar range, while shipment size on the shorter distance coal routes (500 miles or less) generally remained in the 70 to 90 car range. One exception was a short-distance route between Central Appalachia and Charleston, West Virginia. 
On this route, the average shipment size increased from about 70 railcars in 1990 to about 100 cars in 1996. Over the same time period, the rail rate decreased about 30 percent—from about 6.5 cents per ton-mile in 1990 to about 4.5 cents per ton-mile in 1996. The coal rates we examined may have been affected by rail competition. Currently, two Class I railroads serve the Powder River Basin—the Burlington Northern and Santa Fe Railway and Union Pacific Railroad—and three Class I railroads serve the Central Appalachia region—Conrail, CSX Transportation, and Norfolk Southern. Whether these or other railroads have the market power to extract higher rates from coal shippers is unclear. On the one hand, data from the Board show that from 1990 through 1996 the percent of coal shipments transported where revenues exceeded 180 percent of variable costs averaged about 53 percent. However, in 1996, 47 percent of the coal shipments were transported at rates where revenue exceeded 180 percent of variable costs. This was the lowest percentage since 1987. On the other hand, if the number of rate complaints filed with ICC or the Board is indicative of shippers’ views of market power wielded by railroads, about half of the approximately 40 rate complaints filed since January 1, 1990 (or pending on that date) involved coal rates. As discussed earlier, rail rates for transporting grain such as wheat and corn have generally stayed the same or increased since 1990. However, rail rates for medium-distance routes (501 to 1,000 miles), such as from central plains origins around Oklahoma City and Wichita to Houston, showed some decreases. (See fig. 3.4.) On the other hand, rail rates from Great Falls, Montana, to Portland, Oregon, stayed about the same or increased slightly between 1990 and 1996. We found similar trends in other distance categories, particularly long-distance (greater than 1,000 miles) wheat routes.
The rail rates on long-distance wheat routes from Billings, Montana, and Minot, North Dakota, to Portland both stayed relatively constant, at about 3 cents per ton-mile over the entire 7-year period. Rate trends for corn shipments were similar to those of wheat. Again, the variety of rate trends we found for shipments of corn can be seen on the rates for medium-distance routes. (See fig. 3.5.) Although the rates on some of the routes, most notably those routes from the Midwest to Atlanta, showed decreases, rates for corn shipments from selected origins in Illinois to New Orleans showed some increases. As with wheat, rail rates for long-distance corn shipments on the routes we reviewed generally varied little, remaining in the 1.4 to 1.6 cents per ton-mile range from 1990 through 1996. We also found that rail rates for wheat and corn shipments appeared to be sensitive to both inter- and intramodal competition. For example, as shown in figure 3.4, rail rates for shipments of wheat from Duluth, Minnesota, to Chicago, Illinois—a route that is potentially competitive with Great Lakes water transportation—were significantly less—generally between 0.75 and almost 2 cents less per ton-mile—than rail rates on other medium-distance wheat routes. This includes rail rates for shipments from Great Falls, Montana, to Portland, Oregon, which some consider to lack effective transportation alternatives to rail. The same was true for corn shipments. The rail rates for corn shipments from Chicago and Champaign, Illinois, to New Orleans—routes which are barge competitive—were substantially less (in some years over 2 cents per ton-mile less) than rail rates on the other medium-distance corn routes. (See fig. 3.5.) The sensitivity to intramodal competition is best seen by comparing rail rates for wheat shipments originating in the central plains states with the rail rates for shipments originating in the northern plains states.
As figure 3.4 illustrates, rail rates for wheat shipments originating in Oklahoma City and Wichita were generally about 1 cent per ton-mile less than rates on the Great Falls, Montana, to Portland, Oregon, route which originated in the northern plains. Northern plains states, such as Montana and North Dakota, generally have fewer Class I railroad alternatives than the central plains states, such as Kansas. (See fig. 1.1.) Shipment size is an important factor influencing railroad costs and hence rates, particularly for agricultural commodities. Loading more cars at one time increases railroad efficiency and reduces a railroad’s costs. We found that the average shipment size of wheat originating in the northern plains was typically smaller than for wheat shipments originating in the central plains. For example, average shipment size on the Great Falls, Montana, to Portland, Oregon, route was about half that of shipments going from Wichita to Houston—about 40 railcars from Great Falls compared with about 70 railcars from Wichita. (See fig. 3.6.) This may partially explain why rail rates and costs for wheat shipments are higher in the northern plains than in the central and southern plains. To investigate further the effects of shipment size on railroad rates and variable costs, we developed regression equations using waybill data in which annual average revenues per ton-mile and average variable costs per ton-mile were calculated for export wheat corridors and shipment size categories, and then regressed on distance, a time trend, and indicators of the shipment size category. For a set of northern plains export corridors, the effects of increased shipment size on revenues were modest compared with the effects of shipment size on variable costs per ton-mile on these routes, and compared with the effects of shipment size on both revenues and variable costs for a set of central and southern plains export corridors. 
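The regression approach just described can be sketched in miniature. The data, coefficients, and corridor structure below are synthetic and purely illustrative (they are not the waybill sample or the estimated equations discussed in this chapter); the sketch simply shows average revenue per ton-mile being regressed on distance, a time trend, and shipment-size category indicators by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic corridor-year observations (hypothetical, for illustration only)
distance = rng.uniform(300, 1500, n)      # corridor length in miles
year = rng.integers(0, 7, n)              # time trend: 0 = 1990 ... 6 = 1996
size_5_50 = rng.integers(0, 2, n)         # 1 if shipment is 5-50 cars
size_over_50 = np.where(size_5_50 == 0, rng.integers(0, 2, n), 0)  # 1 if > 50 cars

# Assumed relationship: longer hauls and larger shipments mean lower cents/ton-mile
rate = (3.0 - 0.0005 * distance - 0.05 * year
        - 0.2 * size_5_50 - 0.5 * size_over_50
        + rng.normal(0, 0.05, n))

# Design matrix with an intercept; solve by ordinary least squares
X = np.column_stack([np.ones(n), distance, year, size_5_50, size_over_50])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)

# coef[3] and coef[4] estimate the rate discounts for the two size categories
print("5-50 car discount:", coef[3], "over-50 car discount:", coef[4])
```

In the actual analysis, the size-category coefficients play the role of the estimated revenue reductions reported below, with separate equations fit for northern plains and for central and southern plains corridors.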
Specifically, revenues per ton-mile for the northern plains corridors were estimated to be 0.2 of a cent less on shipments between 5 and 50 cars than for shipments of fewer than 5 cars, while revenues per ton-mile for the central and southern plains corridors were estimated to be 0.6 of a cent less for a similar shipment size increase. Additionally, revenues per ton-mile in the central and southern plains for shipments exceeding 50 cars were estimated to decrease an additional 0.3 of a cent, while in the northern plains, the estimated reduction in revenue per ton-mile for this increase in shipment size was not statistically significant. For variable costs per ton-mile, there was more similarity between the northern plains and the central and southern plains states. For example, estimated cost reductions were statistically significant for all shipment size categories, although the magnitudes were greater in the central and southern plains case. For comparison purposes, we also reviewed rail rates for certain chemicals and transportation equipment. In general, we found that real rail rates for chemical shipments exhibited many of the characteristics of coal and grain discussed previously—that is, many of the rail rates on various routes fell, but rates did not fall on all routes. An illustration of these trends can be seen for shipments of potassium/sodium on medium-distance routes. (See fig. 3.7.) As figure 3.7 shows, rail rates from Canadian origins to Minneapolis, Minnesota, decreased about one-third over the 7-year period—from about 5.4 cents per ton-mile to about 3.7 cents per ton-mile. However, rates from Casper, Wyoming, to Portland, Oregon, remained relatively stable at 3.4 cents per ton-mile. One of the largest rate changes was a decrease in rail rates for transportation of plastics and resins within the New Orleans, Louisiana, economic area (a short-distance route).
On this route, rail rates decreased about 70 percent from 1990 through 1996—from about 47 cents per ton-mile to about 14 cents per ton-mile. (See app. II.) According to the Chemical Manufacturers Association, nearly two-thirds of the tonnage of chemicals and allied products shipped is transported less than 250 miles. At these distances, trucks are a competitive option for chemical shippers, and in 1996, trucks carried about 52 percent of the tonnage of all chemicals and allied products shipped, while railroads accounted for only 21 percent. Rail rates for shipments of finished motor vehicles and motor vehicle parts and accessories also showed a variety of patterns. One of the most dramatic rate changes was a decrease in rail rates for the transportation of finished motor vehicles from Ontario, Canada, to Chicago, Illinois. On this route, rates fell about 40 percent—from 19.5 cents per ton-mile to 11.7 cents per ton-mile. In general, most rail traffic in motor vehicles and motor vehicle parts or accessories is under contract or has been exempt from economic regulation. According to AAR surveys, the percent of motor vehicle traffic that moved under contract increased from 55 percent in 1994 to 81 percent in 1997. Whether railroads have the market power to charge high rates is unclear. Officials from Norfolk Southern told us that automotive shippers “pay a premium rate for premium service.” This suggests that rates may be related to factors other than market power. In addition, officials from Union Pacific said their company has offered shippers reduced rates in return for guaranteed high volumes of shipments, again suggesting that rates are related to factors other than market power. Revenue to variable cost ratios are often used as indicators of shipper captivity to railroads.
If used in this way, the higher the R/VC ratio, the more likely it is that the shipper has used only rail to meet its transportation needs and the more likely it is that the railroad can use its market power to set rates that extract revenues much greater than its variable costs. Since 1990, about one-third of all railroad revenue has come from shipments transported at rates that generate revenues exceeding 180 percent of variable costs. However, the percentage varies by commodity and has changed over time. Our analysis suggests that competition can influence specific R/VC ratios for specific routes and commodities. In general, we found that R/VC ratios exceeded 180 percent on short-distance movements of coal and long-distance movements of wheat from northern plains states—movements where there may be less competition for the railroad. In contrast, R/VC ratios were consistently 180 percent or less on a wide variety of routes, including long-distance movements of coal. While R/VC ratios are often used as proxies for market dominance, use of such ratios for this purpose may lead to misinterpretations because R/VC ratios can increase as rail rates go down and, conversely, can decrease as rail rates go up. Overall, the percent of railroad revenue from shipments transported at rates generating revenues exceeding 180 percent of variable costs differs by commodity. (See table 3.2.) As table 3.2 shows, from 1990 through 1996, for all commodities, about one-third of all revenues generated by railroads came from movements transported at rates generating revenues exceeding 180 percent of variable costs. However, several commodities, such as coal, chemicals, and transportation equipment, had higher percentages of revenue from shipments at rates generating revenues exceeding 180 percent of variable costs. Farm products (which include grain shipments) had a smaller percentage. As table 3.2 shows, these percentages can change over time.
For example, for coal and transportation equipment, in 1996, the percentage of revenue generated from shipments at rates generating revenues exceeding 180 percent of variable costs was the lowest it had been since 1990. By contrast, for chemicals, in 1996, the percentage of revenue generated from shipments at rates generating revenues exceeding 180 percent of variable costs was the highest it had been since 1990. We found a wide variety of R/VC results for the specific commodities and routes that we looked at. In general, R/VC ratios were consistently above 180 percent on short-distance movements of coal (such as from Central Appalachia) and certain long-distance movements of wheat. The R/VC ratios were consistently below 180 percent on long-distance movements of corn and of coal from the Powder River Basin and on medium-distance movements of corn and wheat. The ratios for the other commodities and routes that we reviewed showed no consistent pattern. The ratio results suggest that demand-based differential pricing may have played a role in how railroads set their rates. The fact that R/VC ratios were typically higher for short-distance movements of coal than for medium- and long-distance movements reflects the possibility that, as shipping distance increases, the shipper or receiver is better able to substitute other sources of coal. This same distance-related pattern of R/VC ratios was found for corn, illustrating both the nature of domestic corn markets as well as geographic considerations that favor barge options for the transportation of corn. In both the coal and corn cases, various competitive pressures may constrain the rates that railroads were able to charge for longer-distance movements, and this resulted in lower R/VC ratios. Long-distance movements of wheat often occurred at much higher R/VC ratios than were typically found for corn and coal.
For example, the R/VC ratios for long-distance wheat movements originating in Montana and North Dakota were consistently at 180 percent or higher from 1990 through 1996. In contrast, the R/VC ratios on a Minneapolis, Minnesota, to New Orleans, Louisiana, route—where barges offer competition—were always below 100 percent. We also found differences in the ratio between northern and central plains routes for the medium-distance shipments of wheat. (See fig. 3.8.) The northern plains states are considered by some to have fewer rail alternatives than the central plains states. As figure 3.8 shows, the R/VC ratios for those wheat shipments originating in Wichita and Oklahoma City were consistently below 180 percent from 1990 through 1996. On the other hand, the R/VC ratios for wheat shipments originating in Great Falls, Montana, were consistently above 180 percent over the entire period. R/VC ratios have their limitations. One of these is how variable costs are determined. According to the Board, variable costs are developed in accordance with the Uniform Railroad Costing System (URCS). URCS is a general purpose costing system used by the Board for jurisdictional threshold determinations and other purposes. By necessity, URCS incorporates a number of assumptions and generalizations about railroad operations to determine variable costs. Because of these assumptions and generalizations, the variable costs developed under URCS may not necessarily represent the actual costs attributable to the particular shipment involved. The revenues used to calculate R/VC ratios may also not be actual. Board officials told us that revenues shown in the Carload Waybill Sample are not adjusted for such things as the year-end rebates and refunds often provided to shippers exceeding minimum volume commitments. As a result of these limitations, it is possible that some of the R/VC ratios used in our analysis would be different if actual revenues and variable costs were known.
Perhaps a more serious limitation is the potential for misinterpreting R/VC ratios. Because an R/VC ratio is a simple division of revenues by variable costs, an R/VC ratio can increase at the same time revenues and variable costs are both decreasing. For example, if rail revenues are $2 and variable costs are $1, the R/VC ratio would be 200 percent. However, if revenues decrease to $1.50 and variable costs decrease to $0.50, the ratio becomes 300 percent. Under this scenario, although railroads have passed all cost reductions along to shippers in terms of lower rates, the increased R/VC ratio makes it appear as though the shipper is worse off. On the other hand, R/VC ratios can decrease at the same time revenues and variable costs are increasing. For example, using the example above ($2 in revenues and $1 in variable costs with a ratio of 200 percent), if revenues increase to $2.50 and variable costs increase to $1.50, the ratio becomes 167 percent. In commenting on a draft of this report, the Board noted that competition is better measured by the effectiveness of transportation alternatives rather than the number of competitors. In response to this issue, we modified report language to better recognize the importance of effective competition in measuring the effects of competition on rail rates. In recent years, shippers have increasingly criticized Class I railroads for providing poor service. Rail service disruptions in the western United States in the summer and fall of 1997 brought national attention to these concerns. Among the problems cited by shippers were an insufficient supply of railcars when and where needed, inconsistent pickup and delivery of cars, and longer than necessary transit times to a destination. In general, railroad officials believe the railroads provide adequate service. However, they agree that service is not what it could be and that the industry has failed to meet shipper expectations.
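The R/VC arithmetic in the examples above can be verified directly. A short sketch reproducing those hypothetical dollar figures:

```python
def rvc_ratio(revenue, variable_cost):
    """Revenue-to-variable-cost ratio, expressed as a percentage."""
    return 100.0 * revenue / variable_cost

# Starting point: $2 revenue against $1 variable cost
print(rvc_ratio(2.00, 1.00))         # 200.0

# Revenues and costs both fall, yet the ratio rises
print(rvc_ratio(1.50, 0.50))         # 300.0

# Revenues and costs both rise, yet the ratio falls
print(round(rvc_ratio(2.50, 1.50)))  # 167
```

As the sketch shows, the ratio moves with the relative sizes of revenues and variable costs, not with the absolute level of rates.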
The quality of railroad service, whether tracked over time for an individual carrier or compared between specific railroads, currently cannot be measured. The Board determines whether service is reasonable on a case-by-case basis. In addition, the railroad industry has been reluctant to develop specific service measures for fear they could be misinterpreted or misused by the public or might reveal business-sensitive information. In reaction to widespread criticism of rail service, however, railroads have developed four performance indicators. Although these indicators may be helpful in assessing certain aspects of service, they are more an evaluation of operating efficiency than of quality of service. In recent years, railroad shippers, shipper associations, and local communities have complained in various forums about poor railroad service. Complaints have been particularly strong from agricultural shippers and communities in the West and Midwest. Union Pacific Railroad’s merger with the Southern Pacific Railroad in 1996 and the subsequent widespread delays in delivering railcars to destinations brought national attention to the seriousness of railroad service problems. Shippers attribute many of the problems they experience to a decrease in competitive transportation options as a result of railroad mergers. In addition, some shippers believe railroads must improve the consistency of their operations and increase the number of available railcars, among other things, in order to improve service levels. Many rail shippers believe service has been poor. Events in recent years may have exacerbated the problems. For example, in the summer of 1997, during implementation of the Union Pacific/Southern Pacific merger, rail lines in the Houston/Gulf Coast area became severely congested, and freight shipments in some areas came to a complete halt.
As the problem spread, many grain shippers experienced delays in railcar deliveries of 30 days or more, while some grain shippers in Texas did not receive railcars for up to 3 months. Transit times for movements of wheat from Kansas to the Gulf of Mexico in some cases exceeded 30 days—four to five times longer than normal. In late 1997, the Board determined that the service breakdown, which had a broad impact throughout the western United States, constituted an emergency and, among other things, ordered Union Pacific to temporarily release its Houston area shippers from their service contracts so that they could use other railroads serving Houston, and to cooperate with other carriers in the region that could accept Union Pacific traffic for movement, to help ease the gridlock. The lack of predictable, reliable rail service has been a common complaint among some shippers. For example, during public hearings conducted by USDA in 1997, over 400 grain shippers and rural residents from Iowa, Kansas, Minnesota, Montana, and North Dakota expressed their concerns about cars not being delivered; little or no notification when railcars would be delivered; little or no success in trying to reach appropriate railroad officials for information on car deliveries; and the general lack of available cars when and where needed. These same types of problems were identified by shippers and shipper associations during additional hearings in Montana and North Dakota conducted in December 1997 by a Senate Subcommittee and in April 1998 by the Board during hearings on railroad access and competition issues. Our survey responses from about 700 bulk grain, coal, chemicals, and plastics shippers conducted in the fall of 1998 also reflect concerns about railroad service. An estimated 63 percent of the shippers responding to our survey (329 of 525 shippers that answered this question) said that the overall quality of their rail service was somewhat or far worse in 1997 than it was in 1990.
Chemicals and plastics shippers were among the most dissatisfied with the overall quality of their rail service—approximately 80 percent of these shippers indicated that the overall quality of rail service they received in 1997 was somewhat or much worse than in 1990. About 71 percent of coal shippers indicated that the overall service levels provided by the railroads serving them were somewhat or much worse. Finally, echoing the complaints expressed during congressional hearings, an estimated 57 percent of grain shippers responding to our survey indicated their overall quality of rail service was somewhat or much worse in 1997 than it was in 1990. On the basis of our survey results, the types of problems experienced since 1990 have varied by commodity. (See table 4.1.) About 66 percent of coal shippers responding to our survey indicated that they experienced somewhat or much worse service in terms of car cycle time—that is, the amount of time it takes to deliver a commodity to its destination and return—in 1997 compared with 1990. Chemicals and plastics shippers identified problems with the consistency of on-time delivery as most problematic; about 84 percent of the shippers responding to our survey identified this problem as worse in 1997 compared with 1990. Grain shippers identified railcar availability as their most troublesome problem. An estimated 67 percent of grain shippers indicated that railcar availability during peak periods was somewhat or much worse in 1997 than it was in 1990. Railcar availability, in general, was rated as worse by an estimated 63 percent of the grain shippers. Shippers responding to our survey also indicated that the quality of service provided by the railroads has decreased relative to the amount paid for that service. This was particularly true in 1997 compared with 1990. 
An estimated 43 percent of those shippers (247 of 570 shippers) indicated that the quality of service provided by railroads in 1990 was somewhat or far less relative to the amount paid in 1990. In contrast, the percent of shippers indicating that the quality of service they received from railroads in 1997 was either somewhat or far less relative to the amount paid for that service had increased to an estimated 71 percent of those responding to our survey. Coal shippers and chemicals and plastics shippers were the most dissatisfied—about 80 percent and 88 percent, respectively, were dissatisfied with the value of their service. An estimated 66 percent of grain shippers responding to our survey said the quality of rail service was somewhat or far less relative to the amount that they paid for such service in 1997. The widespread dissatisfaction with railroad service has not necessarily resulted in many formal service complaints being filed with the ICC or the Board. Only 25 formal service-related complaints were pending with the ICC as of January 1, 1990, or were subsequently filed with the ICC or the Board. These complaints involved a wide range of alleged service problems, including failure to provide a sufficient supply of railcars; late inbound and outbound deliveries; and other kinds of inconsistent service. Of the seven cases that had completed the adjudicatory process as of February 1999, five were decided in favor of railroads and two in favor of shippers. Thirteen cases did not result in a decision because ICC/the Board did not have jurisdiction over the matter or the shipper withdrew the complaint. Five formal service complaints were pending as of February 1999. Typically, no more than two or three complaints were filed each year, except in 1995, when seven complaints were filed. Most of the complaints were filed against Class I railroads (68 percent), with the rest filed against smaller railroads (32 percent). 
Of the Class I railroads involved in these complaints, Burlington Northern had the greatest number of complaints filed against it (six) followed by Conrail (five) and CSX Transportation (three). On a commodity basis, customers who shipped grain products represented the largest proportion of complaints (20 percent), followed by customers who shipped steel and railcars (12 percent each). Many shippers and their associations have attributed service problems, at least in part, to railroad mergers or consolidations. When asked in our survey the extent to which mergers or consolidations since 1990 (excluding the Union Pacific merger with Southern Pacific) have affected the quality of rail service they received, an estimated 50 percent of the shippers (268 of 536 shippers responding) indicated that service levels were somewhat or much worse as a result of mergers or consolidations. When asked specifically about the effects of the Union Pacific merger with Southern Pacific on service levels, an estimated 84 percent of the shippers (371 shippers) indicated that the quality of rail service they received was either somewhat or much worse since the merger. Chemicals and plastics shippers indicated they were most affected by the Union Pacific/Southern Pacific merger—about 97 percent indicated that the rail service their companies received was somewhat or much worse. Similarly, about 94 percent of the coal shippers indicated that the Union Pacific merger had resulted in worse rail service. An estimated 77 percent of the grain shippers indicated they received somewhat or much worse rail service after the merger than before the merger. Shippers have also attributed service problems to a lack of competitive alternatives to rail transportation. Some shippers who told us that historically they have only been served by a single railroad or have no access to other transportation modes maintain that the rail service they receive is poor. 
For example, some North Dakota grain shippers told us that they are heavily dependent upon railroads to transport their grain because shipping grain by truck (the only other major mode of freight transportation available in the state) over long distances to mills, processors, and export markets is not economically feasible. As a result of this dependence, they claim there is little incentive or reason for the one railroad that serves them to provide quality service. These shippers told us that not only have railroads become more arrogant and stopped providing good service to those shippers for which they no longer face rail competition, but also railroads have tended to serve those customers with competitive alternatives first—leaving those shippers without competitive alternatives to receive the last and worst service. Shippers responding to our survey identified several changes that they believe railroads should make to increase rail service quality. Although grain shippers cited the lack of available cars as the aspect of service that has caused them the most problems, an estimated 68 percent of the grain shippers (331 of 485 shippers responding) indicated that they would like to see the consistency of on-time delivery of cars improved. An estimated 51 percent of the grain shippers (246 of 485 shippers responding) believe the number of available cars should be increased, and an estimated 33 percent (162 of 484 shippers responding) want to see the consistency of on-time pick up of cars improved. 
While both coal shippers and chemicals and plastics shippers identified consistency of on-time delivery as among the three most important changes needed to improve service, they identified improving transit times as among the most important changes that should be made by the railroads—about 75 percent of the coal shippers (62 of 83 shippers responding) and about 84 percent of the chemicals and plastics shippers surveyed (81 of 97 shippers responding) expressed the need for improved transit times. In general, rail industry officials believe the service they provide to their customers is adequate. In fact, railroads have made capital expenditures in recent years to improve system capacity and service levels. However, railroad officials recognize that railcar availability and the timeliness of rail shipments, among other things, do not always meet shipper expectations. Some industry officials believe capacity constraints, industry downsizing, and an inadequate railcar supply are among the factors that have contributed to the difficulties in meeting shipper service expectations. In addition, some railroad officials agree that rail mergers and consolidations, in particular the Union Pacific merger with Southern Pacific, have exacerbated service problems. Addressing service problems can be a challenge; railroad officials told us that they often face the difficult task of balancing the service needs of customers with the financial viability of the railroads. In general, railroad officials believe that current service is adequate. This is particularly true when compared with 1990. With the exception of service problems associated with the Union Pacific/Southern Pacific service crisis, officials from the four largest Class I railroads we spoke with about service said overall service in 1997 was at least as good as it was in 1990. They provided a number of illustrations for why service was as good as or better than in 1990. 
For example, Norfolk Southern officials said that their railroad and other railroads have made significant investments in cars, locomotives, and people to improve service. Officials from CSX Transportation said that investments in such things as the installation of continuously welded rail throughout the network, purchase of new cars and locomotives, and the development of better information technology to respond to customer problems have all contributed to improved service. There was also general agreement that rail industry consolidation, including the Union Pacific merger with Southern Pacific, has benefited shippers by creating more single-line service that reduces the number of trains that must handle goods en route, thereby reducing costs and transit times. However, many railroad officials also agree that service is not what it should be and may not have met shipper expectations for various reasons. For example, some railroad officials told us that delays on rail systems have been primarily caused by capacity constraints. As railroad traffic has been growing in recent years, and as railroads have been scaling back operations in order to cut costs, system capacity has become inadequate. In addition, to cut costs, railroads have reduced employment levels. Now, given the growth in railroad traffic, railroads have had insufficient people or crews available to provide the required service. For example, train delay data we obtained from one Class I railroad indicated that shortages of both locomotives and crews were major causes of train delays from 1992 through 1996. Finally, an inadequate supply of railcars, especially for grain shippers, has contributed to shipper dissatisfaction. As one railroad official told us, railcar availability will always be a point of contention between railroads and shippers, and some railroads are reluctant to invest in the number of cars needed to handle peak demand if those cars might sit idle for a significant portion of the year.
Some rail industry officials we spoke with, including those at the Union Pacific Railroad, acknowledged that the Union Pacific merger with Southern Pacific contributed to the service crisis that began in the late summer of 1997 in and around Houston, Texas. According to Union Pacific officials, Southern Pacific had more problems than Union Pacific officials expected, especially a substantial amount of deferred track maintenance. In general, these officials said that Southern Pacific had made a lot of operating decisions based on short-term cash flow considerations rather than long-term financial health. As a result, Union Pacific’s high traffic levels and a series of external stresses overwhelmed a weak Southern Pacific infrastructure. Union Pacific officials expect that as the railroad recovers from its difficulties, service levels will return to their pre-merger levels, which in their opinion had improved since 1990. The difficulties experienced by Union Pacific affected other railroads as well. For example, officials at Norfolk Southern told us that because Norfolk Southern receives cars from Union Pacific Railroad for shipment to ultimate destinations and sends other cars to destinations that are on Union Pacific’s tracks, Union Pacific’s problems adversely affected Norfolk Southern’s customer commitments. Officials at Burlington Northern and Santa Fe Railway told us that their railroad took on a significant amount of additional business during the service crisis that would usually have been carried on Union Pacific, which resulted in a trade-off: railroad officials decided it was better to serve more shippers with a lower level of service than to serve a more limited number of customers at a higher level of service. Officials from CSX Transportation also said the Union Pacific/Southern Pacific failures were a “wake up call” to the railroad industry to do a better job of serving its shippers.
In providing high-quality service, railroad management faces the difficult task of balancing the needs of shippers with the financial viability of the railroad. In discussing service adequacy and shipper dissatisfaction, railroad officials made clear the role financial tradeoffs play in service decisions. Officials from CSX Transportation told us that their company could hire more crews and invest in assets to address capacity problems. However, in their opinion, the competitive nature of today’s railroad business precludes these extra costs from being passed on to shippers. Officials from other railroads agreed, saying that railroads need to add capacity—which will require a significant capital investment. In considering this investment, their companies will have to weigh issues such as the potential for future traffic growth; cost of adding capacity; and effects on rates and service. Tradeoffs will also be a part of the decision making process regarding railcars. Some railroad officials noted that shippers and railroads historically have disagreed on the adequacy of the supply of railcars, but actual investment in such cars involves a tradeoff between the investment in railcars and the return on that investment. Often, the return on investment is not sufficient to justify the investment cost. Management discretion that is inherent in railroad operations can also influence the quality of rail service. The logistics of moving different kinds of freight to a myriad of markets in different geographical locations can be a difficult task. Management decision making may play a larger role than technology in influencing service levels. This was the conclusion of a 1993 study conducted by the Massachusetts Institute of Technology, Center for Transportation Studies, on freight railroad reliability. 
This study concluded that decisions regarding power management (availability and positioning of locomotives), train operations (which trains to run, with what cars, and at what time), and the management of railroad terminals all had important consequences for railroad reliability. Some railroad officials we spoke with agreed that management decision making plays a significant role in the quality of service. For example, officials at Norfolk Southern told us that, although the railroad has taken actions to minimize management decisions in providing service, there is still a fairly high degree of management discretion in service decisions. Officials at CSX Transportation told us that 85 to 90 percent of service performance involves management decision making about capital expenditures and operating expenses. In their opinion, at the local level, service decisions are very much influenced by budget and financial decisions, and insufficient funding could lead to reductions in such things as train service. Currently, the overall quality of service provided by railroads cannot be measured. While the legislation governing railroad service requires that railroads provide service upon reasonable request, the Board and federal courts determine what constitutes reasonable service and whether a railroad has satisfied its service obligations in the context of deciding specific complaints. Industrywide measures of rail service for the most part do not exist. In general, the very limited industrywide measures we were able to obtain suggest some improvement in these measures in recent years. However, these measures are not enough to conclude that service has improved overall. Railroad officials told us they have been reluctant to develop service measures, fearing they could be misinterpreted or misused by customers or the public or that they may reveal business-sensitive information.
According to AAR, individual rail carriers have developed measures of service over time that, while addressing carrier and/or customer specific service performance, are not necessarily consistent or continuous measures of service either between carriers or over time for individual carriers. Railroads are required by statute to provide service upon reasonable request; furnish safe and adequate car service; and establish, observe, and enforce reasonable rules and practices on car service. The Board (and its predecessor, ICC) and federal courts determine what constitutes reasonable service and whether a railroad has satisfied its service obligations in the context of deciding specific complaints. For example, in a 1992 case, the ICC addressed the issue of railcar supply in connection with a complaint challenging the legality of Burlington Northern Railroad’s Certificate of Transportation Program. The ICC held that Burlington Northern had not violated its statutory obligations and observed that the common carrier obligation requires that a railroad maintain a fleet sufficient to meet average—not peak—demand for service. According to the ICC, a requirement for a fleet sufficient to meet peak demand would result in a wasteful surplus of equipment detracting from a railroad’s long-term financial health. Other cases have involved such matters as whether a railroad was justified in refusing a shipper’s request to restore service on an embargoed line. However, ICC and the Board’s decisions are situation-specific and do not easily lend themselves to developing a single set of measures that would allow an assessment of a railroad’s—or the industry’s—quality of service in all circumstances. For the most part, industrywide measures of service performance do not exist. For example, according to AAR, there is no standard railroad industry definition of transit time and no central clearinghouse to collect industry service performance data. 
As a result, the types of service measurements maintained can vary from one railroad to another. The officials told us that trying to understand and develop industrywide service measures has been an important issue in the rail industry but “the least fertile area for information.” In addition, officials said that some industrywide service data that used to be collected have been discontinued. For example, AAR used to prepare reports on car cycle times, the percent of the railcar fleet that was out-of-service, and car shortages. These reports are no longer prepared because of data quality problems. A factor complicating the collection of industrywide service measures is that individual railroads have been reluctant to make such information public. According to AAR and officials at some Class I railroads we spoke with, this reluctance is based on concerns that service information could be misinterpreted or misused by the public, customers, or others or that the information may be proprietary. For example, AAR noted that providing information such as railcar transit and cycle times can be misleading because (1) cycle times are typically increased when additional railcars are added to the fleet (because it may take longer to load and unload trains with additional cars), (2) cycle times should be compared with target performance levels or standards which reflect seasonal fluctuations, (3) an increase in long-haul business may lead to a lengthening of cycle and transit times, and (4) a railroad cannot control what happens to a car once it leaves its tracks for movement to a final destination via another railroad. Regarding the latter, AAR said meaningful data on interline traffic (traffic which interchanges from one railroad to another), which represents roughly one-third of all rail freight revenue, are generally not maintained by individual railroads and would, therefore, not be captured in measuring railroad performance. 
As officials from one Class I railroad told us, just getting raw service data may not indicate the root cause of problems. Despite these limitations, two measures of industrywide service offer a narrow view of how service has changed since 1990. One is cycle time for freight railcars, which shows a slight improvement. (See table 4.2.) (In general, the shorter the cycle time, the more readily cars are available for additional trips.) In 1990, the average cycle time for all railcars was just under 18 days. In 1995 (the last year data were available), the average cycle time was just under 17 days. However, as table 4.2 shows, cycle time can fluctuate over time and, as AAR has pointed out, cycle time may be influenced by several factors, such as changes in trip length. Another measure, the number of revenue freight cars undergoing or awaiting repairs (and, therefore, not available for active revenue service), also dropped slightly since 1990. (See fig. 4.1.) In 1990, about 52,000 of 677,800 cars (about 8 percent of railcars owned) were undergoing or awaiting repairs. In 1996 (the last year data were available), about 27,000 of 576,800 cars (about 5 percent of railcars owned) were in this category. However, this measure does not shed any light on how efficiently these cars were deployed or whether an adequate supply existed. Measuring service performance of the rail industry is further complicated by the fact that individual railroads do not maintain measures of service performance that are continuous or consistent across the industry.
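The out-of-service share quoted above can be checked with a few lines of arithmetic. This sketch is our own illustration (the function name is invented); the car counts are those reported in the text.

```python
# Check the approximate out-of-service percentages from the raw fleet counts
# reported in the text (cars undergoing or awaiting repairs vs. cars owned).

def repair_share(cars_in_repair: int, cars_owned: int) -> float:
    """Share of the owned railcar fleet out of service, as a percentage."""
    return 100 * cars_in_repair / cars_owned

share_1990 = repair_share(52_000, 677_800)  # about 8 percent
share_1996 = repair_share(27_000, 576_800)  # about 5 percent

print(f"1990: {share_1990:.1f} percent; 1996: {share_1996:.1f} percent")
```

As the text cautions, this ratio says nothing about how efficiently the in-service cars were deployed or whether the overall supply was adequate; it measures only fleet availability.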
For example, we asked for, but generally did not obtain, information from individual Class I railroads about their service performance since 1990 in the following areas: (1) average car transit time—the amount of time from the departure of a shipment from an origin to delivery to a destination; (2) average car cycle time for unit trains; (3) car availability, during both peak and nonpeak periods—this would include the identification of car surpluses and shortages at each period; (4) on-time pickup of shipments; (5) on-time delivery of shipments; and (6) train delay summaries, including causes of train delays. Although some of the railroads we contacted maintained some of this information, including on-time pick up and delivery of cars and causes of train delays, most of this information was either not available going back to 1990 or was only used for specific analyses. In general, railroad representatives told us that railroads develop and maintain their own unique set of service performance measures that are tailored to their needs and their customers’ needs. Because no two rail customers may have identical service demands, and what is acceptable service to one shipper might not be acceptable to another, most railroads have developed service measures that meet the needs of their specific customers’ situations. The type and level of service can also be commodity-specific. For example, officials from CSX Transportation told us that shippers of different types of commodities demand different levels of service. For some commodities (such as intermodal containers and auto parts), on-time pick up and delivery are very important. For other commodities (such as coal and grain), through-put (total amount of tonnage) may be more important than timeliness. 
Finally, officials from Norfolk Southern also pointed out that differences exist between eastern and western railroads in terms of the types of service measures a railroad might keep, because eastern railroads carry, for example, more coal and western railroads carry more grain. As a result, eastern railcar delivery delays are generally measured in hours, not days as they might be in the West. Railroad mergers have also influenced the availability and consistency of service measures. As an illustration, Burlington Northern and Santa Fe Railway officials noted that, prior to the Burlington Northern merger with Santa Fe in 1995, each railroad collected its own unique service data. Because of this, data for the pre-merger period may not be available in all cases or may be inconsistent in what they measured. In addition, officials from Union Pacific Railroad told us they had concerns about providing us with service data because the types of measures collected had changed over the last 10 years; Union Pacific Railroad today is the product of mergers of several railroads, each of which had maintained unique data systems. Union Pacific officials also noted that computer technology advances have allowed Union Pacific to generate new types of data that were previously impossible to generate and that are not comparable with any data from pre-merger periods. In part due to the widespread criticism of the industry over the quality of its service, railroads are developing industrywide performance measures. As part of its overall review of railroad access and competition issues, the Board directed railroads to establish a more formal dialogue with shippers for this purpose. In response, from August to November 1998, AAR held a series of meetings across the country between Class I railroad executives and shippers to discuss service issues.
As a result of these meetings, the Class I railroads decided to make available, through the Internet, actual data (not an index) on four measures of performance directed at providing shippers and others with a means to evaluate how well traffic moves over railroad systems. These measures, which the railroads began reporting in January 1999, include (1) total railcars, by type, currently on the rail system; (2) average train speed by type of service; (3) average time railcars spend in major terminals; and (4) timeliness of bills-of-lading (a receipt listing goods shipped). These measures are updated weekly and broken out by individual railroad. According to AAR, these measures are informational in nature, but consideration is being given to establishing standards and goals in these four areas. According to AAR, it is expected that rail customers will be able to use the data to determine what is happening in terms of performance on each railroad. However, according to AAR, these measures are not uniformly calculated across the industry and may be influenced by operating differences among railroads, including traffic mix, weather conditions, and terrain. Therefore, AAR cautions that this information should not be used to compare one railroad against another. Although these measures may be helpful in assessing certain aspects of service, they are more an evaluation of railroad operating efficiency rather than of quality of service. They also may not resolve more fundamental concerns about service. For example, in a November 1998 letter to the Board, several shipper associations and shippers expressed their concern that better information alone will not solve the service problems resulting from railroad consolidations and enhanced market power. In commenting on a draft of this report, the Board indicated that 1997 was not a typical year in terms of the quality of railroad service due to the unusual, severe congestion that occurred in the West. 
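As an illustration only, the four weekly measures the Class I railroads began reporting in January 1999 might be organized per railroad as in the sketch below. The class, field names, and sample values are our own assumptions, not an actual AAR reporting schema.

```python
# A hypothetical record for one railroad's weekly performance report,
# covering the four measures described in the text. All names and
# sample values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RailroadWeek:
    railroad: str
    week_ending: str                     # e.g. "1999-01-08"
    cars_on_system_by_type: dict         # total railcars on the system, by car type
    avg_train_speed_by_service: dict     # miles per hour, by type of service
    avg_terminal_dwell_hours: dict       # average hours railcars spend in major terminals
    bills_of_lading_on_time_pct: float   # timeliness of bills-of-lading

week = RailroadWeek(
    railroad="Example RR",
    week_ending="1999-01-08",
    cars_on_system_by_type={"covered hopper": 12_500, "boxcar": 4_200},
    avg_train_speed_by_service={"unit grain": 18.3, "intermodal": 27.9},
    avg_terminal_dwell_hours={"Houston": 31.0, "Chicago": 26.5},
    bills_of_lading_on_time_pct=94.2,
)
print(week.railroad, week.avg_train_speed_by_service["intermodal"])
```

Consistent with AAR's caution that the measures are not uniformly calculated across the industry, records like these are best compared within a single railroad over time rather than across railroads.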
The Board also suggested that performance measures recently developed by the railroad industry can be helpful in measuring some aspects of service quality. In response to these comments, we added material to the report reflecting the Board’s assessment that railroad service in 1997 was atypical and that service has improved since that time. We also revised the report to better recognize that recently developed performance measures may be helpful in measuring some aspects of service quality. However, we continue to believe that these measures are more an evaluation of railroad operating efficiency than of quality of service. Federal agencies and railroads have taken a number of actions to address the service problems that originated in the Houston/Gulf Coast area in 1997 during the implementation of the Union Pacific/Southern Pacific merger as well as service issues that are more longstanding and widespread. These actions have led to some progress, particularly the dissemination of new information regarding rail service and additional options for shippers and carriers to resolve disputes. However, in spite of the various actions to address service issues, shippers remain concerned about many shippers’ lack of access to competitive rail alternatives and the effect of this lack of competition on service levels. Shippers and railroads hold widely differing views on this key issue. The Board has tried, without success, to get the two sides to reach some agreement on this issue and has suggested that these issues are more appropriately resolved by the Congress. If the Congress decides to address this issue, it will need to weigh the potential of increased competition to improve service against the potential financial and other effects on the railroad industry. The Union Pacific/Southern Pacific system started experiencing serious service problems in July 1997 during the process of implementing the merger of the two railroads.
Congestion on this system spread to the Burlington Northern and Santa Fe Railway system, affecting rail service throughout the western United States. Serious rail service disruptions and lengthy shipment delays continued throughout the last half of 1997, particularly in the Houston area. To address service problems on the Union Pacific/Southern Pacific system, Union Pacific adopted a Service Recovery Plan in September 1997. Under this plan, the railroad, among other things, took actions to reduce train movements on the Union Pacific/Southern Pacific system and manage traffic flows into congested areas, acquired additional locomotives, and hired additional train and engine crew employees. In response to growing concerns about the deteriorating quality of rail service in the West, the Board issued an emergency service order in October 1997. This order, and subsequent amendments to it, directed a number of actions aimed at resolving service problems in the Houston area, the source of the crisis. In particular, the order directed temporary changes in the way rail service was provided in and around the Houston area to provide additional options for shippers and carriers and required weekly reporting by Union Pacific on a variety of service measurements, such as system train speed and locomotive fleet size. In December 1997, the service order was expanded to require grain loading and cycle time information to be submitted by Burlington Northern and Santa Fe Railway. In August 1998, the order expired and the Board decided not to issue another emergency service order, finding that there was no longer any basis for such an order given the significant improvements in Houston area rail service. However, the Board noted that service was still not at uniformly improved levels, as reflected by congestion in Southern California. 
Accordingly, the Board ordered Union Pacific/Southern Pacific and Burlington Northern and Santa Fe Railway to continue the required reporting on a biweekly basis so that it could continue to monitor service levels. In December 1998, the Board discontinued this requirement, citing further service improvements and the intention of all of the Class I railroads to start issuing weekly performance reports in January 1999. As part of its oversight of the Union Pacific/Southern Pacific merger, the Board has considered requests by various parties for additional merger conditions that would modify the way in which rail service is provided in the Houston area. In its December 1998 decision, the Board announced several changes in response to these requests in order to enhance the efficiency of freight movements in the area. Most significantly, the Board authorized the joint Union Pacific/Burlington Northern and Santa Fe Railway dispatching center at Spring, Texas, to route traffic through the Houston terminal over any available route, even a route over which the owner of the train does not have operating authority. However, the Board declined to adopt a plan sponsored by a group of shippers, two affiliated railroads, and the Railroad Commission of Texas that would have displaced the current Union Pacific operations in the Houston terminal area by establishing neutral switching and dispatching operations by a third party, the Port Terminal Railroad Association, in order to increase competition in the area. According to the Board, implementing this plan would have required Union Pacific to give trackage rights to this association and all other railroads serving Houston. In making its decision not to adopt the plan, the Board concluded that the service crisis in Houston did not stem from any competitive failure of the Union Pacific/Southern Pacific merger. 
The Board further concluded that the plan was not necessary to remedy any merger-related harm because it would add new competitors for many shippers in the Houston area that were served by only one carrier prior to the merger and, therefore, had not experienced a decrease in competition as a result of the merger. According to the Board, absent merger-related competitive harm, such an arrangement would thus constitute “open access”—an idea that shippers should, wherever possible, be served by more than one railroad, even if, in order to produce such a system, railroads that own a majority of an area’s rail infrastructure would be required to share their property with others that do not—an action which Board officials said the law does not provide for at this time. Union Pacific has recently taken further actions aimed at improving its service levels. These actions have included decentralizing railroad operations and implementing capital and maintenance projects, such as projects to improve, expand, and maintain its railroad track. Also, in August 1998, the railroad created a new internal organization, called Network Design and Integration, which will be responsible for identifying the services most needed by shippers and developing plans for delivering them. This organization is expected to serve as a link between the marketing and operating departments, to ensure that service commitments to shippers match the railroad’s capacity to deliver these services. In December 1998, Union Pacific reported to the Board that its operations had returned to normal levels, citing its average system train speed that had risen above 17 miles per hour for the first time since July 1997, when its service crisis began. The railroad acknowledged that its service levels still needed improvement but maintained that its latest service measures demonstrated a recovery from its prior serious service problems. 
Federal agencies as well as railroads have recently taken a number of actions aimed at addressing freight rail service issues of a broader nature than the recent service crisis in the West. These issues include the need to foresee and prevent service problems and expeditiously resolve them when they do arise and the need to expand the capacity of the railroad system to provide service. Among the actions by federal agencies are efforts by the USDA and the Board to disseminate information that can help railroads, shippers, and receivers anticipate changes in transportation demand and supply and the adoption by the Board of new procedures allowing it to authorize temporary alternative rail service more quickly for shippers affected by serious service disruptions. In addition, individual railroads have recently made efforts to improve service through changes in their customer service organizations and increased investments in infrastructure. Finally, partly at the urging of the Board, the railroad industry has acted to address some service issues. Actions include a commitment by the Class I railroads to issue weekly measures of their service performance, an agreement between Class I railroads and grain and feed shippers to resolve some service-related disputes through binding arbitration, and an agreement between Class I and smaller railroads aimed at allowing smaller railroads to play a greater role in providing service to shippers. The rail congestion that occurred during the 1997 rail crisis in the West severely affected the movement of grain to market. This situation illustrated the need to better monitor production levels, the transportation needs of grain shippers, and the capacity of the railroads to meet those needs, so that shippers and railroads could anticipate changes in transportation demand and supply and make adjustments that could lessen the severity of such changes. 
To meet this need, the Board and USDA signed an agreement in May 1998 to create a Grain Logistics Task Force. This task force, made up of Board and USDA officials, was tasked with identifying and disseminating information on grain production and consumption and transportation requirements. The task force began issuing reports in August 1998 and expects to issue them five times a year. These reports contain information on such things as expected production levels of various grains (by state), grain supplies and storage capacity, and railcar loadings and the demand for rail transportation. To address long-term transportation issues facing the nation’s agriculture sector in the 21st century, USDA also held a National Agricultural Transportation Summit in Kansas City in July 1998. This meeting provided a forum for agricultural shippers and others to express their concerns about grain marketing and demand, and railroad service quality issues. A significant outcome of this summit was an agreement between USDA and DOT to create a Rural Transportation Advisory Task Force. The objectives of this task force include undertaking joint outreach to users and providers of agricultural and rural transportation services to further identify transportation challenges and ways in which these challenges can be met and considering joint research efforts and policy initiatives to address these challenges. While the scope of the task force’s responsibilities will be broad, freight rail service to the nation’s agricultural community will be a key component of its work. At hearings held by the Board in April 1998 to review issues concerning rail access and competition, shippers complained about a number of service problems, including the difficulties in seeking relief from serious service disruptions through the Board’s existing procedures. 
In response, the Board adopted new procedures in December 1998 providing temporary relief from serious service problems, through service from an alternative rail carrier, more quickly. Shippers and smaller railroads can seek temporary alternative service in two ways: (1) through an 8-day evidentiary process for requesting short-term emergency relief for up to 270 days or (2) through a 45-day evidentiary process for requesting longer-term relief for serious, though not emergency, service inadequacies. Prior to obtaining either type of relief, the petitioning shipper or railroad must discuss the service issues with the incumbent rail carrier and obtain the commitment from another rail carrier to meet the identified service needs. These expedited procedures do not require a showing that the rail carrier has engaged in anticompetitive conduct. Rather, the petitioning shipper or railroad must show a substantial, measurable deterioration or other demonstrated inadequacy in rail service over an identified period of time. In order to be better able to resolve service problems brought to their attention by customers, individual Class I railroads have recently taken a number of actions to improve their customer service organizations. For example, some railroads have removed their local customer service personnel from field offices and replaced them with centralized customer service centers. At these service centers, service representatives either route the customer to the appropriate department at the railroad for problem resolution or handle the calls directly. As noted previously, Union Pacific Railroad expects to improve its ability to meet its customers’ service expectations through the creation of its new organization that will serve as a link between its marketing and operating departments. 
In its attempts to improve customer service, Norfolk Southern has added yard operations, billing, and freight claim settlement to the responsibilities of its customer service center. Finally, Burlington Northern and Santa Fe Railway has instituted a Grain Operations Desk that serves as a point of contact for grain shippers throughout its rail system for obtaining information on the arrival of empty grain cars, improving the spotting of loaded cars, and improving overall communications between the railroad and its customers. The Class I railroads have also been attempting to improve service through capital investments to improve their infrastructure and expand their capacity to provide service. Class I railroad capital expenditures in 1997 were about 31 percent higher (in constant dollars) than they were in 1990. Rail industry officials told us that these investments are important because they help relieve capacity constraints caused by restructuring of railroad operations and the growth of traffic in recent years. Investments have included new rail yards and terminals, additional sidings and track, and additional cars and locomotives. However, these railroad representatives believe that further capital investments are needed to address service problems. Railroad officials also told us that hiring new employees is important to increase the number of train crews available. In April 1998, following its hearings on rail access and competition issues, the Board issued a decision that called on railroads and shippers to discuss and identify solutions to a number of service-related problems. One problem that the Board noted was the need for greater communications between railroads and their customers and the need for railroads to find a more systematic way of addressing customer concerns. Accordingly, the agency directed the railroads to establish formal dialogue with shippers. 
In response, from August through November 1998 the AAR held five meetings across the country, attended by the Board’s chairman, between Class I railroad executives and their customers to discuss service issues. At these meetings, the railroads introduced four proposed measures of railroad service predictability and asked for feedback on their usefulness. The industry had developed these measures in July 1998 in response to customer suggestions that such measures were needed. The industry maintains that these indicators will reflect the general health of each railroad and will provide an early warning of developing operational problems. The Class I railroads began making these measures available on the Internet in January 1999; they plan to update the measures weekly. In addition, AAR held a “customer service symposium” in March 1999 in order to facilitate further dialogue with shippers on aspects of service such as shipment tracking and problem resolution. Although many shippers have welcomed these efforts, some have expressed skepticism about their impact on broader transportation issues. For example, in November 1998, 27 shipper associations sent a letter to the Board noting that, while they welcomed the railroads’ efforts to improve service predictability, the meetings have not addressed shipper concerns regarding systemic issues such as the lack of competitive rail alternatives and the effectiveness of available regulatory remedies. Shippers with specific complaints regarding rail service may seek a resolution of the problem through the Board’s formal complaint adjudication process. However, in order to establish an alternative private sector process for resolving disputes between agricultural shippers and rail carriers, the National Grain and Feed Association reached an agreement with Class I railroads and the AAR in August 1998 that provides for compulsory, binding arbitration—as well as nonbinding mediation—to resolve specific types of disputes. 
Although this initiative was not specifically called for by the Board, the Board noted that it is consistent with its preference that private parties resolve disputes without Board involvement and the litigation that it involves. The agreement covers a wide range of grain and feed products and covers such disputes as the misrouting of loaded railcars, disputes arising from contracts, and disputes involving the application of rules governing car guarantee programs. Those parties agreeing to use this arbitration process are not obligated to arbitrate claims that exceed $200,000. Officials from one Class I railroad we spoke with said this agreement is like a small claims court for handling small rate and service problems. The agreement is not designed to handle multimillion dollar cases. The role of non-Class I railroads in providing freight service has been another issue of concern. These railroads, as well as shippers, have expressed concerns regarding obstacles, such as inadequate railcar supply and lack of alternative routings, that prevent small railroads from expanding their business and providing increased service options to their customers. In its April 1998 decision, the Board directed short line and regional railroads (collectively called small railroads) and Class I railroads to complete discussions they had begun on these problems. In September 1998, the American Short Line and Regional Railroad Association and the AAR announced that they had reached agreement on provisions aimed at giving short line and regional railroads access to new routing arrangements to develop new business. The agreement also contains guidelines for how certain fees and rates charged by Class I railroads to provide service to small railroads will be set and how revenue would be divided between Class I and smaller railroads. As part of the agreement, the railroads agreed to submit disputes regarding these provisions to binding arbitration. 
The president of the American Short Line and Regional Railroad Association described the agreement as a “framework of partnership and growth for years to come.” In a survey conducted by the association at the end of 1998, executives of small railroads were also optimistic but cautioned that the implementation of the agreement depended on cooperation by Class I railroads. While the actions described above have addressed some service-related issues, some shippers remain concerned regarding the systemic issue of increasing consolidation within the railroad industry. They complain that this consolidation has reduced competition within the railroad industry, leading to a situation in which many shippers are without competitive rail alternatives and must pay higher rates for inadequate service. The divergent views held by railroads and shippers on this issue make it much more difficult to address than the issues described previously. The Board is authorized to impose remedies giving shippers access to more routing options—alternative through routes, reciprocal switching, and terminal trackage rights—on a permanent basis. However, under its competitive access regulations, the shipper must demonstrate that its incumbent rail carrier has engaged in anticompetitive conduct. Specifically, the shipper must show that the carrier has used its market power to extract unreasonable terms or, because of its monopoly position, has disregarded the shipper’s needs by providing inadequate service. Some shippers have complained that this requirement is too difficult to meet, and as a result, the Board has not imposed competitive routing options where shippers believe such options are needed. Some shippers consider the requirement to demonstrate anticompetitive conduct to be the most problematic aspect of the Board’s interpretation of its statutory authority on this issue. The shippers believe that the elimination of this requirement is essential.
However, the railroads believe that the demonstration of anticompetitive conduct is a necessary prerequisite to the imposition of a competitive routing option. Railroads cite concerns that increased competition imposed through regulation would undermine the industry’s ability to cover its high fixed costs and earn adequate returns. In its April 1998 decision regarding rail access and competition issues, the Board stated that it would consider whether to revise its competitive access rules. However, the Board directed that, first, railroads should arrange meetings with a broad range of shipper interests under the supervision of an administrative law judge to examine the issue. In these meetings, shippers and railroads were to try to mutually identify appropriate changes to the Board’s rules that would facilitate greater access to competitive rail alternatives where needed. In response, shippers and railroads held discussions in May and June 1998 on proposed revisions to these rules but, due to widely divergent views on the topic, could not come to any agreement. In its December 1998 report to Members of Congress on rail access and competition issues, the Board declined to initiate further action on this issue, pointing to its adoption of new rules, described previously, that allow shippers temporary access to alternative routing options during periods of poor service. In response to the impasse between the representatives of railroads and shippers, the Board observed that the competitive access issue raises basic policy questions that are more appropriately resolved by the Congress. These questions include the appropriate role of competition, differential pricing, and how railroads earn revenues and structure their services. The Board noted that this issue is complex, and it is unclear how changes in its rules pertaining to competitive routing options would affect the nation’s rail system and the level of service provided by this system.
In its December 1998 decision in the Houston/Gulf Coast oversight proceeding, the Board recognized the possibility that opening up access could fundamentally change the nation’s rail system, possibly benefitting some shippers with high-volume traffic while reducing investment elsewhere in the system and ultimately reducing or eliminating service for small, lower-volume shippers in rural areas. Board officials noted that many small, low-volume shippers have already lost service options as larger railroads shed their low-density and otherwise unprofitable lines. Fundamental differences exist between shippers and railroads on the issue of mandating additional competition in the railroad industry. If it decides to address this issue, the Congress will need to weigh the potential benefits of increased competition against the potential financial and other effects on the railroad industry. In deliberating this issue, the Congress will need to consider such things as the potential impacts of proposed changes on shipper routing options and railroad service levels as well as the rail system as a whole, including railroad revenues, infrastructure investment, capacity, and operations. In commenting on a draft of this report, the Board suggested that we modify our characterization of the 1997 service problems in the West to make clear that these problems were not the result of the Union Pacific/Southern Pacific merger and that implementation of this merger helped solve the problems. In addition, the Board suggested changes to present a more complete and precise portrayal of both its October 1997 emergency service order in response to these service problems and its December 1998 decision in the Houston/Gulf Coast oversight proceeding. Finally, the Board suggested we expand our discussion of the Board’s assessment of the possible impacts of providing “open access” throughout the nation’s rail system.
In response to these comments, we revised our description of the service problems in the West to eliminate the impression that these problems were caused by the Union Pacific/Southern Pacific merger; we revised the report to provide a more complete discussion of the Board’s emergency service order and decision in the Houston/Gulf Coast oversight proceeding; and we added material to the report discussing the Board’s views on the potential impacts of implementing railroad open access.

Pursuant to a congressional request, GAO provided information on: (1) the environment within which railroad rates have been set since 1990; (2) how railroad rates have changed since 1990; (3) how railroad service quality has changed since 1990; and (4) actions taken by the Surface Transportation Board and others to address railroad service quality problems. GAO noted that: (1) the environment in which railroads set their rates has been influenced by ongoing industry consolidation, competitive conditions, and railroads' financial health; (2) as a result of mergers, bankruptcies, and the redefinition of what constitutes a major railroad, the number of independent Class I railroad systems has been reduced from 30 in 1976 to 9 in early 1999, with the 5 largest Class I railroads accounting for 94 percent of industry operating revenue; (3) this increased concentration has raised concerns about potential abuse of market power in some areas due to railroads' use of market-based pricing; (4) under market-based pricing, rail rates in markets with less effective competition may be higher than in markets that have greater competition from railroads or other modes of transportation; (5) railroads' financial health has also improved since 1990; (6) however, despite these improvements, the Board has determined that most Class I railroads are revenue inadequate because they do not generate enough revenue to cover the industry's cost of capital; (7) although such determinations are sometimes
controversial, revenue inadequacy affects the ability of a railroad to attract or retain capital and remain financially viable; (8) railroad rates have generally decreased since 1990; (9) the decrease has not been uniform, and in some cases, rail rates have stayed the same as, or are higher than, they were in 1990; (10) this was particularly true on selected long distance rail shipments of wheat from northern plains states like Montana and North Dakota to west coast destinations; (11) rail routes with effective competitive alternatives--either from railroads or from trucks and barges--experienced greater decreases in rail rates; (12) as the rail industry has consolidated, shippers have complained that service quality has deteriorated; (13) shippers' complaints have included a lack of railcars when and where they were needed and inconsistent pickup and delivery of cars; (14) roughly 60 percent of the coal, grain, chemicals, and plastics shippers responding to GAO's survey said that their service was somewhat or much worse in 1997 than it was in 1990; (15) the overall quality of rail service cannot be measured; (16) federal agencies and railroads have taken a number of actions to address rail service problems; and (17) although these actions are expected to yield benefits, they do not address some shippers' belief that greater competition in the rail industry is needed to improve service. |
The U.S. surface and maritime transportation systems facilitate mobility through an extensive network of infrastructure and operators, as well as through the vehicles and vessels that permit passengers and freight to move within the systems. The systems include 3.9 million miles of public roads, 121,000 miles of major private railroad networks, and 25,000 miles of commercially navigable waterways. They also include over 500 major urban public transit operators in addition to numerous private transit operators, and more than 300 ports on the coasts, Great Lakes, and inland waterways. Maintaining the transportation system is critical to sustaining America’s economic growth. Efficient mobility systems are essential facilitators of economic development—cities could not exist and global trade could not occur without systems to transport people and goods. DOT has adopted improved mobility—to “shape an accessible, affordable, reliable transportation system for all people, goods, and regions”—as one of its strategic goals. To achieve this goal, it has identified several desired outcomes, including (1) improving the physical condition of the transportation system, (2) reducing transportation time from origin to destination, (3) increasing the reliability of trip times, (4) increasing access to transportation systems, and (5) reducing the cost of transportation services. The relative roles, responsibilities, and revenue sources of each sector involved in surface and maritime transportation activities—including the federal government, other levels of government, and the private sector— vary across modes. For public roads, ownership is divided among federal, state, and local governments—over 77 percent of the roads are owned by local governments; 20 percent are owned by the states, including most of the Interstate Highway System; and 3 percent are owned by the federal government. 
While the federal government owns few roads, it has played a major role in funding the nation’s highways. For example, from 1954 through 2001, the federal government invested over $370 billion (in constant 2001 dollars) in the Interstate Highway System. With the completion of the interstate system in the 1980s—and continuing with passage of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) and its successor legislation, TEA-21, in 1998—the federal government shifted its focus toward preserving and enhancing the capacity of the system. Under the Federal Aid Highway Program, the Federal Highway Administration (FHWA) provides funds to states to construct, improve, and maintain the interstate highway system and other parts of the U.S. road network and to replace and rehabilitate bridges. TEA-21 established, among other things, a mechanism for ensuring that the level of federal highway program funds distributed to the states would be more closely linked than before to the highway user tax receipts credited to the Highway Account of the Highway Trust Fund. These user taxes include excise taxes on motor fuels (gasoline, gasohol, diesel, and special fuels) and truck-related taxes on truck tires, sales of trucks and trailers, and the use of heavy vehicles. FHWA distributes highway program funds to the states through annual apportionments according to statutory formulas that consider a variety of factors including vehicle miles traveled on the interstate system, motor fuel usage by each state’s highway users, and other factors. The federal share for project funding is usually 80 percent but can vary among programs, road types, and states. State and local governments then “match” federal funds with funds from other sources, such as state or local revenues.
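The 80/20 matching arithmetic described above is straightforward to illustrate. A minimal sketch, assuming a hypothetical $10 million project (the 80 percent default share is from the report; the project figure is illustrative, and actual shares vary among programs, road types, and states):

```python
# Illustrative federal-aid matching split. The 80 percent default share
# is described in the report; the project cost below is hypothetical,
# and actual shares vary by program, road type, and state.

def split_project_cost(total_cost: float, federal_share: float = 0.80):
    """Return (federal, state_local) dollar shares for a project."""
    federal = total_cost * federal_share
    return federal, total_cost - federal

fed, match = split_project_cost(10_000_000)  # hypothetical $10M project
print(f"Federal: ${fed:,.0f}; state/local match: ${match:,.0f}")
# Federal: $8,000,000; state/local match: $2,000,000
```

The same split applies to the transit programs described below, where federal funding is also generally provided on an 80 percent/20 percent federal-to-local basis.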
While the federal government’s primary role has been to provide capital funding for the interstate system and other highway projects, state and local governments provide the bulk of the funding for public roads in the United States and are responsible for operating and maintaining all nonfederal roads including the interstate system. The sources of state highway revenues include user charges, such as taxes on motor fuels and motor vehicles and tolls; proceeds of bond issues; General Fund appropriations; and other taxes and investment income. The sources of local highway revenues include many of the user charges and other sources used by state governments, as well as property taxes and assessments. The U.S. transit system includes a variety of multiple-occupancy vehicle services designed to transport passengers on local and regional routes. Capital funding for transit came from the following sources in 2000: 47 percent of the total came from the federal government, 27 percent from transit agencies and other nongovernmental sources, 15 percent from local governments, and 11 percent from states. In that same year, the sources of operating funds for transit included passenger fares (36 percent of operating funds); state governments (20 percent); local governments (22 percent); other funds directly generated by transit agencies and local governments through taxes, advertising, and other sources (17 percent); and the federal government (5 percent). The Federal Transit Administration (FTA) provides financial assistance to states and local transit operators to develop new transit systems and improve, maintain, and operate existing systems. 
This assistance includes (1) formula grants to provide capital and operating assistance to urbanized and nonurbanized areas and to organizations that provide specialized transit services to the elderly and disabled persons; (2) competitive capital investment grants for constructing new fixed guideway systems and extensions to existing ones, modernizing fixed guideway systems, and investing in buses and bus-related facilities; (3) assistance for transit planning and research; and (4) grants to local governments and nonprofit organizations to connect low-income persons and welfare recipients to jobs and support services. Funding for federal transit programs is generally provided on an 80 percent/20 percent federal to local match basis. Federal support for transit projects comes from the Highway Trust Fund’s highway and transit accounts and from the General Fund of the U.S. Treasury. The respective roles of the public and private sector and the revenue sources vary for passenger as compared with freight railroads. With regard to passengers, the Rail Passenger Service Act of 1970 created Amtrak to provide intercity passenger rail service because existing railroads found such service unprofitable. Since its founding, Amtrak has rebuilt rail equipment and benefited from significant public investment in track and stations, especially in the Northeast corridor, which runs between Boston, Mass., and Washington, D.C. The federal government, through the Federal Railroad Administration (FRA), has provided Amtrak with $39 billion (in 2000 dollars) for capital and operating expenses from 1971 through 2002. Federal payments are a significant revenue source for Amtrak’s capital budget, but not its operating budget. In fiscal year 2001, for example, the sources of Amtrak’s capital funding were private sector debt financing (59 percent of total revenues), the federal government (36 percent), and state and local transportation agencies (5 percent). 
In that same year, the sources of funding for Amtrak’s operating budget were passenger fares (59 percent of total revenues), other business activities and commuter railroads (34 percent), and the federal government and state governments (7 percent). The role of the federal government in providing financial support to Amtrak is currently under review amid concerns about the corporation’s financial viability and discussions about the future direction of federal policy toward intercity rail service. With regard to freight, the private sector owns, operates, and provides almost all of the financing for freight railroads. Since the 1970s, the railroad industry has experienced many changes including deregulation and industry consolidation. Currently, the federal government plays a relatively small role in financing freight railroad infrastructure by offering some credit assistance to state and local governments and railroads for capital improvements. The U.S. maritime transportation system primarily consists of waterways, ports, the intermodal connections (e.g., inland rail and roadways) that permit passengers and cargo to reach marine facilities, and the vessels and vehicles that move cargo and people within the system. The maritime infrastructure is owned and operated by an aggregation of state and local agencies and private companies, with some federal funding provided by the Corps of Engineers, the U.S. Coast Guard, and DOT’s Maritime Administration. The Corps of Engineers provides funding for projects to deepen or otherwise improve navigation channels, maintain existing waterways, and construct and rehabilitate inland waterway infrastructure, primarily locks and dams. Funding for channel operations and maintenance generally comes from the Harbor Maintenance Trust Fund supported by a tax on imports, domestic commodities, and other types of port usage. The costs of deepening federal channels are shared by the federal government and nonfederal entities. 
The Inland Waterways Trust Fund, supported by a fuel tax, funds one-half of the inland and intra-coastal capital investments. Coast Guard funding promotes (1) mobility by providing aids to navigation, icebreaking services, bridge administration, and traffic management activities; (2) security through law enforcement and border control activities; and (3) safety through programs for prevention, response, and investigation. DOT’s Maritime Administration provides loan guarantees for the construction, reconstruction, or reconditioning of eligible export vessels and for shipyard modernization and improvement. It also subsidizes the operating costs of some companies that provide maritime services and provides technical assistance to state and local port authorities, terminal operators, the private maritime industry, and others on a variety of topics (e.g., port, intermodal, and advanced cargo handling technologies; environmental compliance; and planning, management, and operations of ports). Public sector spending (in 1999 dollars) has increased for public roads and transit between fiscal years 1991 and 1999, but stayed constant for waterways and decreased for rail, as shown in figure 1. Total public sector spending for public roads increased by 18.4 percent between fiscal years 1991 and 1999, from $80.6 billion to $95.5 billion (in 1999 dollars). Of those totals, the relative shares contributed by the federal government and by state and local governments remained constant from 1991 to 1999, as shown in figure 2. Contributions from state and local governments’ own funds—that is, independent of federal grants to state and local governments—were approximately 75 percent, with the federal government contributing the remaining 25 percent. The increases in total public spending for roads reflect federal programmatic spending increases resulting from ISTEA in 1992 and TEA-21 in 1998, as well as increases in total state and local spending. 
In particular, since the passage of TEA-21, the federal government’s contribution to total public expenditures on roads increased by 26.8 percent (in 1999 dollars) from $21.2 billion in fiscal year 1998 to $26.9 billion in fiscal year 2000, the latest year for which federal expenditure data are available. Although data on federal expenditures are not currently available for fiscal years after 2000, federal appropriations for fiscal years 2001 and 2002 reached $32.1 billion and $33.3 billion, respectively. Federal funding increases in those years largely resulted from adjustments required by the Revenue Aligned Budget Authority (RABA) provisions in TEA-21. Since TEA-21, the federal government has shifted its focus toward preserving and enhancing the capacity of public roads, while state and local government expenditures have been focused on maintaining and operating public roads. Appendix I contains additional information on the levels of capital investment and maintenance spending by the public sector. Total public spending for transit increased by 14.8 percent between fiscal years 1991 and 1999 to just over $29 billion (in 1999 dollars). This mainly reflects increases in state and local expenditures, as federal expenditures for transit actually decreased slightly over this period to $4.3 billion in 1999. In fiscal year 2000, however, federal spending on transit increased by 21.5 percent from $4.3 billion to $5.2 billion (in 1999 dollars). Although federal data on expenditures are not currently available for fiscal years after 2000, appropriations for fiscal years 2001 and 2002 reached $6.3 billion and $6.8 billion, respectively. State and local expenditures, independent of federal grants, increased to over $24 billion in 1999, accounting for over 85 percent of total public sector expenditures for transit, a share that has increased somewhat since 1991, as shown in figure 3. 
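The constant-dollar comparisons above rest on two simple calculations: deflating nominal amounts to a base year with a price index, and taking the percentage change between the deflated amounts. A minimal sketch, with hypothetical index values (the report does not identify which price index it used, and its published figures are rounded):

```python
# Sketch of the constant-dollar arithmetic behind the spending
# comparisons above. The deflator index values are hypothetical; the
# report does not identify the price index it used.

def to_constant_dollars(nominal: float, year_index: float, base_index: float) -> float:
    """Deflate a nominal amount to base-year (e.g., 1999) dollars."""
    return nominal * (base_index / year_index)

def pct_change(old: float, new: float) -> float:
    """Percentage change between two constant-dollar amounts."""
    return 100.0 * (new - old) / old

# Hypothetical: $25.0B nominal in FY1998, FY1998 index 97.5, base index 100
print(round(to_constant_dollars(25.0, 97.5, 100.0), 2))  # 25.64 (billions)

# With the report's rounded figures this gives about 26.9; the stated
# 26.8 percent presumably reflects unrounded underlying data.
print(round(pct_change(21.2, 26.9), 1))
```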
Public sector spending on ports and waterways has remained between $7.2 billion and $7.9 billion (in 1999 dollars) between fiscal years 1991 and 1999. This spending pattern reflects fairly steady levels of federal spending by the Corps of Engineers, the Coast Guard, and the Maritime Administration for water transportation expenditures. Expenditures by the Corps of Engineers and the Coast Guard comprise the bulk of federal spending for water transportation, and have remained at about $1.5 billion and $2 billion (in 1999 dollars) per year, respectively. State and local expenditures, however, increased by 27.7 percent, from $2.4 billion in fiscal year 1991 to $3.1 billion in fiscal year 1999, and accounted for about 41 percent of total public water transportation expenditures in fiscal year 1999, having grown from about 34 percent of the total in fiscal year 1991, as shown in figure 4. The public sector’s role in the funding of freight railroads is limited since the private sector owns, operates, and provides almost all of the financing for freight railroads. In addition, since public sector expenditures for commuter rail and subways are considered public transit expenditures, public expenditures discussed here for passenger rail are limited to funding for Amtrak. Federal support for Amtrak has fluctuated somewhat throughout the 1990s, but has dropped off substantially in recent years, with fiscal years 2001 and 2002 appropriations of $520 million and $521 million, respectively. Sufficient data are not currently available to characterize trends in state and local governments’ spending for intercity passenger rail. The private sector plays an important role in the provision of transportation services in each mode. For example, while the private sector does not invest heavily in providing roads, it purchases and operates most of the vehicles for use on publicly provided roads.
For freight rail, the private sector owns and operates most of the tracks as well as the freight trains that run on the tracks. In the maritime sector, many ports on the inland waterways are privately owned, as are freight vessels and towboats. Data on private sector expenditures on a national level are limited. However, available data show that private expenditures for transportation on roads, rail, and waterways rose throughout the 1990s. According to the U.S. Bureau of Economic Analysis’ Survey of Current Business, individuals and businesses spent about $397 billion in 2000 for the purchase of new cars, buses, trucks, and other motor vehicles, a 57-percent increase from 1993 levels (in 2000 dollars). In addition to the purchase of vehicles, the private sector also invests in and operates toll roads and lanes; however, data on these investments are not currently available on a national level. According to the Survey of Current Business, freight railroads and other businesses spent over $11 billion for railroad infrastructure and rail cars in 2000, a 66-percent increase from 1991 (in 2000 dollars). In addition, private sector investment on ships and boats more than doubled between 1991 and 2000, to about $3.7 billion (in 2000 dollars). However, private investment in waterways also includes port facilities for loading and unloading ships and for warehousing goods. Data on these investments are also currently not available on a national level. Federal projections show passenger and freight travel increasing over the next 10 years on all modes, due to population growth, increasing affluence, economic growth, and other factors. Passenger vehicle travel on public roads is expected to grow by 24.7 percent from 2000 to 2010. Passenger travel on transit systems is expected to increase by 17.2 percent over the same period. Intercity passenger rail ridership is expected to increase by 26 percent from 2001 to 2010. 
Finally, preliminary estimates by DOT also indicate that tons of freight moved on all surface and maritime modes—truck, rail, and water—are expected to increase by about 43 percent from 1998 through 2010, with the largest increase expected to be in tons moved by truck. However, several factors in the forecast methodologies limit their ability to capture the effects of changes in travel levels on the surface and maritime transportation systems as a whole (see app. II for more information about the travel forecast methodologies). For example, a key assumption underlying most of the national travel projections we obtained is that capacity will increase as levels of travel increase; that is, the projections are not limited by possible future constraints on capacity such as increasing congestion. On the other hand, if capacity does not increase, future travel levels may be lower than projected. In addition, differences in travel measurements hinder direct comparisons between modes and types of travel. For example, intercity highway travel is not differentiated from local travel in FHWA’s projections of travel on public roads, so projections of intercity highway travel cannot be directly compared to intercity passenger travel projections for other modes, such as rail. For freight travel, FHWA produces projections of future tonnage shipped on each mode; however, tonnage is only one measure of freight travel and does not capture important aspects of freight mobility, such as the distances over which freight moves or the value of the freight being moved. As shown in figure 5, vehicle miles traveled for passenger vehicles on public roads are projected to grow fairly steadily through 2010, by 24.7 percent over the 10-year period from 2000 through 2010, with an average annual increase of 2.2 percent. This is similar to the actual average annual rate of growth from 1991 to 2000, which was 2.5 percent. 
At the projected rate of growth, vehicle miles traveled would reach 3.2 trillion by 2010. The 20-year annual growth rate forecasts produced by individual states ranged from a low of 0.39 percent for Maine to a high of 3.43 percent for Utah. (See app. II for more detailed information on state forecasts.)

In addition to passenger vehicles, trucks carrying freight contribute to the overall levels of travel on public roads. Vehicle miles traveled by freight trucks are also projected to increase by 2010, but such traffic makes up a relatively small share of total vehicle miles traveled. According to forecasts by FHWA, freight truck vehicle miles are expected to grow by 32.5 percent from 2000 to 2010, but will constitute less than 10 percent of total vehicle miles traveled nationwide in 2010. However, within certain corridors, trucks may account for a more substantial portion of total traffic. The projected average annual growth rate for truck travel is 2.9 percent for 2000 to 2010, compared to an actual average annual growth rate of 3.9 percent from 1991 to 2000. We discuss freight travel in more detail later in this report, after the discussion of passenger travel.

For transit, FTA projects that the growth in passenger miles traveled between 2000 and 2010 will average 1.6 percent annually, for a total growth of 17.2 percent. Actual growth from 1991 through 2000 averaged 2.1 percent annually. (See fig. 6.) At the projected growth rate, annual passenger miles traveled on the nation’s transit systems would be approximately 52.9 billion by 2010. The transit forecast is a national weighted average and the individual forecasts upon which it is based vary widely by metropolitan area. For example, transit forecasts for specific urbanized areas range from an average annual decrease of 0.05 percent in Philadelphia to an average annual increase of 3.56 percent in San Diego. Both DOT and Amtrak project future increases in intercity passenger travel.
Although automobiles dominate intercity travel, FHWA’s projections of vehicle miles traveled do not separately report long-distance travel in cars on public roads. After automobiles, airplanes and intercity buses are the next most used modes and intercity passenger rail is the least used. However, we do not report on air travel, which is outside the scope of this report. We also do not report on intercity bus travel because, although FHWA projected increases in the number of miles traveled by all types of buses, we were unable to obtain specific projections of intercity bus ridership. For intercity passenger rail, Amtrak predicts a cumulative increase in total ridership of 25.9 percent, from 23.5 million passengers in 2001 to 29.6 million passengers in 2010, a contrast with the relatively flat ridership of recent years, which has remained between 20 and 23 million passengers per year (see app. II for further details about Amtrak’s projections).

According to FHWA, FTA, and many of our panelists, a number of factors are likely to influence not only the amount of travel that will occur in the future, but also the modes travelers choose. First, the U.S. Census Bureau predicts that the country’s population will reach almost 300 million by 2010, which will result in more travelers on all modes. This population growth, and the areas in which it is expected to occur, could have a variety of effects on mode choices. In particular, the population growth that is expected in suburban areas could lead to a larger increase in travel by private vehicles than by transit because suburban areas generally have lower population densities than inner cities, and also have more dispersed travel patterns, making them harder to serve through conventional public transit. Rural areas are also expected to experience high rates of population growth, and persons living there, like suburban residents, are more reliant on private vehicles and are not easily served by conventional public transit.
While these demographic trends tend to decrease transit’s share of total passenger travel as compared to travel by private vehicle, the overall growth in population is expected to result in absolute increases in the level of travel on transit systems as well as by private vehicle. Another important factor that could affect mode choice is that the population aged 85 and over will increase 30 percent by 2010, according to data from the Census Bureau. The aging of the population might increase the market for demand-responsive transit services and improved road safety features, such as enhanced signage. Second, DOT officials and our panelists believed that the increasing affluence of the U.S. population would play a key role in future travel, both in overall levels and in the modes travelers choose. They noted that, as income rises, people tend to take more and longer trips, private vehicle ownership tends to increase, and public transit use generally decreases. Third, communication technology could affect local and intercity travel, but the direction and extent of the effect is uncertain. For example, telecommuting and videoconferencing are becoming more common, but are not expected to significantly replace face-to-face meetings unless the technology improves substantially. Finally, changes in the price (or perceived price), condition, and reliability of one modal choice as compared to another are also likely to affect levels of travel and mode choices. For example, changes in the petroleum market that affect fuel prices, or changes in government policy that affect the cost of driving or transit prices could result in shifts between personal vehicles and transit; however, it is difficult to predict the extent to which these changes would occur. Also, if road congestion increases, there could be a shift to transit or a decrease in overall travel. See appendix III for a more detailed discussion of these factors. 
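The cumulative and average annual growth rates cited above are related by compounding: cumulative growth G over n years corresponds to an average annual rate of (1 + G)^(1/n) − 1. The following is a minimal sketch in Python cross-checking the report’s highway and transit figures; the function name is ours, for illustration only:

```python
def annualized_rate(cumulative_growth: float, years: int) -> float:
    """Convert cumulative growth over a period into an average annual rate."""
    return (1.0 + cumulative_growth) ** (1.0 / years) - 1.0

# Highway: 24.7 percent cumulative growth projected for 2000 through 2010
highway = annualized_rate(0.247, 10)  # roughly 2.2 percent per year

# Transit: 17.2 percent cumulative growth projected for 2000 through 2010
transit = annualized_rate(0.172, 10)  # roughly 1.6 percent per year

print(f"highway: {highway:.2%} per year, transit: {transit:.2%} per year")
```

Both values agree with the average annual rates reported above (2.2 percent for highways and 1.6 percent for transit), so the cumulative and annual figures are internally consistent.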
Trucks move the majority of freight tonnage and are expected to continue moving the bulk of freight into the future. FHWA’s preliminary forecasts of international and domestic freight tonnage across all surface and maritime modes project that total freight moved will increase 43 percent, from 13.5 billion tons in 1998 to 19.3 billion tons in 2010. According to the forecasts, by 2010, 14.8 billion tons are projected to move by truck, a 47.6-percent increase; 3 billion tons by rail, a 31.8-percent increase; and 1.5 billion tons by water, a 26.6-percent increase, as shown in figure 7. Trucks are expected to remain the dominant mode, in terms of tonnage, because production of the commodities that typically move by truck, such as manufactured goods, is expected to grow faster than the main commodities moved by rail or on water, such as coal and grain. Tonnage is only one measure of freight travel and does not capture important aspects of freight mobility, such as the distances over which freight moves or the value of the freight being moved. Ton-miles measure the amount of freight moved as well as the distance over which it moves, and historically, rail has been the dominant mode in terms of ton-miles for domestic freight. In 1998, the base year of FHWA’s projections, domestic rail ton-miles totaled over 1.4 trillion, while intercity truck ton-miles totaled just over one trillion, and domestic ton-miles on the waterways totaled 672.8 billion. Air is the dominant mode in terms of value per ton according to DOT’s Transportation Statistics Annual Report 2000, at $51,000 per ton (in 1997 dollars). However, in terms of total value, trucks are the dominant mode. According to the Annual Report, trucks moved nearly $5 trillion (in 1997 dollars) in domestic goods, as opposed to $320 billion by rail and less than $100 billion by inland waterway. International freight is an increasingly important aspect of the U.S. economy. 
For international freight, water is the dominant mode in terms of tonnage. According to a DOT report, more than 95 percent of all overseas products and materials that enter or leave the country move through ports and waterways. More specifically, containers, which generally carry manufactured commodities such as consumer goods and electrical equipment and can be easily transferred to rail or truck, dominate in terms of value, accounting for 55 percent of total imports and exports, while only accounting for 12 percent of foreign tonnage. Containers are the fastest growing segment of the maritime sector. While FHWA predicts that total maritime freight tonnage will grow by 26.6 percent, the Corps of Engineers projects that volumes of freight moving in containers will increase by nearly 70 percent by 2010. In addition, ships designed to carry containers are the fastest growing segment of the maritime shipping fleet and are also increasing in size. Although freight vessels designed to carry bulk freight (e.g., coal, grain, or oil) are the largest sector of the freight vessel fleet, the number of containerships is increasing by 8.8 percent annually, which is double the growth rate of any other type of vessel according to the Corps of Engineers. Also, most of the overall capacity of the containership fleet is now found in larger containerships, with a capacity of more than 3,000 twenty-foot containers, and ships with capacities of three times that amount are currently on order. According to reports by the Transportation Research Board and the Bureau of Transportation Statistics, increasing international trade and economic growth are expected to influence volumes of future freight travel. In addition, the increasing value of cargo shipped and changes in policies affecting certain commodities can affect overall levels of freight traffic as well as the choice of mode for that traffic. 
The North American Free Trade Agreement has contributed to increases in the tonnage of imports from Mexico and Canada between 1996 and 2000 by rail (a 24-percent increase) and by truck (a 20-percent increase), while expanding trade with the Pacific Rim has increased maritime traffic at west coast container ports. With increasing affluence, economic growth often results in a greater volume of goods produced and consumed, leading to more freight moved, particularly higher-value cargo. In addition, the increasing value of cargo affects the modes on which that cargo is shipped. High-value cargo, such as electronics and office equipment, tends to be shipped by air or truck, while rail and barges generally carry lower-value bulk items like coal and grains. Changes in environmental regulations and other policies also affect the amount, cost, and mode choice for moving freight. For example, a change in demand for coal due to stricter environmental controls could affect rail and water transportation, the primary modes for shipping coal. See appendix III for a more detailed discussion of the factors that influence freight travel.

To identify key mobility challenges and the strategies for addressing those challenges that are discussed later in this report, we relied upon the results of two panels of surface and maritime transportation experts that we convened in April 2002, as well as reports prepared by federal and other government agencies, academics, and industry groups. According to our expert panelists and other sources, with increasing passenger and freight travel, the surface and maritime transportation systems face a number of challenges that involve ensuring continued mobility while maintaining a balance with other social goals, such as environmental preservation. Ensuring continued mobility involves preventing congestion from overwhelming the transportation system and ensuring access to transportation for certain underserved populations.
In particular, more travel can lead to growing congestion at bottlenecks and at peak travel times on public roads, transit systems, freight rail lines, and at freight hubs such as ports and borders where freight is transferred from one mode to another. In addition, settlement patterns and dependence on the automobile limit access to transportation systems for some elderly people and low-income households, and in rural areas where populations are expected to expand. Increasing travel levels can also negatively affect the environment and communities by increasing the levels of air, water, and noise pollution.

Many panelists explained that congestion is generally growing for passenger and freight travel and will continue to increase at localized bottlenecks (places where the capacity of the transportation system is most limited), at peak travel times, and on all surface and maritime transportation modes to some extent. However, panelists pointed out that transportation systems as a whole have excess capacity and that communities may have different views on what constitutes congestion. Residents of small cities and towns may perceive significant congestion on their streets that residents of major metropolitan areas would consider insignificant. In addition, because of the relative nature of congestion, its severity is difficult to determine or to measure, and while one measure may be appropriate for some situations, it may be inadequate for describing others. For local urban travel, a study by the Texas Transportation Institute showed that the amount of traffic experiencing congestion in peak travel periods doubled from 33 percent in 1982 to 66 percent in 2000 in the 75 metropolitan areas studied. In addition, the average time per day that roads were congested increased over this period, from about 4.5 hours in 1982 to about 7 hours in 2000. Increased road congestion can also affect public bus and other transit systems that operate on roads.
Some transit systems are also experiencing increasing rail congestion at peak travel times. For example, the Washington Metropolitan Area Transit Authority’s (WMATA) recent studies on crowding found that rail travel demand has reached and, in some cases, exceeded scheduled capacity—an average of 140 passengers per car—during the peak morning and afternoon hours. Of the more than 200 peak morning rail trips that WMATA observed over a recent 6-month period, on average, 15 percent were considered “uncomfortably crowded” (125 to 149 passengers per car) and 8 percent had “crush loads” (150 or more passengers per car). In addition to local travel, concerns have been raised about how intercity and tourist travel interacts with local traffic in metropolitan areas and in smaller towns and rural areas, and how this interaction will evolve in the future. According to a report sponsored by the World Business Council for Sustainable Development, Mobility 2001, capacity problems for intercity travelers are generally not severe outside of large cities, except in certain heavily traveled corridors, such as the Northeast corridor, which links Washington, D.C., New York, and Boston. However, at the beginning and end of trips, intercity bus and automobile traffic contribute to and suffer from urban congestion. In addition, the study said that intercity travel may constitute a substantial proportion of total traffic passing through smaller towns and rural areas. Also, according to a GAO survey of all states, state officials are increasingly concerned about traffic volumes on interstate highways in rural areas, and high levels of rural congestion are expected in 18 states within 10 years. Congestion is also expected to increase on major freight transportation networks at specific bottlenecks, particularly where intermodal connections occur, and at peak travel times, according to the panelists. 
They expressed concern regarding interactions between freight and passenger travel and how increases in both types of travel will affect mobility in the future. Trucks contribute to congestion in metropolitan areas where they generally move on the same roads and highways as personal vehicles, particularly during peak periods of congestion. In addition, high demand for freight, particularly freight moved on trucks, exists in metropolitan areas where overall congestion tends to be the worst. With international trade an increasing part of the economy and with larger containerships being built, some panelists indicated that more pressure will be placed on the already congested road and rail connections to major U.S. seaports and at the border crossings with Canada and Mexico. For example, according to a DOT report, more than one-half of the ports responding to a 1997 survey of port access issues identified traffic impediments on local truck routes as the major infrastructure problem. According to one panelist from the freight rail industry, there is ample capacity on most of the freight rail network. However, railroads are beginning to experience more severe capacity constraints in particular heavily used corridors, such as the Northeast corridor, and within major metropolitan areas, especially where commuter and intercity passenger rail services share tracks with freight railroads. Capacity constraints at these bottlenecks are expected to worsen in the future. The panelist explained that congestion on some freight rail segments where the tracks are also used for passenger rail service—for which there is growing demand—reduces the ability of freight railroads to expand service on the existing tracks to meet the growing demand for freight movements on those segments.
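The mode-level freight projections cited earlier in this report can be cross-checked for internal consistency: dividing each mode’s projected 2010 tonnage by one plus its projected growth should recover 1998 base-year tonnages that sum to FHWA’s 13.5-billion-ton total. The following is a minimal sketch in Python using only the report’s figures; the variable names are ours:

```python
# Projected 2010 tonnage (billions of tons) and cumulative growth from 1998,
# per FHWA's preliminary freight forecasts cited in this report.
projections = {
    "truck": (14.8, 0.476),
    "rail": (3.0, 0.318),
    "water": (1.5, 0.266),
}

# Back out the implied 1998 base-year tonnage for each mode.
base_1998 = {mode: tons_2010 / (1.0 + growth)
             for mode, (tons_2010, growth) in projections.items()}

total_1998 = sum(base_1998.values())
print(f"implied 1998 total: {total_1998:.1f} billion tons")  # about 13.5
```

The implied total matches the 13.5 billion tons FHWA reports for 1998 to within rounding, and the back-calculated figures confirm that trucks dominate tonnage in the base year as well.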
On the inland waterways, according to two panelists from that industry, there is sufficient capacity on most of the inland waterway network, although congestion is increasing at small, aging, and increasingly unreliable locks. According to the Corps of Engineers, the number of hours that locks were unavailable due to lock failures increased in recent years, from about 35,000 hours in 1991 to 55,000 hours in 1999, occurring primarily on the upper Mississippi and Illinois rivers. In addition, according to a Corps of Engineers analysis of congestion on the inland waterways, with expected growth in freight travel, 15 locks would exceed 80 percent of their capacity by 2020, as compared to 4 that had reached that level in 1999. According to our expert panelists, while increasing passenger and freight travel contribute to increasing congestion at bottlenecks and at peak travel times, other systemic factors contribute to congestion, including barriers to building enough capacity to accommodate growing levels of travel, challenges to effectively managing and operating transportation systems, and barriers in effectively managing how, and the extent to which, transportation systems are used. At bottlenecks and at peak travel times, there is insufficient capacity to accommodate the levels of traffic attempting to use the infrastructure. One reason for the insufficient capacity is that transportation infrastructure, which is generally publicly provided (with the major exception of freight railroads), can take a long time to plan and build, and it may not be possible to build fast enough to keep pace with increasing and shifting travel patterns. In addition, constructing new capacity is often costly and can conflict with other social goals such as environmental preservation and community maintenance. 
As a result, approval of projects to build new capacity, which requires environmental impact statements and community outreach, generally takes a long time, if it is obtained at all. In addition, a number of panelists indicated that funding and planning rigidities in the public institutions responsible for providing transportation infrastructure tend to promote one mode of transportation, rather than a set of balanced transportation choices. Focus on a single mode can result in difficulties dealing effectively with congestion. For example, as suburban expressways enable community developments to grow and move farther out from city centers, jobs and goods follow these developments. This results in increasing passenger and freight travel on the expressways, and a shifting of traffic flows that may not easily be accommodated by existing transportation choices. One panelist indicated that suburban expressways are among the least reliable in terms of travel times because, if congestion occurs, there are fewer feasible alternative routes or modes of transportation. In addition, some bottlenecks occur where modes connect, because funding is generally mode-specific, and congestion at these intermodal connections is not easily addressed. According to FHWA, public sector funding programs are generally focused on a primary mode of transportation, such as highways, or a primary purpose, such as improving air quality. This means that intermodal projects may require a broader range of funding than might be available under a single program. Panelists also noted that the types of congestion problems that are expected to worsen in the future involve interactions between long-distance and local traffic and between passengers and freight, and existing institutions may not have the capacity or the authority to address them.
For example, some local bottlenecks may hinder traffic that has regional or national significance, such as national freight flows from major coastal ports, or can affect the economies and traffic in more than one state. Current state and local planning organizations may have difficulty considering all the costs and benefits related to national or international traffic flows that affect other jurisdictions as well as their own. The concept of capacity is broader than just the physical characteristics of the transportation network (e.g., the number of lane-miles of road). The capacity of transportation systems is also determined by how well they are managed and operated (particularly publicly owned and operated systems), and how the use of those systems is managed. Many factors related to the management and operation of transportation systems can contribute to increasing congestion. Many panelists said that congestion on highways was in part due to poor management of traffic flows on the connectors between highways and poor management in clearing roads that are blocked due to accidents, inclement weather, or construction. For example, in the 75 metropolitan areas studied by the Texas Transportation Institute, 54 percent of annual vehicle delays in 2000 were due to incidents such as breakdowns or crashes. In addition, the Oak Ridge National Laboratory reported that, nationwide, significant delays are caused by work zones on highways; poorly timed traffic signals; and snow, ice, and fog. In addition, according to a number of panelists, congestion on transportation systems is also in part due to inefficient pricing of the infrastructure because users—whether they are drivers on a highway or barge operators moving through a lock—do not pay the full costs they impose on the system and on other users for their use of the system. 
They further argued that if travelers and freight carriers had to pay a higher cost for using transportation systems during peak periods to reflect the full costs they impose, they would have an incentive to avoid or reschedule some trips and to load vehicles more fully, resulting in less congestion. Congestion affects travel times and the reliability of transportation systems. As discussed earlier in this report, the Texas Transportation Institute found that 66 percent of peak period travel on roadways was congested in 2000, compared to 33 percent in 1982 in the 75 metropolitan areas studied. According to the study, this means that two of every three vehicles experience congestion in their morning or evening commute. In the aggregate, congestion results in thousands of hours of delay every day, which can translate into costs such as lost productivity and increased fuel consumption. In addition, a decrease in travel reliability imposes costs on travelers, who may arrive late to work or other appointments, and raises the cost of moving goods, resulting in higher prices for consumers. Some panelists noted that congestion, in some sense, reflects full use of transportation infrastructure, and is therefore not a problem. In addition, they explained that travelers adjust to congestion and adapt their travel routes and times, as well as housing and work choices, to avoid congestion. For example, according to the Transportation Statistics Annual Report 2000, median commute times increased about 2 minutes between 1985 and 1999, despite increases in the percentage of people driving to work alone and the average commuting distance.
For freight travel, one panelist made a similar argument, noting that transportation costs related to managing business operations have decreased as a percentage of gross national product, indicating that producers and manufacturers adjust to transportation supply by switching modes or altering delivery schedules to avoid delays and resulting cost increases. However, the Mobility 2001 report describes these adaptations by individuals and businesses as economic inefficiencies that can be very costly. According to the report, increasing congestion can cause avoidance of a substantial number of trips, resulting in a corresponding loss of the benefits of those trips. In addition to negative economic effects, travelers’ adaptation to congested conditions can also have a number of negative social effects on other people. For example, according to researchers from the Texas Transportation Institute, traffic cutting through neighborhoods to avoid congestion can cause community disruptions, and “road rage” can be partly attributed to increasing congestion.

The FHWA and FTA’s 1999 Conditions and Performance report states that significant accessibility barriers persist for some elderly people and low-income households. In addition, several panelists stated that rural populations also face accessibility difficulties. According to the Conditions and Performance report, the elderly have different mobility challenges than other populations because they are less likely to have drivers’ licenses, have more serious health problems, and may require special services and facilities. According to 1995 data, 45 percent of women and 16 percent of men over age 75 did not have drivers’ licenses, which may limit their ability to travel by car. Many of the elderly also may have difficulty using public transportation due to physical ailments. People who cannot drive themselves tend to rely on family, other caregivers, or friends to drive them, or find alternative means of transportation.
As a result, according to the 1999 Conditions and Performance report and a 1998 report about mobility for older drivers, they experience increased waiting times, uncertainty, and inconvenience, and they are required to do more advance trip planning. These factors can lead to fewer trips taken for necessary business and for recreation, as well as restrictions on times and places that health care can be obtained. Access to more flexible, demand-responsive forms of transit could enhance the mobility of the elderly, particularly in rural areas, which are difficult to serve through transit systems; however, some barriers to providing these types of services exist. For example, according to one of our panelists, some paratransit services are not permitted to carry able-bodied people, even if those people are on the route and are willing to pay for the service. As the elderly population increases over the next 10 years, issues pertaining to access are expected to become more prominent in society. Lower income levels can also be a significant barrier to transportation access. The cost of purchasing, insuring, and maintaining a car is prohibitive to some households, and 26 percent of low-income households do not own a car, compared with 4 percent of other households, according to the 1999 Conditions and Performance report. Among all low-income households, about 8 percent of trips are made in cars that are owned by others as compared to 1 percent for other income groups. Furthermore, the same uncertainties and inconveniences apply to this group as to the elderly regarding relying on others for transportation. Transportation access is important for employment opportunities to help increase income, yet this access is not always available. This is because growth in employment opportunities tends to occur in the suburbs and outlying areas, while many low-income populations are concentrated in the inner cities or in rural areas. 
In case studies of access to jobs for low-income populations, FTA researchers found that transportation barriers to job access included gaps in transit service, lack of knowledge of where transit services are provided, and high transportation costs resulting from multiple transfers and long distances traveled. Another problem they noted was the difficulty in coordinating certain types of work shifts with the availability of public transportation service. Without sufficient access to jobs, families face more obstacles to achieving the goal of independence from government assistance. Limited transportation access can also reduce opportunities for affordable housing and restrict choices for shopping and other services. Rural populations, which according to the 2000 Census grew by 10 percent over the last 10 years, also face access problems. Access to some form of transportation is necessary to connect rural populations to jobs and other amenities in city centers or, increasingly, in the suburbs. The Mobility 2001 report states that automobiles offer greater flexibility in schedule and choice of destinations than other modes of transportation, and often also provide shorter travel times with lower out-of-pocket costs. The report also notes that conventional transit systems are best equipped to serve high levels of travel demand that is concentrated in a relatively limited area or along well-defined corridors, such as inner cities and corridors between those areas and suburbs. Trips by rural residents tend to be long due to low population densities and the relative isolation of small communities. Therefore, transportation can be a challenge to provide in rural areas, especially for persons without access to private automobiles. A report prepared for the FTA in 2001 found that 1 in 13 rural residents lives in a household without a personal vehicle. In addition, the elderly made 31 percent of all rural transit trips in 2000 and persons with disabilities made 23 percent. 
However, according to a report by the Coordinating Council on Access and Mobility, while almost 60 percent of all nonmetropolitan counties had some public transportation services in 2000, many of these operations were small and offered services to limited geographic areas during limited times.

While ISTEA and TEA-21 provided funds aimed at mitigating adverse effects of transportation, concerns persist about such effects on the environment and communities. As a result of the negative consequences of transportation, tradeoffs must be made between facilitating increased mobility and giving due regard to environmental and other social goals. For example, transportation vehicles are major sources of local, urban, and regional air pollution because they depend on fossil fuels to operate. Emissions from vehicles include sulfur dioxide, lead, carbon monoxide, volatile organic compounds, particulate matter, and nitrogen oxides. In addition, emissions of greenhouse gases such as carbon dioxide, methane, and nitrous oxide are increasing, and greenhouse gases have been linked to reduction in atmospheric ozone and climate changes. According to Mobility 2001, improved technologies can help reduce per-vehicle emissions, but the increasing numbers of vehicles traveling and the total miles traveled may offset these gains. In addition, congested conditions on highways tend to exacerbate the problem because extra fuel is consumed due to increased acceleration, deceleration, and idling. Vehicle emissions in congested areas can trigger respiratory and other illnesses, and runoff from impervious surfaces can carry lawn chemicals and other pollutants into lakes, streams, and rivers, thus threatening aquatic environments. Freight transportation also has significant environmental effects. Trucks are significant contributors to air pollution.
According to the American Trucking Association, trucks were responsible for 18.5 percent of nitrogen oxide emissions and 27.5 percent of particulate emissions from mobile sources in the United States. The Mobility 2001 report states that freight trains also contribute to emissions of hydrocarbons, carbon monoxide, and nitrogen oxides, although generally at levels considerably lower than trucks. In addition, while large shipping vessels are more energy efficient than trucks or trains, they are also major sources of nitrogen oxide, sulfur dioxide, and diesel particulate emissions. According to the International Maritime Organization, ocean shipping is responsible for 22 percent of the wastes dumped into the sea on an annual basis. Barges moving freight on the inland waterway system are among the most energy efficient forms of freight transportation, contributing relatively lower amounts of noxious emissions compared with trucks and freight trains, according to the Corps of Engineers. However, the dredging and damming required to make rivers and harbors navigable can cause significant disruption to ecosystems. Noise pollution is another problem exacerbated by increasing levels of transportation. While FHWA, FTA, and many cities have established criteria for different land uses close to highways and rail lines to protect against physically damaging noise levels, average noise levels caused by road traffic in some areas can still have adverse consequences on people’s hearing. In addition, several studies have found that residential property values decrease as average noise levels rise above a certain threshold. Freight also contributes to noise pollution. According to Mobility 2001, shipping is the largest source of low-frequency, underwater noise, which may have adverse effects on marine life, although these effects are not yet fully understood. These noise levels are particularly serious on highly trafficked shipping routes.
In addition, dredging also contributes to noise pollution. Growing awareness of the environmental and social costs of transportation projects is making it more difficult to pursue major transportation improvements. According to a number of panelists, the difficulty in quantifying and measuring the costs and benefits of increased mobility also hinders the ability of transportation planners to make a strong case to local decisionmakers for mobility improvements. In addition, transportation planning and funding are mode-specific and oriented toward passenger travel, which hinders transportation planners’ ability to recognize systemwide and multi-modal strategies for addressing mobility needs and other social concerns. The panelists presented numerous approaches for addressing the types of challenges discussed throughout this report, but they emphasized that no single strategy would be sufficient. From these discussions and our other research, we have identified three key strategies that may aid transportation decisionmakers at all levels of government in addressing mobility challenges and the institutional barriers that contribute to them. These strategies include the following:

1. Focus on the entire surface and maritime transportation system rather than on specific modes or types of travel to achieve desired mobility outcomes. A systemwide approach to transportation planning and funding, as opposed to a focus on a single mode or type of travel, could improve focus on outcomes related to customer or community needs.

2. Use a full range of tools to achieve those desired outcomes. Controlling congestion and improving access will require a strategic mix of construction, corrective and preventive maintenance, rehabilitation, operations and system management, and managing system use through pricing and other techniques.

3. Provide more options for financing mobility improvements and consider additional sources of revenue.
Targeting financing to transportation projects that will achieve desired mobility outcomes might require more options for raising and distributing funds for surface and maritime transportation. However, using revenue sources that are not directly tied to the use of transportation systems could allow decisionmakers to bypass transportation planning requirements which, in turn, could limit the ability of transportation agencies to focus on and achieve desired outcomes. Some panelists said that mobility should be viewed on a systemwide basis across all modes and types of travel. Addressing the types of mobility challenges discussed earlier in this report can require a scope beyond a local jurisdiction or a state line and across more than one mode or type of travel. For example, congestion challenges often occur where modes connect or should connect—such as ports or freight hubs where freight is transferred from one mode to another, or airports that passengers need to access by car, bus, or rail. These connections require coordination of more than one mode of transportation and cooperation among multiple transportation providers and planners, such as port authorities, metropolitan planning organizations (MPO), and private freight railroads. Some panelists therefore advocated shifting the focus of government transportation agencies at the federal, state, and local levels to consider all modes and types of travel in addressing mobility challenges—as opposed to focusing on a specific mode or type of travel in planning and implementing mobility improvements. Some panelists said that current transportation planning institutions, such as state transportation departments, MPOs, or Corps of Engineers regional offices, may not have sufficient expertise, or in some cases, authority to effectively identify and implement mobility improvements across modes or types of travel. 
They suggested that transportation planning by all entities focus more closely on regional issues and highlighted the importance of cooperation and coordination among modal agencies at the federal, state, and local level, between public and private transportation providers, and between transportation planning organizations and other government and community agencies to address transportation issues. For example, several panelists said that the Alameda Corridor in Los Angeles is a good example of successful cooperation and coordination among agencies. This corridor is designed to improve freight mobility for cargo coming into the ports of Los Angeles and Long Beach and out to the rest of the country. Planning, financing, and building this corridor required cooperation among private railroads, the local port authorities, the cities of Los Angeles and Long Beach, community groups along the entire corridor, the state of California, and the federal government. Several panelists said that a greater understanding of the full life-cycle costs and benefits of various mobility improvements is needed to take a more systemwide approach to transportation planning and funding. The panelists said the cost-benefit frameworks that transportation agencies currently use to evaluate various transportation projects could be more comprehensive in considering a wider array of social and economic costs and benefits, recognizing transportation systems’ links to each other and to other social and financial systems. Many panelists advocated a systemwide, rather than mode-specific, approach to transportation planning and funding that could also improve focus on outcomes that users and communities desire from the transportation system. 
For example, one panelist described a performance-oriented funding system, in which the federal government would first define certain national interests of the transportation system—such as maintaining the entire interstate highway system or identifying freight corridors of importance to the national economy—then set national performance standards for those systems that states and localities must meet. Federal funds would be distributed to those entities that are addressing national interests and meeting the established standards. Any federal funds remaining after meeting the performance standards could then be used for whatever transportation purpose the state or locality deems most appropriate to achieve state or local mobility goals. Another panelist expanded the notion of setting national performance standards to include a recognition of the interactions between transportation goals and local economic development and quality of life goals, and to allow localities to modify national performance goals given local conditions. For example, a national performance standard, such as average speeds of 45 miles per hour for highways, might be unattainable for some locations given local conditions, and might run contrary to other local goals related to economic development. Some panelists described several other types of systems that could focus on outcomes. For example, one panelist suggested a system in which federal support would reward those states or localities that apply federal money to gain efficiencies in their transportation systems, or tie transportation projects to land use and other local policies to achieve community and environmental goals, as well as mobility goals. Another panelist described a system in which different federal matching criteria for different types of expenditures might reflect federal priorities.
For example, if infrastructure preservation became a higher national priority than building new capacity, matching requirements could be changed to a 50 percent federal share for building new physical capacity and an 80 percent federal share for preservation. Other panelists suggested that requiring state and local governments to pay for a larger share of transportation projects might provide them with incentives to invest in more cost-effective projects. If cost savings resulted, these entities might have more funds available to address other mobility challenges. Some of the panelists suggested reducing the federal match for projects in all modes to give states and localities more fiscal responsibility for projects they are planning. Other panelists also suggested that federal matching requirements should be equal for all modes to avoid creating incentives to pursue projects in one mode that might be less effective than projects in other modes. Many panelists emphasized that using a range of various tools to address mobility challenges may help control congestion and improve access. This involves a strategic mix of construction, corrective and preventive maintenance, rehabilitation, operations and system management, and managing system use through pricing or other techniques. Many of the panelists said that no one type of technique would be sufficient to address mobility challenges. Although these techniques are currently in use, panelists indicated that planners should more consistently consider a full range of techniques. Building additional infrastructure is perhaps the most familiar technique for addressing congestion and improving access to surface and maritime transportation. Several panelists expressed the view that although there is a lot of unused capacity in the transportation system, certain bottlenecks and key corridors require new infrastructure. However, building new infrastructure cannot completely eliminate congestion. 
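The differential federal matching shares described above (a 50 percent federal share for new capacity versus an 80 percent share for preservation) can be illustrated with simple arithmetic. This is a hypothetical sketch: only the two match rates come from the panelist's example, while the $100 million project cost and the `federal_share` helper are invented for illustration.

```python
# Hypothetical sketch of differential federal matching shares.
# Only the 50% / 80% rates come from the panelist's example;
# the project cost below is invented for illustration.

def federal_share(project_cost, match_rate):
    """Split a project's cost into federal and state/local portions."""
    federal = project_cost * match_rate
    return federal, project_cost - federal

# A $100 million new-capacity project at a 50 percent federal match:
fed_new, local_new = federal_share(100_000_000, 0.50)

# A $100 million preservation project at an 80 percent federal match:
fed_pres, local_pres = federal_share(100_000_000, 0.80)

print(f"New capacity:  federal ${fed_new:,.0f}, state/local ${local_new:,.0f}")
print(f"Preservation:  federal ${fed_pres:,.0f}, state/local ${local_pres:,.0f}")
```

Under these rates, a state or locality would pay $50 million toward the new-capacity project but only $20 million toward an equally sized preservation project, which is the kind of incentive toward preservation the panelist described.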
For example, according to the Texas Transportation Institute, it would require at least twice the level of current road expansion to keep traffic congestion levels constant, if that were the only strategy pursued. In addition, while adding lanes may be a useful tool to deal with highway congestion for states with relatively low population densities, this option may not be as useful or possible for states with relatively high population densities—particularly in urban areas, where the ability to add lanes is limited due to a shortage of available space. Furthermore, investments in additional transportation capacity can stimulate increases in travel demand, sometimes leading to congestion and slower travel speeds on the new or improved infrastructure. Other panelists said that an emphasis on enhancing capacity from existing infrastructure through increased corrective and preventive maintenance and rehabilitation is an important supplement to, and sometimes a substitute for, building new infrastructure. In 1999, the President’s Commission to Study Capital Budgeting reported that, because infrastructure maintenance requires more rapid budgetary spending than new construction and has a lower visibility, it is less likely to be funded at a sufficient level. However, one panelist said that for public roads, every dollar spent on preventive maintenance when the roads are in good condition saves $4 to $5 over what would have to be spent to maintain roads in fair condition or $10 to maintain roads once they are in poor condition. Maintaining and rehabilitating transportation systems can improve the speed and reliability of passenger and freight travel, thereby optimizing capital investments. Better management and operation of existing surface and maritime transportation infrastructure is another technique for enhancing mobility advocated by some panelists. 
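The preventive-maintenance figures cited by the panelist above imply large returns on early spending. The sketch below simply restates those ratios; only the $1, $4 to $5, and $10 figures come from the text, and using the midpoint of the $4 to $5 range is our assumption.

```python
# Sketch of the panelist's road-maintenance cost ratios: each $1 of
# preventive maintenance on a road in good condition saves $4-$5 over
# later work on a fair road, or about $10 once the road is in poor shape.

preventive_dollar = 1.0   # spent while the road is still in good condition
saved_vs_fair = 4.5       # midpoint of the $4-$5 range cited in the text
saved_vs_poor = 10.0      # savings once roads have deteriorated to poor

# Savings per dollar of preventive spending under each scenario
ratio_fair = saved_vs_fair / preventive_dollar
ratio_poor = saved_vs_poor / preventive_dollar

print(f"Savings per preventive dollar vs. fair roads: ${ratio_fair:.2f}")
print(f"Savings per preventive dollar vs. poor roads: ${ratio_poor:.2f}")
```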
Improving management and operations may allow the existing transportation system to accommodate additional travel without having to add new infrastructure. For example, the Texas Transportation Institute reported that coordinating traffic signal timing with changing traffic conditions could improve flow on congested roadways. In addition, according to an FHWA survey, better management of work zones—which includes accelerating construction activities to minimize their effects on the public, coordinating planned and ongoing construction activities, and using more durable construction materials— can reduce traffic delays caused by work zones and improve traveler satisfaction. Also, according to one panelist, automating the operation of locks and dams on the inland waterways could reduce congestion at these bottlenecks. Another panelist, in an article that he authored, noted that shifting the focus of transportation planning from building capital facilities to an “operations mindset” will require a cultural shift in many transportation institutions, particularly in the public sector, so that the organizational structure, hierarchy, and rewards and incentives are all focused on improving transportation management and operations. He also commented on the need to improve performance measures related to operations and management so that both the quality and the reliability of transportation services are measured. Several panelists suggested that contracting out a greater portion of operations and maintenance activities could allow public transportation agencies to focus their attention on improving overall management and developing policies to address mobility challenges. This practice could involve outsourcing operations and maintenance to private entities through competitive bidding, as is currently done for roads in the United Kingdom. 
In addition, by relieving public agencies of these functions, contracting could reduce the cost of operating transportation infrastructure and improve the level of service for each dollar invested for publicly owned transportation systems, according to one panelist. Developing comprehensive strategies for reducing congestion caused by incidents is another way to improve management and operation of surface and maritime transportation modes. According to the Texas Transportation Institute, incidents such as traffic accidents and breakdowns cause significant delays on roadways. One panelist said that some local jurisdictions are developing common protocols for handling incidents that affect more than one mode and transportation agency, such as state transportation departments and state and local law enforcement, resulting in improved communications and coordination among police, firefighters, medical personnel, and operators of transportation systems. Examples of improvements to incident management include employing roving crews to quickly move accidents and other impediments off of roads and rail and implementing technological improvements that can help barges on the inland waterways navigate locks in inclement weather, thereby reducing delays on that system. Several panelists also suggested that increasing public sector investment in technologies—known as Intelligent Transportation Systems (ITS)—that are designed to enhance the safety, efficiency, and effectiveness of the transportation network, can serve as a way of increasing capacity and mobility without making major capital investments. DOT’s ITS program has two major areas of emphasis: (1) deploying and integrating intelligent infrastructure and (2) testing and evaluating intelligent vehicles. 
ITS includes technologies that improve traffic flow by adjusting signals, facilitating traffic flow at toll plazas, alerting emergency management services to the locations of crashes, increasing the efficiency of transit fare payment systems, and other actions. Appendix IV describes the different systems that are part of DOT’s ITS program. Other technological improvements suggested by panelists included increasing information available to users of the transportation system to help people avoid congested areas and to improve customer satisfaction with the system. For example, up-to-the-minute traffic updates posted on electronic road signs or over the Internet help give drivers the information necessary to make choices about when and where to travel. It was suggested that the federal government could play a key role in facilitating the development and sharing of such innovations through training programs and research centers, such as the National Cooperative Highway Research Program, the Transit Cooperative Research Program, and possible similar programs for waterborne transportation. However, panelists cautioned that the federal government might need to deal with some barriers to investing in technology development and implementation. One panelist said that there are few incentives for agencies to take risks on new technologies. If an agency improves its efficiency, it may result in the agency receiving reduced funding rather than being able to reinvest the savings. Finally, another approach to reducing congestion without making major capital investments is to use demand management techniques to reduce the number of vehicles traveling at the most congested times and on the most congested routes. For public roads, demand management generally means reducing the number of cars traveling on particularly congested routes toward downtown during the morning commuting period and away from downtown during the late afternoon commuting period. 
One panelist, in a book that he authored, said that “the most effective means of reducing peak-hour congestion would be to persuade solo drivers to share vehicles.” One type of demand management for travel on public roads is to make greater use of pricing incentives. In particular, many economists have proposed using congestion pricing that involves charging surcharges or tolls to drivers who choose to travel during peak periods when their use of the roads increases congestion. Economists generally believe that such surcharges or tolls enhance economic efficiency by making drivers take into account the external costs they impose on others in deciding when and where to drive. These costs include congestion, as well as pollution and other external effects. The goal of congestion pricing would be to charge a toll for travel during congested periods that would make the cost (including the toll) that a driver pays for such a trip equal or close to the total cost of that trip, including external costs. These surcharges could help reduce congestion by providing incentives for travelers to share rides, use transit, travel at less congested (generally off-peak) times and on less congested routes, or make other adjustments—and at the same time, generate more revenues that can be targeted to alleviating congestion in those specific corridors. According to a report issued by the Transportation Research Board, technologies that are currently used at some toll facilities to automatically charge users could also be used to electronically collect congestion surcharges without establishing additional toll booths that would cause delays. Peak-period pricing also has applicability for other modes of transportation. Amtrak and some transit systems use peak-period pricing, which gives travelers incentives to make their trips at less congested times. In addition to pricing incentives, other demand management techniques that encourage ride-sharing can be useful in reducing congestion. 
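The congestion-pricing principle the economists describe, setting the peak-period toll so that the price a driver faces equals the trip's full social cost, can be sketched with hypothetical numbers. All dollar amounts below are invented for illustration; only the toll-equals-external-cost logic comes from the discussion above.

```python
# Minimal sketch of the congestion-pricing principle: charge a peak toll
# equal to the external costs a trip imposes, so the driver's price
# reflects the trip's full social cost. All amounts are hypothetical.

private_cost = 3.00     # fuel, vehicle wear, etc. the driver already bears
congestion_cost = 1.50  # delay the trip imposes on other travelers
pollution_cost = 0.50   # other external costs (emissions, noise)

external_cost = congestion_cost + pollution_cost
efficient_toll = external_cost  # toll that internalizes the external costs

total_social_cost = private_cost + external_cost
price_faced_by_driver = private_cost + efficient_toll

# With the toll in place, the driver's price equals the trip's social cost.
assert price_faced_by_driver == total_social_cost
print(f"Efficient peak-period toll: ${efficient_toll:.2f}")
```

A driver who values the peak trip at less than the $5.00 all-in price would shift to an off-peak time, another route, or a shared ride, which is the behavioral response the surcharge is meant to induce.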
Ride-sharing can be encouraged by establishing carpool and vanpool staging areas, providing free or preferred parking for carpools and vanpools, subsidizing transit fares, and designating certain highway lanes as high occupancy vehicle (HOV) lanes that can only be used by vehicles with a specified number of people in them (two or more). HOV lanes can provide an incentive for sharing rides because they reduce the travel time for a group traveling together relative to the time required to travel alone. This incentive is likely to be particularly strong when the regular lanes are heavily congested. Several panelists also recommended use of high occupancy toll (HOT) lanes, which combine pricing techniques with the HOV concept. Experiments with HOT lanes, which allow lower occupancy vehicles or solo drivers to pay a fee to use HOV lanes during peak traffic periods, are currently taking place in California. HOT lanes can provide motorists with a choice: if they are in a hurry, they may elect to pay to have less delay and an improved level of service compared to the regular lanes. When HOT lanes run parallel to regular lanes, congestion in regular lanes may be reduced more than would be achieved by HOV lanes. Demand management techniques on roads, particularly those involving pricing, often provoke strong political opposition. Several panelists said that instituting charges to use roads that have been available “free” is particularly unpopular because many travelers believe that they have already paid for the roads through gasoline and other taxes and should not have to pay “twice.” Other concerns about congestion pricing include equity issues because of the potentially regressive nature of these charges (i.e., the surcharges constitute a larger portion of the earnings of lower income households and therefore impose a greater financial burden on them).
In addition, some people find the concept of restricting lanes or roads to people who pay to use them to be elitist because that approach allows people who can afford to pay the tolls to avoid congestion that others must endure. Several of the panelists suggested that tolls might become more acceptable to the public if they were applied to new roads or lanes as a demonstration project so that the tolls’ effectiveness in reducing congestion and increasing commuter choices could be evaluated. Several panelists indicated that targeting the financing of transportation to achieving desired mobility outcomes, and addressing those segments of transportation systems that are most congested, would require more options for financing surface and maritime transportation projects than are currently available, and might also require more sources of revenue in the future. According to many panelists, the current system of financing surface and maritime transportation projects limits options for addressing mobility challenges. For example, several panelists said that separate funding for each mode at the federal, state, and local level can make it difficult to consider possible efficient and effective ways for enhancing mobility, and providing more flexibility in funding across modes could help address this limitation. In addition, some panelists argued that “earmarking” or designation by the Congress of federal funds for particular transportation projects bypasses traditional planning processes used to identify the highest priority projects, thus potentially limiting transportation agencies’ options for addressing the most severe mobility challenges. According to one panelist, bypassing transportation planning processes can also result in logical connections or interconnections between projects being overlooked. 
Several panelists acknowledged that the public sector could expand its financial support for alternative financing mechanisms to access new sources of capital and stimulate additional investment in surface and maritime transportation infrastructure. These mechanisms include both newly emerging and existing financing techniques such as providing credit assistance to state and local governments for capital projects and using tax policy to provide incentives to the private sector for investing in surface and maritime transportation infrastructure (see app. V for a description of alternative financing methods). The panelists emphasized, however, that these mechanisms currently provide only a small portion of the total funding that is needed for capital investment and are not, by themselves, a major strategy for addressing mobility challenges. Furthermore, they cautioned that some of these mechanisms, such as Grant Anticipation Revenue Vehicles, could create difficulties for state and local agencies to address future transportation problems, because agencies would be reliant on future federal revenues to repay the bonds. Many panelists stated that a possible future shortage of revenues presents a fundamental limitation to addressing mobility challenges. Some panelists said that, because of the increasing use of alternative fuels, revenues from the gas tax are expected to decrease in the future, possibly hindering the public sector’s ability to finance future transportation projects. In addition, one panelist explained that MPOs are required to produce financially constrained long-range plans, and the plans in the panelist’s organization indicate that future projections of revenue do not cover the rising costs of planned transportation projects. One method of raising revenue is for counties and other regional authorities to impose sales taxes for funding transportation projects. A number of counties have already passed such taxes and more are being considered nationwide. 
However, several panelists expressed concerns that this method might not be the best option for addressing mobility challenges. For example, one panelist stated that moving away from transportation user charges to sales taxes that are not directly tied to the use of transportation systems weakens the ties between transportation planning and finance. Counties and other authorities may be able to bypass traditional state and metropolitan planning processes because these sales taxes provide them with their own sources of funding for transportation. A number of panelists suggested increasing current federal fuel taxes to raise additional revenue for surface transportation projects. In contrast, other panelists argued that the federal gas tax could be reduced. They said that, under the current system, states are receiving most of the revenue raised by the federal gas tax within their state lines and therefore there is little need for the federal government to be involved in collecting this revenue, except for projects that affect more than one state or are of national significance. However, other panelists said that this might lead to a decrease in gas tax revenues available for transportation, because states may have incentives to use this revenue for purposes other than transportation or may not collect as much as is currently collected. Given that freight tonnage moved across all modes is expected to increase by 43 percent during the period from 1998 to 2010, new or increased taxes or other fees imposed on the freight sector could also help fund mobility improvements. For example, one panelist from the rail industry suggested modeling more projects on the Alameda Corridor in Los Angeles, where private rail freight carriers pay a fee to use infrastructure built with public financing. Another way to raise revenue for funding mobility improvements would be to increase taxes on freight trucking. 
According to FHWA, heavy trucks (weighing over 55,000 pounds) cause a disproportionate amount of damage to the nation’s highways and have not paid a corresponding share for the cost of pavement damage they cause. This situation will only be compounded by the large expected increases in freight tonnage moved by truck over the next 10 years. The Joint Committee on Taxation estimated that raising the ceiling on the tax paid by heavy vehicles to $1,900 could generate about $100 million per year. Another revenue raising strategy includes dedicating more of the revenues from taxes on alternative fuels, such as gasohol, to the Highway Trust Fund rather than to the U.S. Treasury’s General Fund, as currently happens. Finally, panelists also said that pricing strategies, mentioned earlier in this report as a tool to reduce congestion, are also possible additional sources of revenue for transportation purposes. We provided DOT, the Corps of Engineers, and Amtrak with draft copies of this report for their review and comment. We obtained oral comments from officials at DOT and the Corps of Engineers. These officials generally agreed with the report and provided technical comments that we incorporated as appropriate. In addition, officials from the Federal Railroad Administration within DOT commented that the report was timely and would be vital to the dialogue that occurs as the Congress considers the reauthorization of surface transportation legislation. Amtrak had no comments on the report. Our work was primarily performed at the headquarters of DOT and the Corps of Engineers (see app. VI for a detailed description of our scope and methodology). We conducted our work from September 2001 through August 2002 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. 
At that time, we will send copies of this report to the congressional committees with responsibilities for surface and maritime transportation programs; DOT officials, including the Secretary of Transportation, the administrators of the Federal Highway Administration, Federal Railroad Administration, Federal Transit Administration, and Maritime Administration, the Director of the Bureau of Transportation Statistics, and the Commandant of the U.S. Coast Guard; the Commander and Chief of Engineers, U.S. Army Corps of Engineers; the President of Amtrak, and the Director of the Office of Management and Budget. We will make copies available to others on request. This report will also be available on our home page at no charge at http://www.gao.gov. If you have any questions about this report, please contact me at [email protected] or Kate Siggerud at [email protected]. Alternatively, we can be reached at (202) 512-2834. GAO contacts and acknowledgments are listed in appendix VII. Comparing the proportion of public spending devoted to various purposes across modes is difficult due to differences in the level of public sector involvement and in the definition of what constitutes capital versus operations and maintenance expenses in each mode. For example, the operation of public roads is essentially a function of private citizens operating their own vehicles, while operations for mass transit includes spending for bus drivers and subway operators, among other items. In addition, maintenance expenditures can differ greatly from one mode to another in their definition and scope. For example, maintenance for a public road involves activities such as patching, filling potholes, and fixing signage, while maintenance for channels and harbors involves routine dredging of built up sediment and disposal or storage of the dredged material. Given these significant differences in scope, different modes classify and report on maintenance expenses in different ways. 
For public roads, capital expenditures (which include new construction, resurfacing, rehabilitation, restoration, and reconstruction of roads) constituted about one-half of total annual public sector expenditures over the last 10 years, with small increases in recent years. Of total capital expenditures in fiscal year 2000, 52 percent was used for system preservation, such as resurfacing and rehabilitation, while 40 percent was used for construction of new roads and bridges and other system expansions. These percentages have fluctuated somewhat throughout the 1990s. However, as shown in figure 8, the percentage of capital outlays spent on system preservation expenses increased from 45 percent to 52 percent between fiscal years 1993 and 2000, while construction of new roads and bridges and other system expansions declined from 49 percent to 40 percent over the same period. For transit, capital expenditures accounted for about 26 percent of total annual public sector expenditures in 1999. The federal government spends more heavily on capital than on operations for transit. The federal share of capital expenditures fluctuated throughout the 1990s but in fiscal year 2000 stood at about 50 percent, the same as it was in fiscal year 1991. The federal share of total operating expenses declined from about 5 percent in fiscal year 1991 to about 2 percent in fiscal year 2000. Federal government support to Amtrak for operating expenses and capital expenditures has fluctuated throughout the 1990s. Annual operating grants fluctuated between $300 and $600 million and capital grants between $300 and $500 million. In addition to these grants, the Taxpayer Relief Act of 1997 provided Amtrak with $2.2 billion for capital and operating purposes in fiscal years 1998 and 1999. Federal support declined in fiscal years 2000 and 2001, however, with the federal government providing grants to Amtrak of $571 and $521 million, respectively. For water transportation, spending by the U.S.
Army Corps of Engineers (Corps of Engineers) for construction of locks and dams for inland waterway navigation fell while expenditures for operations and maintenance remained at around $350 to $400 million, as shown in figure 9. By contrast, Corps of Engineers expenditures for the construction, operations, and maintenance of federal channels and harbors have increased over the past decade. During fiscal years 1991 through 2000, construction expenditures increased from $112 million to $252 million (in 2000 dollars), while operations and maintenance expenditures increased from $631 million to $671 million (in 2000 dollars). In addition to the Corps of Engineers, the U.S. Coast Guard and the Maritime Administration also spend significant amounts for water transportation, although these agencies have limited responsibility for construction or maintenance of water transportation infrastructure. Demographic factors and economic growth are the primary variables influencing national travel projections for both passenger and freight travel. However, the key assumption underlying most of these travel projections is that the capacity of the transportation system is unconstrained; that is, capacity is assumed to expand as needed in order to accommodate future traffic flows. As a result, national travel projections need to be used carefully in evaluating how capacity improvements or increasing congestion in one mode of transportation might affect travel across other modes and the entire transportation system. Future travel growth will be influenced by demographic factors. A travel forecast study conducted for the Federal Highway Administration (FHWA) used economic and demographic variables such as per capita income and population to project a 24.7 percent national cumulative increase in vehicle miles traveled for passenger vehicles on public roads between 2000 and 2010. 
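As a rough arithmetic check, the 24.7 percent cumulative increase projected by the FHWA study can be converted to an implied average annual growth rate. This annualization is our illustration, derived from the cumulative figure above rather than stated in the study:

```python
# Annualize the FHWA forecast cited above: a 24.7 percent cumulative
# increase in vehicle miles traveled (VMT) between 2000 and 2010
# corresponds to a compound annual growth rate of about 2.2 percent.

cumulative_growth = 0.247  # 24.7 percent over the forecast period
years = 10                 # 2000 through 2010

annual_rate = (1 + cumulative_growth) ** (1 / years) - 1
print(f"Implied annual VMT growth: {annual_rate * 100:.2f}%")
```

Compounding at this annual rate for 10 years reproduces the study’s cumulative figure.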
The study estimated that for every 1-percent increase in per capita income or population, vehicle miles traveled would increase nearly 1 percent. This forecast is unconstrained, however, in that it does not consider whether increased congestion or fiscal constraints will allow travel to grow at the rates projected. In part to deal with this limitation, FHWA uses another model to forecast a range of future vehicle miles traveled based on differing levels of investment. These projections recognize that if additional road capacity is provided, more travel is expected to occur than if the capacity additions are not provided. If congestion on a facility increases, some travelers will respond by shifting to alternate modes or routes, or will forgo some trips entirely. These projections are not available at this time but will be included in the U.S. Department of Transportation’s (DOT) 2002 report to Congress entitled Status of the Nation’s Highways, Bridges, and Transit: Conditions and Performance. While it is clear that travelers choose between modes of travel for reasons of convenience and cost, among other things, none of the FHWA travel forecasts consider the effects of changes in levels of travel on other modes, such as transit or rail. FHWA officials said that they would like to have a data system that projects intermodal travel, but for now such a system does not exist. The models also cannot reflect the impact of major shocks on the system, such as natural disasters or the terrorist attacks of September 2001. The Federal Transit Administration (FTA) makes national-level forecasts for growth in transit passenger miles traveled by collecting 15- to 25-year forecasts developed by metropolitan planning organizations (MPO) in the 33 largest metropolitan areas in the country. FTA calculates a national weighted average using the MPO forecasts and regional averages. MPOs create their forecasts as part of their long-range planning process. 
Unlike the first road travel forecast discussed above, the MPO forecasts for vehicle miles traveled and passenger miles traveled, according to the 1999 Conditions and Performance report, incorporate the effects of actions that the MPOs are proposing to shape demand in their areas to attain air quality and other developmental goals. The MPO plans may include transit expansion, congestion pricing, parking constraints, capacity limits, and other local policy options. MPO forecasts also have to consider funding availability. Amtrak provided us with systemwide forecasts of ridership, which are based on assumed annual economic growth of between 1 and 1.5 percent, fare increases equal to the national inflation rate, and projected ridership increases on particular routes, including new or changing service on certain routes scheduled to come on line over the forecast period. For short-distance routes, Amtrak uses a model that estimates total travel over a route by any mode, based on economic and demographic growth. The model then estimates travel on each mode competing in the corridor based on cost and service factors in each mode. For long-distance routes, Amtrak uses a different model that projects future rail ridership using variables that have been determined to influence past rail ridership, such as population, employment, travel time for rail, and level of service for rail. This model does not consider conditions on other competing modes. In forecasting growth in national freight travel, models developed by FHWA and the U.S. Army Corps of Engineers (Corps of Engineers) use growth in trade and the economy as key factors driving freight travel. Projected growth in each particular mode is determined by growth in the production of the specific mix of commodities that historically are shipped on that mode.
Therefore, any projected shift in freight movement from one mode to another is due to projected changes in the mix of commodities, or projected changes in where goods are produced and consumed. Because current or future conditions and the capacity of the freight transportation system cannot be factored into the national forecasts, a number of factors—including growing congestion, as well as the benefits of specific projects that might relieve congestion—are not considered in the projections. In addition, future trends in other factors that affect shippers’ choices of freight modes—such as relative cost, time, or reliability—are not easily quantifiable and are also linked to each system’s capacity and the congestion on each system. As such, these factors are not included in FHWA’s or Corps of Engineers’ national forecasting models. Underlying the commodity forecasts used by FHWA and the Corps of Engineers are a number of standard macro-economic assumptions concerning primarily supply side factors, such as changes in the size of the labor force and real growth in exports due to trade liberalization. Changes in border, airport, and seaport security since September 11 may affect assumptions that are embedded in these commodity forecasts. For example, increased delays and inspections at the border or at a port may make it difficult for shippers to meet just-in-time requirements, possibly resulting in a short-term shift to an alternative mode, or a limiting of trade. Although current national freight forecasts are not capacity-constrained, FHWA is developing a “Freight Analysis Framework” to provide alternative analyses, assessing certain capacity limitations. The main impediment to developing this capability is determining capacity on each mode. There are commonly accepted measures of road capacity that are being incorporated, but rail and waterway capacity is not as easily measured.
FHWA provided us with state-level forecasts of total vehicle miles traveled on public roads from 2000 to 2010, derived from data in the Highway Performance Monitoring System (HPMS) sample data set. This data set contains state-reported data on average annual daily traffic for approximately 113,000 road segments nationwide. For each sample section, HPMS includes measures of average annual daily traffic for the reporting year and estimates of future traffic for a specified forecast year, which is generally 18 to 25 years after the reporting year. It should be noted that the HPMS sample data do not include sections on any roads classified as local roads or rural minor collectors. Because the individual HPMS segment forecasts come from the states, we do not know exactly what models were used to develop them. According to officials at FHWA, the only national guidance comes from the HPMS Field Manual, which says that future average annual daily traffic should come from a technically supportable state procedure or data from MPOs or other local sources. The manual also says that HPMS forecasts for urbanized areas should be consistent with those developed by the MPO at the functional system and urbanized area level. For both local and intercity passenger travel, population growth is expected to be one of the key factors driving overall travel levels. Where that growth will occur will likely have a large effect on travel patterns and mode choices. According to the U.S. Census Bureau, the U.S. population will grow to almost 300 million by 2010. Although this represents a slower growth rate than in the past, it would still add approximately 18.4 million people to the 2000 population, and will likely also substantially increase the number of vehicles on public roads as well as the number of passengers on transit and intercity rail. The Census Bureau reported that since 1990, the greatest population growth has been in the South and West. 
According to one panelist, these regions’ metropolitan areas traditionally have lower central city densities and higher suburban densities than the Midwest and East. These areas are therefore harder to serve through transit than metropolitan areas with higher population densities, where transit can be more feasible. However, according to some transportation experts, it may not be possible to build new transit infrastructure in these areas due to environmental or other concerns. The population growth that is expected in suburban areas could lead to a larger increase in travel by private vehicles than by transit because suburban areas generally have lower population densities than inner cities, and also have more dispersed travel patterns, making them less easy to serve through conventional public transit. Although overall population growth will likely be greatest in suburban parts of metropolitan areas, high rates of growth are also predicted for rural areas. As is the case in suburbs, these rural areas are difficult to serve with anything but private automobiles because of low population densities and geographical dispersion of travel patterns, so travel by private vehicle may increase. Immigration patterns are also expected to contribute to changes in travel levels, but the extent will depend on immigration policies. For example, according to a senior researcher with the American Public Transportation Association, higher rates of immigration tend to increase transit use. In addition to overall population growth, another demographic trend that will likely affect mode choices is the aging of the population. According to data from the U.S. Census Bureau, the number of people aged 55 and over is projected to increase 26 percent between 2001 and 2010. The most rapidly growing broad age group is expected to be the population aged 85 and older, which is projected to increase 30 percent by 2010. 
According to the Federal Highway Administration and Federal Transit Administration’s 1999 Conditions and Performance report, the elderly have different mobility issues than the nonelderly because they are less likely to have drivers’ licenses, have more serious health problems, and may require special services and facilities. According to a report prepared for the World Business Council for Sustainable Development (Mobility 2001), cars driven by the elderly will constitute an increasing proportion of traffic, especially in the suburbs and rural areas, where many elderly people tend to reside. Increases in the number of older drivers can pose safety problems, in that the elderly have a higher rate of crashes per mile driven than younger drivers, and that rate rises significantly after age 85. The Mobility 2001 report also says that the driver fatality rate of drivers over 75 years of age is higher than any other age group except teenagers. Growth of the elderly population may therefore increase the importance of providing demand-responsive transit services and improving signs on public roads to make them clearer and more visible. Along with population growth, the increasing affluence of the U.S. population is expected to play a key role in local and intercity passenger travel levels and in the modes travelers choose. The 1999 Conditions and Performance report states that rates of vehicle ownership are lower in low- income households, leading those households to rely more on transit systems. According to Federal Transit Administration (FTA) officials and Mobility 2001, transit use—particularly use of buses—generally decreases as income increases. Increasing affluence also influences intercity travel levels. The 1999 Conditions and Performance report says that people with high incomes take approximately 30 percent more trips than people with low incomes, and the trips tend to be longer. Long-distance travel for business and recreation increases with income. 
Also, as income increases, travel by faster modes, such as car and air, increases, and travel by intercity bus tends to decrease. Several participants in our surface and maritime transportation panels (see app. VI) also indicated that improvements in communication technology will likely affect the amount and mode of intercity travel, but the direction and extent of the effect are uncertain. One panelist said that there is no additional cost to communicating over greater distances, so communications will replace travel to some extent, particularly as technologies improve. However, two other panelists said that communication technology might increase travel by making the benefit of travel more certain. For example, the Internet can provide people with current and extensive information about vacation destinations, potentially increasing the desire to travel. According to Mobility 2001, it is unclear whether telecommunications technology will substitute for the physical transportation of people and goods. Telecommuting and teleconferencing are becoming more common, but technological improvements would have to be significant before they could substitute for actual presence at work or in face-to-face meetings. In addition, while home-based workers do not have to commute, they tend to travel approximately the same amount as traditional workers, but differ in how their travel is distributed among trip purposes. The terrorist attacks on the United States on September 11, 2001, are expected to have some effect on passenger travel levels and choices about which mode to use, but U.S. Department of Transportation (DOT) officials and participants in the panels did not believe the long-term changes would be significant, provided that no more attacks occur.
Federal Highway Administration and Federal Railroad Administration officials speculated that increased delays in air travel due to stricter security procedures might shift some travel from air to other modes, such as car or rail, although they expected this effect to be negligible in the long term unless additional incidents occur. Finally, changes in the price (or perceived price), condition, and reliability of one modal choice as compared with another are also likely to affect levels of travel and mode choices. For example, changes in the petroleum market that affect fuel prices, or changes in government policy that affect the cost of driving or transit prices, could result in shifts between personal vehicles and transit; however, it is difficult to predict the extent to which these changes will occur. According to Mobility 2001, automobiles offer greater flexibility in schedule and choice of destinations than other modes of transportation, and often also provide shorter travel times with lower out-of-pocket costs. However, if heavy and unpredictable road congestion causes large variations in automobile travel time, there could be a shift to transit or a decrease in overall travel. According to several reports by DOT and transportation research organizations, increasing international trade, economic growth, the increasing value of cargo shipped, and changes in policies affecting certain commodities are expected to influence future volumes of freight travel and the choice of mode by which freight is shipped. Increasing international trade and national trade policies are expected to affect commodity flows, volumes, and mode choice. According to the Transportation Statistics Annual Report 2000, the globalization of businesses can shift production of goods sold in the United States to locations outside of the country, increasing total ton-miles and changing the average length of haul of shipments. 
This shift in production could also affect freight mode choice, with more commodities being shipped by multiple modes as distances increase. According to Mobility 2001, truck transportation tends to be cheaper, faster, and more energy efficient than rail and barges for shipping high-value cargo. However, as distances increase, rail and intermodal transportation (linking rail and truck travel) become more cost-efficient options. Various trade policies also affect freight flows and volumes. For example, the North American Free Trade Agreement has contributed to the increased volume of trade moving on rail and highways. According to data from the Bureau of Transportation Statistics’ Transborder Surface Freight Database, between 1996 and 2000, tonnage of imports by rail from Mexico and Canada increased by about 25 percent, and imports by truck increased 20 percent. In the maritime sector, expanding trade with the Pacific Rim increased traffic at west coast container ports. According to the Transportation Statistics Annual Report 2000, economic growth results in a greater volume of goods produced and consumed, leading to more freight moved. As the economy grows, disposable income per capita increases and individual purchasing power rises, which can cause businesses to ship more freight per capita. According to the report, freight ton-miles per capita increased more than 30 percent, from 10,600 in 1975 to 14,000 in 1999. The increasing value of cargo and the continuing shift toward a more service-oriented economy and more time-sensitive shipments have affected the volume of freight shipments and the choice of modes on which freight is shipped. According to the Transportation Statistics Annual Report 2000, there is a continuing shift toward production of high-value, low-weight products, which leads to changes in freight travel levels and mode choice. For example, it takes more ton-miles to ship $1,000 worth of steel than it does to ship $1,000 worth of cell phones.
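The ton-miles per capita figures cited above can be verified with a short calculation; the compound annual rate is our derivation, not a figure from the report:

```python
# Check the freight growth figures cited above: ton-miles per capita
# rose from 10,600 in 1975 to 14,000 in 1999 (Transportation Statistics
# Annual Report 2000). Compute total growth and the implied compound
# annual growth rate (CAGR).

start, end = 10_600, 14_000  # ton-miles per capita in 1975 and 1999
years = 1999 - 1975          # 24 years

total_growth_pct = (end / start - 1) * 100
cagr_pct = ((end / start) ** (1 / years) - 1) * 100
print(f"Total growth: {total_growth_pct:.1f}%")          # about 32 percent
print(f"Implied annual growth: {cagr_pct:.2f}% per year")
```

The result confirms the report’s “more than 30 percent” characterization and shows the growth amounts to a modest annual rate sustained over more than two decades.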
High-value cargo, such as electronics and office equipment, tends to be shipped by air or truck, while rail and barges generally carry lower-value bulk items, such as coal and grain. According to Mobility 2001, the growth of e-commerce and just-in-time inventory practices depend upon the ability to deliver goods quickly and efficiently. A report prepared for the National Cooperative Highway Research Program states that the effects of just-in-time inventory practices are to increase the number of individual shipments, decrease their length of haul, and increase the importance of on-time delivery. Both reports indicate that such practices may shift some freight from slower modes, such as rail, to faster modes, such as truck or air. In addition, the Mobility 2001 report states that as the demand for specialized goods and services grows, the demand for smaller, more specialized trucks increases. Items ordered from catalogs or on-line retailers are often delivered by specialized trucks. Policies affecting particular commodities can have a large impact on the freight industry. For example, policies concerning greenhouse gas emissions can affect the amount of coal mined and shipped. Because coal is a primary good shipped by rail and water, reduction in coal mining would have a significant effect on tonnage for those modes. Changes in the type of coal mined as a result of environmental policies—such as an increase in mining of low-sulfur coal—can also affect the regional patterns of shipments, resulting in greater ton-miles of coal shipped. Also, increasing emissions controls and clean fuel requirements may raise the cost of operating trucks and result in a shift of freight from truck to rail or barge. For example, according to Mobility 2001, recently released rules from the Environmental Protection Agency implementing more stringent controls for emissions from heavy-duty vehicles are predicted to increase the purchase price of a truck by $803.
Other environmental regulations also affect the cost of shipping freight, as when controls on the disposal of material dredged from navigation channels increase the costs of expanding those channels. Policies regarding cargo security may also affect the flow of goods into and out of the United States. For example, several of our panelists indicated that implementing stricter security measures will increase the cost of shipping freight as companies invest in the personnel and technology required. Tighter security measures could also increase the time necessary to clear cargo through Customs or other inspection stations. The U.S. Department of Transportation’s (DOT) program of Intelligent Transportation Systems (ITS) offers technology-based systems intended to improve the safety, efficiency, and effectiveness of the surface transportation system. The ITS program applies proven and emerging technologies—drawn from computer hardware and software systems, telecommunications, navigation, and other systems—to surface transportation. DOT’s ITS program has two areas of emphasis: (1) deploying and integrating intelligent infrastructure and (2) testing and evaluating intelligent vehicles. Under the first area of emphasis, the intelligent infrastructure program is composed of the family of technologies that can enhance operations in three types of infrastructure: (1) infrastructure in metropolitan areas, (2) infrastructure in rural areas, and (3) commercial vehicles. Under the ITS program, DOT provides grants to states to support ITS activities. In practice, the Congress has designated the locations and amounts of funding for ITS. DOT solicits the specific projects to be funded and ensures that those projects meet criteria established in the Transportation Equity Act for the 21st Century. Metropolitan intelligent transportation systems focus on deployment and integration of technologies in urban and suburban geographic areas to improve mobility.
These systems include:
- Arterial management systems that automate the process of adjusting signals to optimize traffic flow along arterial roadways;
- Freeway management systems that provide information to motorists and detect problems whose resolution will increase capacity and minimize congestion resulting from accidents;
- Transit management systems that enable new ways of monitoring and maintaining transit fleets to increase operational efficiencies through advanced vehicle locating devices, equipment monitoring systems, and fleet management;
- Incident management systems that enable authorities to identify and respond to vehicle crashes or breakdowns with the most appropriate and timely emergency services, thereby minimizing recovery times;
- Electronic toll collection systems that provide drivers and transportation agencies with convenient and reliable automated transactions to improve traffic flow at toll plazas and increase the operational efficiency of toll collection;
- Electronic fare payment systems that use electronic communication, data processing, and data storage techniques in the process of fare collection and in subsequent recordkeeping and funds transfer;
- Highway-rail intersection systems that coordinate traffic signal operations and train movement and notify drivers of approaching trains using in-vehicle warning systems;
- Emergency management systems that enhance coordination to ensure the nearest and most appropriate emergency service units respond to a crash;
- Regional multimodal traveler information systems that provide road and transit information to travelers to enhance the effectiveness of trip planning and en-route alternatives;
- Information management systems that provide for the archiving of data generated by ITS devices to support planning and operations; and
- Integrated systems that are designed to deliver the optimal mix of services in response to transportation system demands.
Rural Intelligent Transportation Systems are designed to deploy high potential technologies in rural environments to satisfy the needs of a diverse population of users and operators. DOT has established seven categories of rural intelligent transportation projects. They are as follows:
- Surface Transportation Weather and Winter Mobility - technologies that alert drivers to hazardous conditions and dangers, including wide-area information dissemination of site-specific safety advisories and warnings;
- Emergency Services - systems that improve emergency response to serious crashes in rural areas, including technologies that automatically mobilize the closest police, ambulances, or fire fighters in cases of collisions or other emergencies;
- Statewide/Regional Traveler Information Infrastructure - system components that provide information to travelers who are unfamiliar with the local rural area and the operators of transportation services;
- Rural Crash Prevention - technologies and systems that are directed at preventing crashes before they occur, as well as reducing crash severity;
- Rural Transit Mobility - services designed to improve the efficiency of rural transit services and their accessibility to rural residents;
- Rural Traffic Management - services designed to identify and implement multi-jurisdictional coordination, mobile facilities, and simple solutions for small communities and operations in areas where utilities may not be available; and
- Highway Operations and Maintenance - systems designed to leverage technologies that improve the ability of highway workers to maintain and operate rural roads.
The Commercial Vehicle ITS program focuses on applying technologies to improve the safety and productivity of commercial vehicles and drivers, reduce commercial vehicles’ operations costs, and facilitate regulatory processes for the trucking industry and government agencies.
This is primarily accomplished through the Commercial Vehicle Information Systems and Networks—a program that links existing federal, state, and motor carrier information systems so that all entities can share information and communicate with each other in a more timely and accurate manner. The second area of emphasis in DOT’s ITS program—testing and evaluating intelligent vehicles—is designed to foster improvements in the safety and mobility of vehicles. This component of the ITS program is meant to promote traffic safety by expediting the commercial availability of advanced vehicle control and safety systems in four classes of vehicles: (1) light vehicles, including passenger cars, light trucks, vans, and sport utility vehicles; (2) commercial vehicles, including heavy trucks and interstate buses; (3) transit vehicles, including all nonrail vehicles operated by transit agencies; and (4) specialty vehicles, including those used for emergency response, law enforcement, and highway maintenance. Transportation officials at all levels of government recognize that funding from traditional sources (i.e., state revenues and federal aid) does not always keep pace with demands for new, expanded, or improved surface and maritime transportation infrastructure. Accordingly, the U.S. Department of Transportation (DOT) has supported a broad spectrum of emerging or established alternative financing mechanisms that can be used to augment traditional funding sources, access new sources of capital and operating funds, and enable transportation providers to proceed with major projects sooner than they might otherwise. These mechanisms fall into several broad categories: (1) allowing states to pay debt financing costs with future anticipated federal highway funds, (2) providing federal credit assistance, and (3) establishing financing institutions at the state level. 
In addition, state, local, and regional governments engage in public/private partnerships to tap private sector resources for investment in transportation capital projects. The federal government helps subsidize public/private partnerships by providing them with tax exemptions. The federal government allows states to tap into Federal-aid highway funds to repay debt-financing costs associated with highway projects through the use of Grant Anticipation Revenue Vehicles (GARVEE). Under this program, states can pledge a share of future obligations of federal highway funds toward repayment of bond-related expenses, including a portion of the principal and interest payments, insurance costs, and other costs. A project must be approved by DOT’s Federal Highway Administration to be eligible for this type of assistance. The federal government also provides credit assistance in the form of loans, loan guarantees, and lines of credit for a variety of surface and maritime transportation programs, as follows: Under the Transportation Infrastructure Finance and Innovation Act of 1998 (TIFIA), the federal government provides direct loans, loan guarantees, and lines of credit aimed at leveraging federal funds to attract nonfederal coinvestment in infrastructure improvements. This program is designed to provide financing for highway, mass transit, rail, airport, and intermodal projects, including expansions of multi-state highway trade corridors; major rehabilitation and replacement of transit vehicles, facilities, and equipment; border crossing infrastructure; and other investments with regional and national benefits. Under the Rail Rehabilitation and Improvement Financing Program (RRIF), established by the Transportation Equity Act for the 21st Century (TEA-21) in 1998, the federal government is authorized to provide direct loans and loan guarantees for railroad capital improvements. 
This type of credit assistance is made available to state and local governments, government-sponsored authorities, railroads, corporations, or joint ventures that include at least one railroad. However, as of June 2002, no loans or loan guarantees had been granted under this program. Under Title XI of the Merchant Marine Act of 1936, known as the Federal Ship Financing Guarantees Program, the federal government provides for a full faith and credit guarantee of debt obligations issued by (1) U.S. or foreign shipowners for the purpose of financing or refinancing U.S. or eligible export vessels that are constructed, reconstructed, or reconditioned in U.S. shipyards; and (2) U.S. shipyards for the purpose of financing advanced shipbuilding technology. A third way that the federal government helps transportation providers finance capital projects is by supporting State Infrastructure Banks (SIB). SIBs are investment funds established at the state or regional level that can make loans and provide other types of credit assistance to public and private transportation project sponsors. Under this program, the federal government allows states to use federal grants as “seed” funds to finance capital investments in highway and transit construction projects. The federal government currently supports SIBs in 39 states. In addition to these alternative financing mechanisms directly supported by the federal government, state, local, and regional governments sometimes engage in public/private partnerships to tap private sector resources for investment in transportation capital projects. The federal government also helps subsidize public/private partnerships by providing them with tax subsidies. One such subsidy is specifically targeted towards investment in ground transportation facilities—the tax exemption for interest earned on state and local bonds that are used to finance high-speed rail facilities and government-owned docks, wharves, and other facilities. 
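The GARVEE mechanism described above is, at bottom, an amortization calculation: a state sizes its bond issue so that the level annual debt service can be covered by a pledged share of anticipated federal-aid highway funds. The sketch below illustrates that arithmetic; every figure (bond size, rate, term, apportionment) is an assumption for illustration, not an actual program value.

```python
# Hedged sketch of the GARVEE arithmetic: a state issues bonds for an
# approved highway project and pledges a share of its anticipated annual
# federal-aid highway funds to cover the level debt service. All of the
# figures below are illustrative assumptions.

def level_debt_service(principal, rate, years):
    """Annual payment that fully amortizes a bond at a fixed rate."""
    return principal * rate / (1 - (1 + rate) ** -years)

bond_principal = 100_000_000      # $100 million bond issue (assumed)
bond_rate = 0.05                  # 5 percent interest (assumed)
term_years = 12                   # amortization period (assumed)
annual_federal_aid = 250_000_000  # anticipated yearly apportionment (assumed)

payment = level_debt_service(bond_principal, bond_rate, term_years)
print(f"annual debt service: ${payment:,.0f}")
print(f"share of federal-aid funds pledged: {payment / annual_federal_aid:.1%}")
```

The pledged share is what the Federal Highway Administration would weigh when approving a project for this type of assistance, since it is the portion of future apportionments no longer available for other work.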
In addition, a Department of the Treasury study indicates that the rates of tax depreciation allowed for railroads, railroad equipment, ships, and boats are likely to provide some subsidy to investors in those assets. Partnerships between state and local governments and the private sector are formed for the purpose of sharing the risks, financing costs, and benefits of transportation projects. Such partnerships can be used to reduce costs, improve project quality and efficiency, manage risk, spur innovation, and access expertise that may not be available within the agency. These partnerships can take many forms; some examples include:

Partnerships formed to develop, finance, build, and operate new toll roads and other roadways;

Joint development of transit assets whereby land and facilities that are owned by transit agencies are sold or leased to private firms and the proceeds are used for capital investment in, and operations of, transit systems;

“Turnkey” contracts for transit construction projects whereby the contractor (1) accepts a lower price for the delivered product if the project is delayed or (2) receives a higher profit if the project is delivered earlier or under budget; and

Cross-border leases that permit foreign investors to own assets used in the United States, lease them to an American entity, and receive tax benefits under the laws of their home country. This financing mechanism offers an “up front” cost savings to transit agencies that are acquiring vehicles or other assets from a foreign firm.

Our work covered major modes of surface and maritime transportation for passengers and freight, including public roads, public transit, railways, and ports and inland waterways. To determine trends in public expenditures for surface and maritime transportation over the past 10 years, we relied on U.S.
Department of Transportation (DOT) reports and databases that document annual spending levels in each mode of transportation. We analyzed trends in total public sector and federal expenditures across modes during the 10-year period covering fiscal years 1991 through 2000, and we compared the proportion of public expenditures devoted to capital activities versus operating and maintaining the existing infrastructure during that same time period. We adjusted the expenditure data to account for inflation using separate indexes for expenditures made by the federal government and state and local governments. We used price indexes from the Department of Commerce’s Bureau of Economic Analysis’ National Income and Products Accounts. To determine projected levels of freight and passenger travel over the next 10 years, we identified projections made by DOT’s modal administrations, the U.S. Army Corps of Engineers, and Amtrak for the period covering calendar years 2001 through 2010. We interviewed officials responsible for the projections and reviewed available documentation to identify the methodology used in preparing the projections and the key factors driving them. We also obtained data on past levels of freight and passenger travel, covering fiscal years 1991 through 2000, from DOT’s modal administrations, the U.S. Army Corps of Engineers, and Amtrak. We analyzed the factors driving the trends for three types of travel—local, intercity, and freight—that have important distinctions in the types of vehicles and modes used for the travel. To identify mobility challenges and strategies for addressing those challenges, we primarily relied upon expert opinion, as well as a review of pertinent literature. In particular, we convened two panels of surface and maritime transportation experts to identify mobility issues and gather views about alternative strategies for addressing the issues and challenges to implementing those strategies.
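The inflation adjustment described above can be sketched with a simple deflator calculation: nominal expenditures are scaled to constant dollars using a price index. The index values and spending figure below are invented for illustration; the actual analysis used separate Bureau of Economic Analysis National Income and Products Accounts indexes for federal and for state and local spending.

```python
# Minimal sketch of converting nominal expenditures to constant (real)
# dollars with a price index. Values are illustrative assumptions.

price_index = {1991: 78.5, 1995: 87.2, 2000: 100.0}  # base year 2000 = 100

def to_real_dollars(nominal, year, base_year=2000):
    """Express a nominal amount in base-year (constant) dollars."""
    return nominal * price_index[base_year] / price_index[year]

nominal_1991 = 60.0  # billions of nominal dollars (assumed)
real_1991 = to_real_dollars(nominal_1991, 1991)
print(f"${nominal_1991:.1f}B (1991) is about ${real_1991:.1f}B in 2000 dollars")
```

Using separate indexes for each level of government, as the analysis did, simply means maintaining one such table per index and deflating each expenditure series with its own.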
We contracted with the National Academy of Sciences (NAS) and its Transportation Research Board (TRB) to provide technical assistance in identifying and scheduling the two panels that were held on April 1 and 3, 2002. TRB officials selected a total of 22 panelists with input from us, including a cross-section of representatives from all surface and maritime modes and from various occupations involved in transportation planning. In keeping with NAS policy, the panelists were invited to provide their individual views and the panels were not designed to build consensus on any of the issues discussed. We analyzed the content of all of the comments made by the panelists to identify common themes about key mobility challenges and strategies for addressing those challenges. Where applicable, we also identified the opposing points of view about the challenges and strategies. The names and backgrounds of the panelists are as follows. We also note that two of the panelists served as moderators for the sessions, Dr. Joseph M. Sussman of the Massachusetts Institute of Technology and Dr. Damian J. Kulash of the Eno Foundation, Inc. Benjamin J. Allen is Interim Vice President for External Affairs and Distinguished Professor of Business at Iowa State University. Dr. Allen serves on the editorial boards of the Transportation Journal and Transport Logistics, and he is currently Chair of the Committee for the Study of Freight Capacity for the Next Century at TRB. His expertise includes transportation regulation, resource allocation, income distribution, and managerial decisionmaking and his research has been published in numerous transportation journals. Daniel Brand is Vice President of Charles River Associates, Inc., in Boston, Mass. Mr. Brand has served as Undersecretary of the Massachusetts Department of Transportation, Associate Professor of City Planning at Harvard University, and Senior Lecturer in the Massachusetts Institute of Technology’s Civil Engineering Department. Mr. 
Brand edited Urban Transportation Innovation, coedited Urban Travel Demand Forecasting, and is the author of numerous monographs and articles on transportation. Jon E. Burkhardt is the Senior Study Director at Westat, Inc., in Rockville, Md. His expertise is in the transit needs of rural and small urban areas, in particular, the needs of the elderly population in such areas. He has directed studies on the ways in which advanced technology can aid rural public transit systems, the mobility challenges for older persons, and the economic impacts of rural public transportation. Sarah C. Campbell is the President of TransManagement, Inc., in Washington, D.C., where she advises transportation agencies at all levels of government, nonprofit organizations, and private foundations on transportation issues. Ms. Campbell is currently a member of the Executive Committee of the TRB. She was a founding director of the Surface Transportation Policy Project and currently serves as chairman of its board of directors. Christina S. Casgar is the Executive Director of the Foundation for Intermodal Research and Education in Greenbelt, Md. Ms. Casgar’s expertise is in transportation and logistics policies of federal, state, and local levels of government, particularly in issues involving port authorities. She has also worked with the TRB as an industry investigator to identify key issues and areas of research regarding the motor carrier industry. Anthony Downs is a Senior Fellow at the Brookings Institution. Mr. Downs’s research interests are in the areas of democracy, demographics, housing, metropolitan policy, real estate, real estate finance, “smart growth,” suburban sprawl, and urban policy. He is the author of New Visions for Metropolitan America (1994), Stuck in Traffic: Coping with Peak-Hour Traffic Congestion (1992), and several policy briefs published by the Brookings Institution. Thomas R. 
Hickey served until recently as the General Manager of the Port Authority Transit Corporation in Lindenwold, N.J. Mr. Hickey has 23 years of public transit experience, and he is a nationally recognized authority in the field of passenger rail operations and the design of intermodal facilities. Ronald F. Kirby is the Director of Transportation Planning at the Metropolitan Washington Council of Governments. Dr. Kirby is responsible for conducting long-range planning of the highway and public transportation system in the Washington, D.C., region, assessing the air quality implications of transportation plans and programs, implementing a regional ridesharing program, and participating in airport systems planning in the region. Prior to joining the Council of Governments, he conducted transportation studies for the Urban Institute and the World Bank. Damian J. Kulash is the President and Chief Executive Officer of the Eno Transportation Foundation, Inc., in Washington, D.C. Dr. Kulash established a series of forums at the Foundation addressing major issues affecting all transportation modes including economic returns on transportation investment, coordination of intermodal freight operations in Europe and the United States, and development of a U.S. transportation strategy that is compatible with national global climate change objectives. He has published numerous articles in transportation journals and directed studies at the Congressional Budget Office and the TRB. Charles A. Lave is a Professor of Economics (Emeritus) at the University of California, Irvine where he served as Chair of the Economics Department. Dr. Lave has been a visiting scholar at the Massachusetts Institute of Technology and Harvard University, and he served on the Board of Directors of the National Bureau of Economic Research from 1991 through 1997. He has published numerous articles on transportation pricing and other topics. 
Stephen Lockwood is Vice President of Parsons Corporation, an international firm that provides transportation planning, design, construction, engineering, and project management services. Mr. Lockwood is also a consultant to the American Association of State Highway and Transportation Officials (AASHTO), the Federal Highway Administration (FHWA), and other transportation organizations. Prior to joining Parsons, he served as Associate Administrator for Policy at FHWA. Timothy J. Lomax is a Research Engineer at the Texas Transportation Institute at Texas A&M University. Dr. Lomax has published extensively on urban mobility issues and he developed a methodology used to assess congestion levels and costs in major cities throughout the United States. He is currently conducting research, funded by nine state transportation departments, to improve mobility measuring capabilities. James R. McCarville is the Executive Director of the Port of Pittsburgh Commission. He also serves as the President of the trade association, Inland Rivers’ Ports and Terminals, Inc., and is a member of the Marine Transportation System National Advisory Council, a group sponsored by the U.S. Secretary of Transportation. Mr. McCarville previously served as a consultant to the governments of Brazil, Uruguay, and Mexico on matters of port organization, operational efficiency, and privatization. James W. McClellan is Senior Vice President for Strategic Planning at the Norfolk Southern Corporation in Norfolk, Va., where he previously held positions in corporate planning and development. Prior to joining Norfolk Southern, he served in various marketing and planning positions with the New York Central Railroad, DOT’s Federal Railroad Administration, and the Association of American Railroads. Michael D. Meyer is a Professor in the School of Civil and Environmental Engineering at the Georgia Institute of Technology and was the Chair of the school from 1995 to 2000. 
He previously served as Director of Transportation Planning for the state of Massachusetts. Dr. Meyer’s expertise includes transportation planning, public works economics and finance, public policy analysis, and environmental impact assessments. He has written over 120 technical articles and has authored or co-authored numerous texts on transportation planning and policy. William W. Millar is President of the American Public Transportation Association (APTA). Prior to joining APTA, he was executive director of the Port Authority of Allegheny County in Pittsburgh, Pa. Mr. Millar is a nationally recognized leader in public transit and has served on or as Chair of the executive committees of TRB, the Transit Development Corporation, APTA, and the Pennsylvania Association of Municipal Transportation Authorities. Alan E. Pisarski is an independent transportation consultant in Falls Church, Va., providing services to public and private sector clients in the United States and abroad in the areas of transport policy, travel behavior, and data analysis and development. He has served as an advisor to numerous transportation and statistics agencies and transportation trade associations. He has also conducted surface transportation reviews for AASHTO and FHWA. Craig E. Philip is President and Chief Executive Officer of the Ingram Barge Company in Nashville, Tenn. He has served in various professional and senior management capacities in the maritime, rail, and intermodal industries and has held adjunct faculty positions at Princeton University and Vanderbilt University. Dr. Philip serves on the Executive Committee of the American Waterways Operators Association, the Marine Transportation System National Advisory Council, and the National Academy of Sciences’ Marine Board, and he is immediate past Chairman of the National Waterways Conference. Arlee T. Reno is a consultant with Cambridge Systematics in Washington, D.C. Mr. 
Reno has expertise in performance-based planning and measurement, multimodal investment analysis, urban transportation costs, alternative tax sources, and revenue forecasting for highway agencies. He has conducted reviews for the FHWA, AASHTO, and numerous state transportation agencies. Joseph M. Sussman is the JR East Professor in the Department of Civil and Environmental Engineering and the Engineering Systems Division at the Massachusetts Institute of Technology. Dr. Sussman is the author of Introduction to Transportation Systems (2000) and specializes in transportation systems and institutions, regional strategic transportation planning, intercity freight and passenger rail, intelligent transportation systems, simulation and risk assessment methods, and complex systems and he has authored numerous publications in those areas. He has served as Chair of TRB committees and as the Chairman of its Executive Committee in 1994, and he serves on the Board of Directors of ITS America and ITS Massachusetts. Louis S. Thompson is a Railways Advisor for the World Bank where he consults on all of the Bank’s railway lending activities. Prior to joining the Bank, Mr. Thompson held a number of senior positions in DOT’s Federal Railroad Administration, including Acting Associate Administrator for Policy, Associate Administrator for Passenger and Freight Services, Associate Administrator for Intercity Services, and Director of the Northeast Corridor Improvement Project. He has also served as an economics and engineering consultant. Martin Wachs is the Director of the Institute of Transportation Studies at the University of California, Berkeley and he holds faculty appointments in the departments of City and Regional Planning and Civil and Environmental Engineering at the university. Dr. 
Wachs has published extensively in the areas of transportation planning and policy, especially as related to elderly populations, fare and subsidy policies, crime in public transit, ethics, and forecasting. He currently serves as Chairman of the TRB and has served on various transportation committees for the state of California. In addition to the above, Christine Bonham, Jay Cherlow, Helen DeSaulniers, Colin Fallon, Rita Grieco, Brandon Haller, David Hooper, Jessica Lucas, Sara Ann Moessbauer, Jobenia Odum, and Andrew Von Ah of GAO, as well as the experts identified in appendix VI, made key contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. 
To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.

The U.S. surface and maritime transportation systems include roads, mass transit systems, railroads, and ports and waterways. One of the major goals of these systems is to provide and enhance mobility, that is, the free flow of passengers and goods. Mobility provides people with access to goods, services, recreation, and jobs; provides businesses with access to materials, markets, and people; and promotes the movement of personnel and material to meet national defense needs. During the past decade, total public sector spending increased for public roads and transit, remained constant for waterways, and decreased for rail. Passenger and freight travel are expected to increase over the next 10 years, according to Department of Transportation projections. Passenger vehicle travel on public roads is expected to grow by 24.7 percent from 2000 to 2010. Passenger travel on transit systems is expected to increase by 17.2 percent over the same period. Amtrak has estimated that intercity passenger rail ridership will increase by 25.9 percent from 2001 to 2010. The key factors behind increases in passenger travel, and the modes travelers choose, are expected to be population growth, the aging of the population, and rising affluence. According to GAO's expert panelists and other sources, with increasing passenger and freight travel, the surface and maritime transportation systems face a number of challenges that involve ensuring continued mobility while maintaining a balance with other social goals, such as environmental preservation. These challenges include (1) preventing congestion from overwhelming the transportation system, (2) ensuring access to transportation for certain underserved populations, and (3) addressing the transportation system's negative effects on the environment and communities.
There is no one solution for the mobility challenges facing the nation, and GAO's expert panelists indicated that numerous approaches are needed to address these challenges. The strategies include (1) focusing on the entire surface and maritime transportation system rather than on specific modes and types of travel, (2) using a full range of tools to achieve desired mobility outcomes, and (3) providing more options for financing mobility improvements and considering additional sources of revenue.
The 1986 Compact of Free Association between the United States, the Federated States of Micronesia (FSM), and the Republic of the Marshall Islands (RMI) provided a framework for the United States to work toward achieving its three main goals: (1) to secure self-government for the FSM and the RMI, (2) to assist the FSM and the RMI in their efforts to advance economic development and self-sufficiency, and (3) to ensure certain national security rights for all of the parties. The first goal has been met. The FSM and the RMI are independent nations and are members of international organizations such as the United Nations. The second goal of the Compact, advancing economic development and self-sufficiency for both countries, was to be accomplished primarily through U.S. direct financial payments (to be disbursed and monitored by the U.S. Department of the Interior) to the FSM and the RMI. For 1987 through 2003, U.S. assistance to the FSM and the RMI to support economic development is estimated, on the basis of Interior data, to be about $2.1 billion. Economic self-sufficiency has not been achieved. Although total U.S. assistance (Compact direct funding as well as U.S. programs and services) as a percentage of total government revenue has fallen in both countries (particularly in the FSM), the two nations remain highly dependent on U.S. funds. U.S. direct assistance has maintained standards of living that are higher than could be achieved in the absence of U.S. support. Further, the U.S., FSM, and RMI governments provided little accountability over Compact expenditures. The third goal of the Compact, securing national security rights for all parties, has been achieved. The Compact obligates the United States to defend the FSM and the RMI against an attack or the threat of attack in the same way it would defend its own citizens.
The Compact also provides the United States with the right of “strategic denial,” the ability to prevent access to the islands and their territorial waters by the military personnel of other countries or the use of the islands for military purposes. In addition, the Compact grants the United States a “defense veto.” Finally, through a Compact-related agreement, the United States secured continued access to military facilities on Kwajalein Atoll in the RMI through 2016. In a previous report, we identified Kwajalein Atoll as the key U.S. defense interest in the two countries. Of these rights, only the defense veto is due to expire in 2003 if not renewed. Another aspect of the special relationship between the FSM and the RMI and the United States involves the unique immigration rights that the Compact grants. Through the original Compact, citizens of both nations are allowed to live and work in the United States as “nonimmigrants” and can stay for long periods of time, with few restrictions. Further, the Compact exempted FSM and RMI citizens from meeting U.S. passport, visa, and labor certification requirements when entering the United States. In recognition of the potential adverse impacts that Hawaii and nearby U.S. commonwealths and territories could face as a result of an influx of FSM and RMI citizens, the Congress authorized Compact impact payments to address the financial impact of these nonimmigrants on Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands (CNMI). By 1998, more than 13,000 FSM and RMI citizens had made use of the Compact immigration provisions and were living in the three areas. The governments of the three locations have provided the U.S. government with annual Compact nonimmigrant impact estimates; for example, in 2000 the total estimated impact for the three areas was $58.2 million. In that year, Guam received $7.58 million in impact funding, while the other two areas received no funding. 
In the fall of 1999, the United States and the two Pacific Island nations began negotiating economic assistance and defense provisions of the Compact that were due to expire. Immigration issues were also addressed. According to the Department of State, the aims of the amended Compacts are to (1) continue economic assistance to advance self-reliance, while improving accountability and effectiveness; (2) continue the defense relationship, including a 50-year lease extension (beyond 2016) of U.S. military access to Kwajalein Atoll in the RMI; (3) strengthen immigration provisions; and (4) provide assistance to lessen the impact of Micronesian migration on Hawaii, Guam, and the CNMI. Under the amended Compacts with the FSM and the RMI, new congressional authorizations of approximately $3.5 billion in funding would be required over the next 20 years, with a total possible authorization through 2086 of $6.6 billion. Economic assistance would be provided to the two countries for 20 years (2004 through 2023), with all subsequent funding directed to the RMI for continued U.S. access to military facilities in that country. Under the U.S. proposals, annual grant amounts to each country would be reduced each year in order to encourage budgetary self-reliance and transition the countries from receiving annual U.S. grant funding to receiving annual trust fund earnings. This decrease in grant funding, combined with FSM and RMI population growth, would also result in falling per capita grant assistance over the funding period, particularly for the RMI. If the trust funds established in the amended Compacts earn a 6 percent rate of return, the FSM trust fund would be insufficient to replace expiring annual grants. The RMI trust fund would replace grants in fiscal year 2024 but would become insufficient for this purpose by fiscal year 2040. Under the amended Compacts with the FSM and the RMI, new congressional authorizations of approximately $6.6 billion could be required for U.S.
payments from fiscal years 2004 to 2086, of which $3.5 billion would be required for the first 20 years of the Compacts (see table 1). The share of new authorizations to the FSM would be about $2.3 billion and would end after fiscal year 2023. The share of new authorizations to the RMI would be about $1.2 billion for the first 20 years, with about $300 million related to extending U.S. military access to Kwajalein Atoll through 2023. Further funding of $3.1 billion for the remainder of the period corresponds to extended grants to Kwajalein and payments related to U.S. military use of land at Kwajalein Atoll. The cost of this $6.6 billion new authorization, expressed in fiscal year 2004 U.S. dollars, would be $3.8 billion. This new authorized funding would be provided to each country in the form of (1) annual grant funds targeted to priority areas (such as health, education, and infrastructure); (2) contributions to a trust fund for each country such that trust fund earnings would become available to the FSM and the RMI in fiscal year 2024 to replace expiring annual grants; (3) payments the U.S. government makes to the RMI government that the RMI transfers to Kwajalein landowners to compensate them for the U.S. use of their lands for defense sites; and (4) an extension of federal services that have been provided under the original Compact but are due to expire in fiscal year 2003. Under the U.S. proposals, annual grant amounts to each country would be reduced each year in order to encourage budgetary self-reliance and transition the countries from receiving annual U.S. grant funding to receiving annual trust fund earnings. Thus, the amended Compacts increase annual U.S. contributions to the trust funds each year by the grant reduction amount. This decrease in grant funding, combined with FSM and RMI population growth, would also result in falling per capita grant assistance over the funding period, particularly for the RMI (see fig. 1). Using published U.S.
Census population growth rate projections for the two countries, the real value of grants per capita to the FSM would begin at an estimated $687 in fiscal year 2004 and would further decrease over the course of the Compact to $476 in fiscal year 2023. The real value of grants per capita to the RMI would begin at an estimated $627 in fiscal year 2004 and would further decrease to an estimated $303 in fiscal year 2023. The reduction in real per capita funding over the next 20 years is a continuation of the decreasing amount of available grant funds (in real terms) that the FSM and the RMI had during the 17 years of prior Compact assistance. The decline in annual grant assistance could impact FSM and RMI government budget and service provision, employment prospects, migration, and the overall gross domestic product (GDP) outlook, though the immediate effect is likely to differ between the two countries. For example, the FSM is likely to experience fiscal pressures in 2004, when the value of Compact grant assistance drops in real terms by 8 percent relative to the 2001 level (a reduction equal to 3 percent of GDP). For the RMI, however, the proposed level of Compact grant assistance in 2004 would actually be 8 percent higher in real terms than the 2001 level (an increase equal to 3 percent of GDP). According to the RMI, this increase would likely be allocated largely to the infrastructure investment budget and would provide a substantial stimulus to the economy in the first years of the new Compact. The amended Compacts were designed to build trust funds that, beginning in fiscal year 2024, yield annual earnings to replace grant assistance that ends in 2023. Both the FSM and the RMI are required to provide an initial contribution to their respective trust funds of $30 million. In designing the trust funds, the Department of State assumed that the trust fund would earn a 6 percent rate of return. 
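The per capita decline described above is driven by two compounding forces: a grant schedule that steps down each year and a population that grows each year. The sketch below illustrates that arithmetic; the starting grant, annual step, population, and growth rate are assumptions chosen only to produce a path of the same general shape, not the actual Compact figures or Census projections.

```python
# Illustrative sketch of the per capita grant arithmetic: a grant that
# steps down by a fixed amount each year, divided by a population that
# grows at a constant rate. All inputs are assumptions for illustration.

def per_capita_path(grant_start, grant_step, pop_start, pop_growth, years):
    """Per capita grant value for each year of the funding period."""
    path = []
    grant, population = grant_start, float(pop_start)
    for _ in range(years):
        path.append(grant / population)
        grant -= grant_step           # annual grant reduction
        population *= 1 + pop_growth  # population growth compounds
    return path

path = per_capita_path(76_000_000, 800_000, 110_000, 0.01, 20)
print(f"year 1: ${path[0]:,.0f} per person; year 20: ${path[-1]:,.0f}")
```

Even a modest 1 percent annual population growth, combined with the grant decrement, produces the steady erosion in real per capita assistance that the report describes.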
The amended Compacts do not address whether trust fund earnings should be sufficient to cover expiring federal services, but they do create a structure that sets aside earnings above 6 percent, should they occur, that could act as a buffer against years with low or negative trust fund returns. Importantly, whether the estimated value of the proposed trust funds would be sufficient to replace grants or create a buffer account would depend on the rate of return that is realized. If the trust funds earn a 6 percent rate of return, then the FSM trust fund would yield a return of $57 million in fiscal year 2023, an amount that would fall short of the expiring grants by an estimated $27 million. The RMI trust fund would yield a return of $33 million in fiscal year 2023, an estimated $5 million above the amount required to replace grants in fiscal year 2024. Nevertheless, the RMI trust fund would become insufficient for replacing grant funding by fiscal year 2040. If the trust funds are composed of stocks (60 percent of the portfolio) and long-term government bonds (40 percent of the portfolio) such that the forecasted average return is around 7.9 percent, then both trust funds would yield returns sufficient to replace expiring grants and to create a buffer account. However, while the RMI trust fund should continue to grow in perpetuity, the FSM trust fund would eventually deplete the buffer account and fail to replace grant funding by fiscal year 2048. I will now discuss provisions in the amended Compacts designed to provide improved accountability over, and effectiveness of, U.S. assistance. This is an area where we have offered several recommendations in past years, as we have found accountability over past assistance to be lacking. In sum, most of our recommendations regarding future Compact assistance have been addressed with the introduction of strengthened accountability measures in the signed amended Compacts and related agreements.
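The trust fund mechanics discussed above can be sketched as a two-phase simulation: during the grant period, each year's grant reduction is redirected into the fund as an extra contribution; after grants expire, the fund must cover the lost grants from its own earnings, and at a low enough rate of return the balance is eventually exhausted. All dollar amounts below are illustrative assumptions, not the actual Compact schedule or GAO's projections.

```python
# Rough simulation of the trust fund build-up and drawdown. The initial
# deposit, contribution schedule, and drawdown need are assumed figures.

def simulate_fund(initial, first_contribution, contribution_step,
                  rate, build_years, annual_drawdown, draw_years):
    """Return the drawdown year (0-based) in which the fund is exhausted,
    or None if it survives the whole drawdown horizon."""
    balance = initial
    contribution = first_contribution
    for _ in range(build_years):           # contribution phase (grants still flow)
        balance = balance * (1 + rate) + contribution
        contribution += contribution_step  # grant cut becomes extra contribution
    for year in range(draw_years):         # drawdown phase (grants expired)
        balance = balance * (1 + rate) - annual_drawdown
        if balance < 0:
            return year
    return None

# A 6 percent return cannot sustain the assumed drawdown indefinitely,
# while a higher blended-portfolio return lets the fund grow in perpetuity.
for rate in (0.06, 0.079):
    print(rate, simulate_fund(30e6, 20e6, 1e6, rate, 20, 85e6, 60))
```

The same mechanism explains the asymmetry the report describes: a fund whose post-2023 earnings start below the required drawdown shrinks and eventually fails, while one whose earnings start above it compounds indefinitely.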
I must emphasize, however, that the extent to which these provisions will ultimately provide increased accountability over, and effectiveness of, future U.S. assistance will depend upon how diligently the provisions are implemented and monitored by all governments. The following summary describes key accountability measures included in the amended Compacts and related agreements: The amended Compacts would require that grants be targeted to priority areas such as health, education, the environment, and public infrastructure. In both countries, 5 percent of the amount dedicated to infrastructure, combined with a matching amount from the island governments, would be placed in an infrastructure maintenance fund. Compact-related agreements with both countries (the so-called “fiscal procedures agreements”) would establish a joint economic management committee for the FSM and the RMI that would meet at least once annually. The duties of the committees would include (1) reviewing planning documents and evaluating island government progress to foster economic advancement and budgetary self-reliance; (2) consulting with program and service providers and other bilateral and multilateral partners to coordinate or monitor the use of development assistance; (3) reviewing audits; (4) reviewing performance outcomes in relation to the previous year’s grant funding level, terms, and conditions; and (5) reviewing and approving grant allocations (which would be binding) and performance objectives for the upcoming year. Further, the fiscal procedures agreements would give the United States control over the annual review process: The United States would appoint three government members to each committee, including the chairman, while the FSM or the RMI would appoint two government members. Grant conditions normally applicable to U.S. state and local governments would apply to each grant. 
General terms and conditions for the grants would include conformance to plans, strategies, budgets, project specifications, architectural and engineering specifications, and performance standards. Other special conditions or restrictions could be attached to grants as necessary. The United States could withhold payments if either country fails to comply with grant terms and conditions. In addition, funds could be withheld if the FSM or RMI governments do not cooperate in U.S. investigations regarding whether Compact funds have been used for purposes other than those set forth in the amended Compacts. The fiscal procedures agreements would impose numerous reporting requirements on the two countries. For example, each country must prepare strategic planning documents that are updated regularly, annual budgets that propose sector expenditures and performance measures, annual reports to the U.S. President regarding the use of assistance, quarterly and annual financial reports, and quarterly grant performance reports. The amended Compacts’ trust fund management agreements would grant the U.S. government control over trust fund management: The United States would appoint three members, including the chairman, to a committee to administer the trust funds, while the FSM or the RMI would appoint two members. After the initial 20 years, the trust fund committee would remain the same, unless otherwise agreed by the original parties. The fiscal procedures agreements would require the joint economic management committees to consult with program providers in order to coordinate future U.S. assistance. However, we have seen no evidence demonstrating that an overall assessment of the appropriateness, effectiveness, and oversight of U.S. programs has been conducted, as we recommended. The successful implementation of the many new accountability provisions will require a sustained commitment by the three governments to fulfill their new roles and responsibilities.
Appropriate resources from the United States, the FSM, and the RMI represent one form of this commitment. While the amended Compacts do not address staffing issues, officials from Interior’s Office of Insular Affairs have informed us that their office intends to post six staff in a new Honolulu office. Further, an Interior official noted that his office has brought one new staff on board in Washington, D.C., and intends to post one person to work in the RMI (one staff is already resident in the FSM). We have not conducted an assessment of Interior’s staffing plan and rationale and cannot comment on the adequacy of the plan or whether it represents sufficient resources in the right location. The most significant defense-related change in the amended Compacts is the extension of U.S. military access to Kwajalein Atoll in the RMI. While the U.S. government had already secured access to Kwajalein until 2016 through the 1986 MUORA, the newly revised MUORA would grant the United States access until 2066, with an option to extend for an additional 20 years to 2086. According to a Department of Defense (DOD) official, recent DOD assessments have envisioned that access to Kwajalein would be needed well beyond 2016. He stated that DOD has not undertaken any further review of the topic, and none is currently planned. This official also stated that, given the high priority accorded to missile defense programs and to enhancing space operations and capabilities by the current administration, and the inability to project the likely improvement in key technologies beyond 2023, the need to extend the MUORA beyond 2016 is persuasive. He also emphasized that the U.S. government has flexibility in that it can end its use of Kwajalein Atoll any time after 2023 by giving advance notice of 7 years and making a termination payment. We have estimated that the total cost of this extension would be $3.4 billion (to cover years 2017 through 2086). 
The majority of this funding ($2.3 billion) would be provided by the RMI government to Kwajalein Atoll landowners, while the remainder ($1.1 billion) would be used for development and impact on Kwajalein Atoll. According to a State Department official, there are approximately 80 landowners. Four landowners receive one-third of the annual payment, which is based on acreage owned. This landowner funding (along with all other Kwajalein-related funds) through 2023 would not be provided by DOD but would instead continue as an Interior appropriation. Departmental responsibility for authorization and appropriation for Kwajalein-related funding beyond 2023 has not been determined, according to the Department of State. Of note, the Kwajalein Atoll landowners have not yet agreed to sign an amended land-use agreement with the RMI government to extend U.S. access to Kwajalein beyond 2016 at the funding levels established in the amended Compact. While the original Compact’s immigration provisions are not expiring, the Department of State targeted them as requiring changes. The amended Compacts would strengthen the immigration provisions of the Compact by adding new restrictions and expressly applying the provisions of the Immigration and Nationality Act of 1952, as amended (P.L. 82-414), to Compact nonimmigrants. There are several new immigration provisions in the amended Compacts that differ from those contained in the original Compact. For example, Compact nonimmigrants would now be required to carry a valid passport in order to be admitted into the United States. Further, children coming to the United States for the purpose of adoption would not be admissible under the amended Compacts. Instead, these children would have to apply for admission to the United States under the general immigration requirements for adopted children.
In addition, the Attorney General would have the authority to issue regulations that specify the time and conditions of a Compact nonimmigrant’s admission into the United States (under the original Compact, regulations could be promulgated to establish limitations on Compact nonimmigrants in U.S. territories or possessions). Further, the implementing legislation for the amended Compacts would provide $15 million annually for U.S. locations that experience costs associated with Compact nonimmigrants. This amount would not be adjusted for inflation, would be in effect for fiscal years 2004 through 2023, and would total $300 million. Allocation of these funds between locations such as Hawaii, Guam, and the CNMI would be based on the number of qualified nonimmigrants in each location. Mr. Chairman and Members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For future contacts regarding this testimony, please call Susan S. Westin or Emil Friberg, Jr., at (202) 512-4128. Individuals making key contributions to this testimony included Leslie Holen, Kendall Schaefer, Mary Moutsos, and Rona Mendelsohn. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In 1986, the United States entered into a Compact of Free Association with the Pacific Island nations of the Federated States of Micronesia, or FSM, and the Republic of the Marshall Islands, or RMI. The Compact provided about $2.1 billion in U.S. funds, supplied by the Department of the Interior, over 17 years (1987-2003) to the FSM and the RMI.
These funds were intended to advance economic development. In a past report, GAO found that this assistance did little to advance economic development in either country, and accountability over funding was limited. The Compact also established U.S. defense rights and obligations in the region and allowed for migration from both countries to the United States. The three parties recently renegotiated expiring economic assistance provisions of the Compact in order to provide an additional 20 years of assistance (2004-2023). In addition, the negotiations addressed defense and immigration issues. The House International Relations and Resources Committees requested that GAO report on Compact negotiations. The amended Compacts of Free Association between the United States and the FSM and the RMI to renew expiring U.S. assistance could potentially cost the U.S. government about $6.6 billion in new authorizations from the Congress. Of this amount, $3.5 billion would cover payments over a 20-year period (2004-2023), while $3.1 billion represents payments for U.S. military access to Kwajalein Atoll in the RMI for the years 2024 through 2086. While the level of annual grant assistance to both countries would decrease each year, contributions to trust funds--meant to eventually replace grant funding--would increase annually by a comparable amount. Nevertheless, at an assumed annual 6 percent rate of return, earnings from the FSM trust fund would be unable to replace expiring grant assistance in 2024, while earnings from the RMI trust fund would encounter the same problem by 2040. The amended Compacts strengthen reporting and monitoring measures that could improve accountability over assistance, if diligently implemented. 
These measures include the following: assistance grants would be targeted to priority areas such as health and education; annual reporting and consultation requirements would be expanded; and funds could be withheld for noncompliance with grant terms and conditions. The successful implementation of the many new accountability provisions will require appropriate resources and sustained commitment from the United States, the FSM, and the RMI. Regarding defense, U.S. military access to Kwajalein Atoll in the RMI would be extended from 2016 through 2066, with an option to extend through 2086. Finally, Compact provisions addressing immigration have been strengthened. For example, FSM and RMI citizens entering the United States would need to carry a passport, and the U.S. Attorney General could, through regulations, specify the time and conditions of admission to the United States for these citizens.
NQF is a nonprofit organization established in 1999 to foster agreement, or consensus, on national standards for measuring and publicly reporting health care performance data. Its membership includes more than 400 organizations that represent multiple sectors of the health care system, including providers, consumers, and researchers. NQF’s mission focuses on three core areas: (1) building consensus on national priorities and goals for performance improvement and working in partnership to achieve them, (2) endorsing national consensus standards for measuring and publicly reporting on performance, and (3) promoting the attainment of national goals through education and outreach programs. Prior to its contract with HHS, NQF established a consensus development process (CDP) to evaluate available health care quality measures to determine which ones are qualified to be endorsed—that is, recognized—as national standards. Under this process, organizations that develop quality measures submit them to NQF for consideration, in response to specific solicitations by NQF. NQF forms a committee of experts from its member organizations as well as other organizations and agencies to conduct an objective and transparent review of these quality measures against four standardized criteria established by NQF, such as whether the measures are scientifically acceptable. After this committee evaluates the measures against these criteria, NQF’s process allows for a period during which its member organizations and the public may comment on the committee’s recommendation for each measure. The process also provides for a period for its member organizations to vote on whether the measures should be endorsed by NQF as a national standard. Ultimately, NQF’s board of directors makes a final decision on whether NQF should formally endorse the measures. As of October 2011, NQF has endorsed over 600 health care quality measures in 27 areas, such as cancer and diabetes.
HHS uses NQF-endorsed measures in its programs and initiatives to promote quality measurement, and NQF continues to endorse quality measures separate from its contract with HHS. NQF’s work under the contract includes endorsement of quality measures and other activities that are expected to support HHS’s quality measurement efforts, such as through value-based purchasing programs. Specifically, NQF’s work under the contract consists of various projects under the nine contract activities related to health care quality measurement. The work plans developed annually to respond to MIPPA and NQF’s technical proposal to respond to PPACA delineate the projects NQF is required to conduct under the nine contract activities, as well as expected time frames and cost estimates for the projects for each year. Table 1 provides more detailed information on the nine contract activities. Some of these activities are required by either MIPPA or PPACA, while others are quality measurement activities established by HHS or administrative activities. To help determine the activities and the projects under the nine contract activities that NQF is expected to perform during each contract year, HHS has established an interagency workgroup that comprises officials from multiple divisions within HHS, including the Agency for Healthcare Research and Quality, the Centers for Medicare & Medicaid Services (CMS), and the Office of the National Coordinator for Health Information Technology. The workgroup is responsible for prioritizing and selecting the activities and projects under each activity that NQF is expected to perform during each contract year. HHS officials told us that the representatives from these various HHS agencies provide input on the work NQF is expected to perform, including determining quality measures requested from NQF for their respective programs. The activities and projects selected by the interagency workgroup become part of NQF’s scope of work under the contract.
Some of the projects under the contract activities that NQF is expected to perform during the year will be ongoing from the previous contract year while new work will be incorporated into the work plan as necessary. For the NQF contract, HHS selected a cost-plus-fixed-fee contract, under which HHS reimburses NQF for actual costs incurred under the contract in addition to a fixed fee that is unrelated to costs. Cost-plus-fixed-fee contracts are used for efforts such as research, design, or study efforts where costs and technical uncertainties exist and it is desirable to retain as much flexibility as possible in order to accommodate change. However, this type of contract provides only a minimum incentive to the contractor to control costs. As we reported in 2009, these contracts are suitable when the cost of work to be done is difficult to estimate and the level of effort required is unknown. This cost-plus-fixed-fee contract is NQF’s first cost-reimbursement contract. For cost-reimbursement contracts, the Federal Acquisition Regulation (FAR) requires appropriate government surveillance during performance to provide reasonable assurance that efficient methods and effective cost controls are used. Under the FAR, contracts are to contain a provision for agency approval of a contractor’s subcontracts. HHS’s contract with NQF contains this provision and also requires the approval of consultants engaged under the contract. The review and approval of NQF’s use of subcontractors and consultants require appropriate support documentation provided by NQF to HHS, including a description of the services, the proposed price, and a negotiation memo that reflects the principal elements of the price negotiations between NQF and the subcontractor or consultant. Under its contract with HHS, NQF has utilized 31 subcontractors and 16 consultants since January 14, 2010, to provide support to NQF on many of the contract activities and associated projects. 
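The weak cost-control incentive noted above can be made concrete: under a cost-plus-fixed-fee arrangement the fee is negotiated up front and does not shrink when costs overrun, so the contractor's margin is protected either way. The figures below are hypothetical, chosen only to illustrate the contract type; they are not the actual NQF contract amounts.

```python
def cpff_payment(actual_cost, fixed_fee):
    """Cost-plus-fixed-fee: the government reimburses allowable actual costs
    and pays a fee that is set in advance and does not vary with cost."""
    return actual_cost + fixed_fee

# Hypothetical figures: the contractor's fee is identical whether the work
# comes in on budget or overruns, so the overrun is borne by the government.
on_budget = cpff_payment(actual_cost=1_000_000, fixed_fee=60_000)
overrun = cpff_payment(actual_cost=1_400_000, fixed_fee=60_000)
```

This is why the FAR requires appropriate government surveillance during performance of cost-reimbursement contracts: the payment structure itself does not penalize inefficiency.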
Two HHS components are principally responsible for administering the NQF contract: the office of the Assistant Secretary for Planning and Evaluation (ASPE) and CMS—an agency within HHS. Specifically, the project officer for the NQF contract is a representative of ASPE. This individual is responsible for program management and works with the contracting officer to oversee the contract. The contracting officer for the NQF contract, responsible for administering the contract, is a representative of CMS. There are specific duties the contracting officer is required to perform, such as conducting an annual evaluation of the contractor’s performance. From January 14, 2010, through August 31, 2011, NQF made progress on projects under its contract activities. However, our review of NQF documents found that NQF had not met or did not expect to meet time frames on more than half of the projects, and it exceeded its cost estimates for projects under three of the contract activities. HHS did not use all tools for monitoring that are required under the contract. From January 14, 2010, through August 31, 2011, NQF has made progress on 60 of the 63 projects under the activities required under its contract with HHS. Specifically, NQF had completed 26 projects and was continuing to work on the remaining 34 projects. (App. III provides the status of all contract activities and the projects under each activity NQF was expected to perform during our reporting period.) Examples of projects under the contract activities include both completed and continuing projects: Endorsement of Measures Activity. NQF endorsed 101 measures since the beginning of the second contract year by conducting work on endorsement projects on different topic areas.
Specifically, NQF completed two projects to endorse 38 outcome measures related to 20 high-priority conditions identified by CMS that account for the majority of Medicare’s costs, and mental health and child health conditions; and 21 performance measures for chronic and postacute care nursing facilities. NQF also worked on two projects related to child health quality and patient safety. As of August 2011, NQF endorsed 41 child health quality measures and 1 patient safety measure under these projects. NQF expected to complete the child health quality project in September 2011 and the patient safety project in December 2011. In addition, NQF completed a contractually required review of its endorsement process, subcontracting with Mathematica Policy Research, Inc. (Mathematica). The review focused on the timeliness and effectiveness of the endorsement process; identified inefficiencies, including those that may contribute to delays; and recommended, among other steps, that NQF create a schedule for its endorsement process for measure developers and develop feasible time lines that include clear goals for each endorsement project. HHS officials stated that Mathematica’s recommendations were valuable because much of the work under the NQF contract needs to be completed in an accelerated timeline to help fill critical measurement gaps associated with HHS’s health care quality programs and initiatives. For more information about this review, see appendix IV. Maintenance of Endorsed Quality Measures Activity. NQF maintained—that is, updated or retired—124 measures under the contract since the beginning of the second contract year. These included 41 measures reviewed under NQF’s 3-year review cycle related to diabetes, mental health, and musculoskeletal conditions. In addition, 83 measures were maintained under NQF’s other maintenance review processes. 
NQF was also continuing to work on maintenance projects it initiated in 2010 for measures related to cardiovascular and surgery measures. As of August 2011, the two projects were expected to be completed by December 2011 and January 2012, respectively. Promotion of the Development and Use of Electronic Health Records Activity. NQF has made progress on three projects related to retooling—that is, converting previously endorsed quality measures to an electronic format that is compatible with electronic health records (EHR). First, NQF completed initial retooling of 113 measures. This work is intended to allow data from EHRs to be used for quality measurement, which is a part of HHS’s long-term goal to use health information technology to exchange information and improve quality of care. Second, as of August 2011, NQF convened an expert review panel to review the retooled measures to ensure that each retooled measure is properly formatted, the logic is correctly stated, and the intent of the measures is maintained in the electronic format that will use data obtained from EHRs, instead of from claims as originally formatted. Third, as of August 2011, NQF was expected to complete another project to provide an updated list of the 113 retooled measures to HHS by December 2011, which would incorporate any revisions identified by the expert review panel and others involved in the retooling process. After these updated measures are completed, HHS officials told us that they will contract with other entities to conduct testing of some of the 113 retooled measures to assess the feasibility of implementing the measures in the electronic format. Although NQF’s endorsement process requires that measure developers submit data on validity and reliability testing of measures they submit for endorsement, this testing does not include feasibility testing for implementing the measures in an electronic format for performance measurement. 
As of December 2011, HHS officials did not provide an expected date of completion for this feasibility testing but told us that they have awarded two contracts that include this in their scope of work. In addition to the retooling projects, NQF is developing a software tool—the Measure Authoring Tool—to allow measure developers to create standardized electronic measures that help capture information in EHRs so that less retooling would be needed in the future. As of August 2011, NQF was completing final testing of the beta, or initial, version of this tool. NQF expected to complete testing and publish an updated version for public use by January 2012. Multistakeholder Input into HHS’s National Quality Strategy Activity. NQF convened the National Priorities Partnership (NPP), a multistakeholder group expected to provide annual input on national priorities, among other things, to be considered in the National Quality Strategy. As of August 2011, the NPP was completing a report on this input, which was then published in September 2011. The report noted the need for a national comprehensive strategy that identifies core sets of standardized measures to meet each of the national priorities HHS identified in the 2011 National Quality Strategy, among other things. The NPP noted in the report that a common data platform, core measure set, and public reporting mechanism are key components of the infrastructure for performance measurement. It also highlighted that a strategic plan, road map, and timeline for establishing an infrastructure should be accelerated to allow for rapid implementation over the next 5 years. Additionally, the NPP reported that it was critical that all federal programs drive toward the establishment of a common platform for measurement and reporting. Multistakeholder Input on the Selection of Quality Measures Activity. NQF has convened the Measure Applications Partnership (MAP). 
The MAP is a multistakeholder group that is expected to conduct work in two areas. First, the MAP is expected to provide input to the Secretary of HHS on the selection of quality measures for use in payment programs and value-based purchasing programs required by PPACA, among others. The MAP will review a list of measures published by the Secretary of HHS on December 1 of each year, and develop a report that contains a framework to help guide measure selection. The MAP will provide its annual input beginning February 1, 2012, for measures used in the following 11 programs: hospice, hospital inpatient, hospital outpatient, physician offices, cancer hospitals, end-stage renal disease (ESRD) facilities, inpatient rehabilitation facilities, long-term care hospitals, hospital value-based purchasing, psychiatric hospitals, and home health care. Second, the MAP is expected to publish reports that provide input on the selection of measures for use in various quality reporting programs, including those for physicians. As of August 2011, the MAP had held meetings and initiated its work for reports due October 1, 2011. Other Health Care Quality Measurement Activity. NQF completed a project to endorse six imaging efficiency measures. NQF was also continuing to work on a project to identify existing quality measures and gap areas related to measurement of regionalized emergency care services. Our review of NQF documents found that NQF had not met or did not expect to meet time frames on more than half of the projects under the contract activities that were completed or ongoing, as of August 2011. Specifically, our review of documents found that NQF had not met expected time frames on 18 of the 26 projects it completed under the nine contract activities. Further, NQF did not expect to meet time frames on 14 of the 34 projects on which it was continuing to work. The delays of these projects under the contract activities varied in time from about 1 to 12 months. 
HHS officials told us they approved all changes to the time frames, which were established by HHS and NQF in NQF’s 2010 and 2011 final annual MIPPA work plans and the PPACA technical proposal. Appendix III provides the status for all projects related to each of the nine contract activities, including information on their expected and actual time frames for completion during our reporting period. Examples of projects under the contract activities for which NQF did not meet or did not expect to meet expected time frames include the following: Endorsement of Measures Activity. NQF did not meet or did not expect to meet time frames for all five endorsement projects under the endorsement contract activity. (See app. III for details on the five projects.) For example, NQF was expected to complete an endorsement project for nursing home quality measures in July 2010; however, the measures were not endorsed until February 2011. (See fig. 1 for estimated time frames and actual completion dates for all projects related to the endorsement contract activity.) NQF officials stated that several factors contributed to NQF exceeding the expected time frames for the five endorsement projects, including the high volume of measures submitted for review, the amount of time it took to harmonize measures between measure developers, and a need for additional technical expertise on review panels. Promotion of the Development and Use of Electronic Health Records Activity. NQF did not meet or did not expect to meet expected time frames for five out of eight projects related to the EHR contract activity. For example, NQF was expected to complete its initial retooling of 113 endorsed quality measures into electronic formats by September 2010, but this effort was not completed until December 2010. (See fig. 2 for estimated time frames and actual completion dates for all projects related to the EHR contract activity.) 
In addition, NQF was expected to complete the project to convene an expert panel to review the 113 retooled measures by January 2011. However, the panel did not complete its review of the 113 measures until June 2011. According to HHS and NQF officials, several factors contributed to NQF exceeding expected time frames for the retooling project under the EHR contract activity. HHS officials stated that the first set of 44 retooled measures submitted had errors that required correction. For example, HHS officials stated that they found errors in the electronic coding of these 44 retooled measures requiring NQF and its subcontractors who retooled the measures to make corrections. In addition, HHS and NQF officials stated that after starting the retooling project, they quickly learned that the estimated time frames for the retooling project, as well as other projects related to the EHR activity, were overly ambitious, given the scope and complexity of the work. For example, HHS officials noted that retooling of quality measures into electronic format had never been attempted before and the technical complexity and labor required to complete the project were greater than anticipated. NQF officials also told us that HHS’s requests to modify the scope of work for this project often required changing the time frame for completing the retooled measures. These factors resulted in an extension of the project that delayed the final delivery of the 113 retooled measures as well as contributed to the need for additional staff at NQF. Other Health Care Quality Measurement Activity. NQF was expected to complete two projects under the other health care quality measurement activity related to efficiency and resource use—one white paper on resource use and another on geographic-level efficiency by July 2010. These white papers were intended to provide information for an endorsement project on resource-use measures that began in January 2011. 
However, as of August 2011, the resource-use paper was still under review by HHS, and NQF officials stated they expected to receive comments in September 2011. The geographic-level efficiency paper was canceled in June 2010 at the request of HHS. NQF initially intended to subcontract the work on these two projects, but officials told us that they were unable to identify a subcontractor at the level of funding approved for this project. As a result, HHS approved NQF’s proposal to complete this work internally. HHS officials stated that the drafts NQF submitted on both topics were poor in quality and did not meet its needs, resulting in HHS requesting additional revisions for the resource-use white paper that delayed its completion, and requesting the cancellation of the geographic-level efficiency white paper. Administrative Activity. NQF did not meet the expected time frames for completing one of the required projects under the administrative activity—finalizing its annual work plan. Specifically, the NQF contract requires NQF to develop an annual work plan and to receive final approval from HHS within the first 4 weeks of each contract year; however, NQF did not meet this requirement in 2010 or 2011. For example, the final 2011 MIPPA annual work plan was not developed by NQF and approved by HHS until April 1, 2011. According to NQF and HHS officials, the 2011 MIPPA work plan was not developed and approved on time due to extended discussions on the scope and cost estimates of NQF’s EHR activities. HHS officials told us that the primary reason for the extended discussions was that they expected the costs to reflect all the work needed to complete the Measure Authoring Tool (MAT) by the end of the second contract year. However, they said that NQF only submitted a beta version of the tool by the end of the second contract year, which was not the version expected by HHS. 
NQF officials told us that the version was never intended to be final but rather a beta version, consistent with their understanding of HHS's expectations. As a result, HHS and NQF officials needed to evaluate the scope of work and cost estimates for this and other projects. Further, NQF officials told us the delay in completing the 2011 MIPPA annual work plan resulted in the interruption of NQF's ongoing work related to the MAT under the EHR contract activity. The delay also postponed NQF's receipt of funding for some new or ongoing work under the contract. In some instances, NQF chose to start new or continue ongoing work with its own funding. For example, NQF officials stated that NQF began work related to the MAP using its own funds until HHS authorized the work. In addition, the delay in completing the 2011 MIPPA work plan meant that start dates for some of the projects under the maintenance activity had to be set for fall 2011 rather than earlier in the contract year. NQF also exceeded its cost estimates for projects under three of the contract activities. HHS officials told us they approved the changes to the cost estimates and in some cases modified NQF's scope of work to help ensure that NQF's costs did not exceed the amount HHS had obligated for the contract activities. NQF officials stated that in certain cases, not meeting expected time frames contributed to NQF exceeding these cost estimates. For example, the delays in projects related to the EHR contract activity, including expanding the scope of the retooling project, contributed to NQF exceeding its cost estimate of about $3.8 million for the entire EHR contract activity by about $560,000 in the second contract year. In another example, the delays in finalizing the 2010 and 2011 MIPPA work plans contributed in part to NQF exceeding its cost estimate for developing and finalizing these plans, which is a project under the administrative contract activity.
Specifically, while NQF estimated that completion of the annual work plan would cost approximately $77,000, NQF reported an actual cost of $176,590. In addition, NQF also exceeded its cost estimate for the endorsement contract activity during the second contract year for various reasons, including a need for additional technical experts for review panels. Specifically, NQF exceeded estimated costs of about $3.1 million for the entire endorsement activity by about $146,000 in the second contract year. While HHS officials told us they approved all changes to the cost estimates, in certain cases they reduced the scope of NQF's work in 2011 to ensure that total available funding for the contract year was not exceeded and that sufficient funding was available for ongoing projects. For example, HHS officials told us that they had hoped to start several new endorsement projects beginning in 2011; however, these projects were not included in the 2011 final annual MIPPA work plan so that funding would be available for NQF to complete its ongoing projects, including work that was delayed under the EHR contract activity. In addition, HHS requested that NQF discontinue its work on one project related to the development of a public website for 2011, which is associated with the administrative contract activity. HHS officials told us that to help monitor NQF's performance on the projects under the contract activities, they rely on NQF to report any issues, including those related to time frames or cost estimates, in the monthly progress reports that NQF is required to submit to HHS or during phone calls held at least monthly. While HHS monitored NQF's progress and approved changes to the time frames and cost estimates for the projects under the contract activities, HHS did not use available tools for monitoring that are required under NQF's contract. These tools could have helped to provide an opportunity for HHS to make any appropriate changes to NQF's projects.
For example, HHS did not conduct an annual performance evaluation required by the contract that would assess timeliness and cost control issues, among other things, for the previous contract year. The results of such an evaluation could help HHS officials to consider potential timeliness and cost issues when determining NQF's scope of work for the next year. Further, while monthly progress reports and invoices include information on NQF's costs, these documents do not compare reported costs to initial cost estimates. HHS officials told us that, prior to August 2011, they had not enforced a contractual requirement for NQF to submit—nor had they received from NQF—a financial graph in its monthly progress reports comparing NQF's monthly incurred costs for each of the contract activities with initial cost estimates. Instead, HHS officials informally requested that NQF provide them with the financial status of the contract activities in midyear 2010, which helped them to plan for NQF's work under the contract for 2011. Having a financial graph in the monthly progress report could have helped HHS officials to identify instances where any contract activity was approaching or exceeding NQF's initial cost estimates prior to HHS's midyear review. This, in turn, could have provided HHS and NQF an opportunity to adjust estimates of future costs for these or related activities earlier in the contract year. HHS officials asked NQF to include such a financial graph in its monthly progress reports beginning in August 2011. From January 14, 2010, through August 31, 2011, NQF reported a total of approximately $22.4 million in costs and fixed fees on monthly invoices submitted to HHS for projects under activities conducted in response to MIPPA and PPACA. Specifically, NQF reported about $12.8 million in total costs and fixed fees for the contract activities it performed during the second contract year—January 14, 2010, through January 13, 2011.
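The cost-versus-estimate comparison that such a financial graph would support can be sketched as a simple variance check. This is only an illustrative reconstruction, not an HHS or NQF tool: the dollar figures are the estimates and overruns cited in this report, while the dictionary labels and structure are hypothetical.

```python
# Illustrative cost-vs-estimate comparison of the kind a monthly financial
# graph could support. Dollar figures are the estimates and overruns cited
# in this report; the labels and structure are hypothetical.

activities = {
    # activity: (initial cost estimate, reported cost), in dollars
    "EHR activity": (3_800_000, 3_800_000 + 560_000),
    "Endorsement activity": (3_100_000, 3_100_000 + 146_000),
    "Annual work plan": (77_000, 176_590),
}

for name, (estimate, actual) in activities.items():
    variance = actual - estimate          # positive means an overrun
    pct_over = 100 * variance / estimate  # overrun relative to the estimate
    print(f"{name:22s} est ${estimate:>11,}  actual ${actual:>11,}  "
          f"over ${variance:>9,} ({pct_over:.1f}%)")
```

A running comparison like this, produced monthly, would flag an activity approaching its estimate well before a midyear review.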
From January 14, 2011, through August 31, 2011, part of the third contract year, NQF reported an additional $9.6 million in total costs and fixed fees. During the second contract year, the majority of NQF's reported costs were related to the promotion of the development and use of EHRs (36 percent, or $4.6 million) and endorsement of health care quality measures (26 percent, or $3.3 million). Figure 3 illustrates the costs and fixed fees NQF reported for eight of the nine contract activities we reviewed that occurred during the second contract year. The ninth contract activity relates to multistakeholder input on the selection of quality and efficiency measures, as directed by PPACA. This contract activity did not begin until after January 14, 2011, which is the start of the third contract year. For the part of the third contract year covered in our review—January 14, 2011, through August 31, 2011—almost one-half of NQF's reported costs were for the activity to promote the development and use of EHRs and for the activity to provide multistakeholder input on the selection of quality and efficiency measures. Each of these activities accounted for 22 percent, or about $2.1 million, of NQF's reported costs. Other costs reported by NQF include those for the activity related to providing multistakeholder input into HHS's annual National Quality Strategy (16 percent, or $1.55 million) and those for the activity related to the maintenance of endorsed quality measures (13 percent, or $1.29 million). Figure 4 illustrates the costs and fixed fees NQF reported for the part of the third contract year covered in our review for each of the nine contract activities we reviewed. According to HHS, as of August 2011, about $55.2 million remains available for the NQF contract. About $15.1 million in MIPPA funding remains available for work to be conducted through January 13, 2013.
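The cost shares reported above can be recomputed from the dollar figures cited in the text. This is a minimal arithmetic check only; the amounts are rounded as the report states them, and the `share` helper is our own illustration, not part of any HHS reporting system.

```python
# Recomputing the reported cost shares from the dollar figures cited in
# the text (amounts rounded as the report states them).
second_year_total = 12_800_000  # total costs and fixed fees, contract year 2
third_year_total = 9_600_000    # Jan. 14 - Aug. 31, 2011 (part of year 3)

def share(amount, total):
    """An activity's share of total reported costs, as a whole percent."""
    return round(100 * amount / total)

assert share(4_600_000, second_year_total) == 36  # EHR promotion, year 2
assert share(3_300_000, second_year_total) == 26  # endorsement, year 2
assert share(2_100_000, third_year_total) == 22   # EHR / multistakeholder input
assert share(1_550_000, third_year_total) == 16   # National Quality Strategy input
assert share(1_290_000, third_year_total) == 13   # measure maintenance
```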
In addition, HHS plans to obligate approximately $40.1 million of its PPACA funding through 2014 for NQF's activities related to health care quality measurement in response to PPACA. For its various programs or initiatives, HHS has used or planned to use about one-half of the quality measures that NQF has endorsed, maintained, or retooled under the contract, as of August 31, 2011, and HHS officials expect to evaluate if and how the remaining measures will be used. However, HHS has not comprehensively determined how it will use NQF's work under the contract to implement PPACA requirements related to quality measures. According to HHS officials, HHS has used or planned to use about one-half (164) of the 344 health care quality measures it has received from NQF through various endorsement, maintenance, and retooling projects under the contract, as of August 31, 2011. For example, of the 164 measures used or planned for use, 44 were used in CMS's Medicare and Medicaid EHR Incentive Program after being retooled—that is, converted to an electronic format that is compatible with EHRs—under the NQF contract. Although these 44 retooled measures were used in the EHR Incentive Program, HHS officials stated that NQF and HHS detected coding and other errors in the versions of the 44 retooled measures that were published in the program's final rule in July 2010, which required NQF to make corrections to them after publication of the final rule. NQF did not submit the revised versions of the 44 retooled measures published in the final rule to HHS until December 2010. HHS officials stated that because the final rule had already been published prior to receiving the final formatted measures, CMS listed general guidance on its website to address the errors. HHS officials told us that these 44 measures are being used but have not yet been tested to assess the feasibility of implementing them in the electronic format.
Until the testing is complete, HHS runs the risk that some of these measures may not work as intended when implemented in electronic format for performance measurement. As a result, the agency does not have reasonable assurance that the retooled versions of the measures will correctly capture information from EHRs. In addition to the 44 retooled measures used in the EHR Incentive Program, HHS also has used or planned to use 120 measures that it received from endorsement and maintenance projects under the NQF contract for various HHS programs and initiatives. (See table 2 for details on specific programs in which HHS has used or planned to use health care quality measures received from NQF under the contract.) HHS officials told us that they expect to evaluate if and how they could use all of the remaining 180 of the 344 quality measures that were endorsed, maintained, or retooled under the NQF contract that are not currently in use or planned for use in HHS programs or initiatives. According to HHS officials, any measure developer can submit a measure to be considered for NQF endorsement. Therefore, all the measures received under the contract may not be applicable to a particular HHS health care quality program or initiative. HHS officials told us that they will review the remaining 180 measures to determine if they are applicable to their health care quality programs or initiatives. The officials expect that many of these measures will be used in HHS programs or initiatives required by PPACA. For example, HHS officials told us that they will consider implementation of most of the retooled measures in future stages of the EHR Incentive Program. In addition, PPACA directed HHS to establish a hospital value-based purchasing program, as well as to make plans or begin pilot programs for value-based purchasing in other settings of care. 
The hospital value-based purchasing program will use various quality measures and depend on the information collected on them to determine payments to providers. PPACA also required the development of no less than 10 provider-level outcome measures for hospitals and physicians by March 2012. Further, PPACA directed HHS to identify quality measures that could be used to evaluate hospice programs and publish these measures by October 1, 2012. HHS officials told us that they are in the process of determining whether or to what extent the remaining 180 measures HHS has received under the NQF contract can be used to address the new measurement needs and priorities established by PPACA. HHS officials told us that they prefer to use NQF-endorsed measures to meet HHS’s measurement needs because these quality measures are nationally recognized standards and in some cases HHS is required to use them. Although HHS has taken steps to determine how it can use the measures received under the contract with NQF, the agency does not have a comprehensive plan for determining how it will use the remainder of the work conducted under NQF’s contract to implement PPACA requirements, including plans for additional quality measures that need to be endorsed during the remaining contract years. HHS officials told us that HHS determines on an annual basis which activities—including work on quality measures—NQF is to perform under the contract through the interagency workgroup. The workgroup is comprised of representatives from various HHS agencies and allows them to provide input on their needs, including quality measures that need endorsement from NQF, for their respective programs. However, HHS officials told us that each HHS program assesses its quality measurement needs separately and provides varying levels of detail about its needs. 
Therefore, it is unclear to what extent all programs consistently incorporate PPACA's quality measurement requirements and deadlines into these assessments. The NPP's September 2011 report noted the importance of greater alignment of national quality measurement efforts, including the establishment of a comprehensive measurement strategy that identifies core measure sets, among other things. In addition, the report noted that all federal programs should work toward the establishment of a common platform for measurement and reporting. Without a comprehensive plan that delineates HHS's quality measurement needs, and given that each program assesses its quality measurement needs separately, the interagency workgroup may not be able to systematically ensure that all of HHS's quality measurement needs that implement PPACA requirements align with the selection and prioritization of activities for NQF to complete under the contract. While HHS has begun various efforts to assess its quality measurement needs, the lack of a plan that comprehensively determines the impact of PPACA on its needs could affect the agency's progress on its quality measurement efforts as well as how it selects and prioritizes NQF's contract activities. Officials told us that prior to PPACA's enactment, CMS maintained a 5-year plan that listed its measurement needs based on agency priorities and the priorities established by the NPP for some of its programs. However, this plan had not been updated to reflect the requirements related to quality measurement and time frames established by PPACA. In March 2011, HHS published the National Quality Strategy as required by PPACA, which included six priority areas of focus. The report was required by PPACA to include agency-specific plans, goals, benchmarks, and standardized quality metrics for each priority area, but did not do so. HHS officials stated that this document describes HHS's initial plan for these elements and that they may be included in future versions of the strategy.
In June 2011, HHS officials told us that they plan to convene a Quality Measurement Task Force within CMS with a goal to comprehensively align, coordinate, and approve the development, maintenance, and implementation of health care quality measures for use in various CMS programs. As of August 2011, the task force was in an early stage of development, and therefore it is too early to determine whether it will accomplish its goal. Although these various HHS efforts are key steps toward helping the agency meet its quality measurement needs, they are not guided by a comprehensive plan that synthesizes key priority areas identified in various sources, such as those reported by the NPP or in the National Quality Strategy, for which measures may be needed. Without such a plan, HHS may be limited in its efforts to prioritize which specific measures it needs to develop and have endorsed by NQF for its health care quality programs and initiatives established by PPACA. As a result, HHS may be unable to ensure that the agency receives the quality measures needed to meet PPACA requirements and specified time frames related to quality measurement. Health care quality measures are increasingly important to HHS as it uses and will continue to use them in its existing and forthcoming programs and initiatives to evaluate health care delivery. For example, HHS's value-based purchasing programs are pay-for-performance programs that will require providers to collect and report information on health care quality measures and adjust payment levels based on providers' performance against the measures. PPACA has increased HHS's quality measurement needs, and the time frames specified in the law have also increased the urgency of obtaining endorsed quality measures—which are nationally recognized standards and in some cases are required by statute—to meet these needs.
Given that NQF is the entity in the United States with lead responsibility for endorsing health care quality measures, NQF’s endorsement activities under the contract are of key importance to help meet HHS’s quality measurement needs. However, NQF’s endorsement process takes time. For more than half of the projects, including all five projects in the endorsement activity, NQF did not meet or did not expect to meet the initial time frames approved by HHS. In addition, projects under three of the contract activities have exceeded initial cost estimates, which resulted in HHS’s modification of NQF’s scope of work in some instances to help ensure that NQF’s costs did not exceed the funding allocated for the contract activities. While HHS received information in monthly progress reports to help monitor NQF’s performance under the contract, the agency did not use all of the monitoring tools required under the contract to help address issues related to time lines and cost estimates. These monitoring tools included an annual performance evaluation that could help HHS officials consider potential issues related to NQF’s time frames and cost estimates when planning work for the next year and a financial graph to be included in NQF’s monthly progress reports. The graph would have compared reported costs to initial cost estimates, which is something that monthly progress reports do not do. Although HHS officials reported that they recently began in August 2011 to enforce the contractual requirement for NQF to submit the graph, they have not implemented the required annual performance evaluation. By not taking advantage of these tools, HHS runs the risk of not having detailed and timely information that could help identify instances in which NQF might be at risk of not meeting time frames or exceeding estimated costs. 
Identifying such instances could provide an opportunity for HHS to make any appropriate changes to NQF’s scope of work, including setting priorities to ensure that HHS receives the quality measures it needs in a timely manner. With the time remaining under the contract, HHS has an opportunity to ensure that the work performed under NQF’s contract better meets the agency’s needs for its programs and initiatives. However, HHS has not developed a plan that comprehensively identifies its quality measurement needs for its programs and initiatives in light of PPACA’s requirements or determines how it will use the work conducted during the remaining years of the NQF contract to help it meet these needs. In addition, critical tasks may need to be completed outside of the NQF contract. For example, HHS requested that NQF retool 113 measures under the contract and used 44 of the 113 measures that included errors in its EHR Incentive Program. As of November 2011, feasibility testing related to implementation of the retooled measures had not been completed, and HHS expected to perform this work outside of the NQF contract. Until the testing is completed, HHS runs the risk that some of the retooled measures may not work as intended when implemented in electronic format for performance measurement, which is a concern because use of these measures is an important component of HHS’s long-term goal for providers to use health information technology (IT) to exchange information and improve the quality of care. Without a comprehensive plan, HHS lacks assurance that its selection of the work to be performed by NQF—and the approximately $55.2 million that the agency expects to spend for remaining work under the NQF contract—will be prioritized in the most effective way possible. Given that PPACA includes time frames for the implementation of quality measurement programs, NQF’s pace in completing some of the work under the contract—particularly the endorsement activity—raises concerns. 
If the endorsement projects continue to require extended completion times, HHS runs the risk of not having all the endorsed measures it needs for implementing its programs and initiatives. Should this occur, HHS may need to select nonendorsed measures for its programs and initiatives that have not undergone an objective and transparent review by NQF. To help ensure that HHS receives the quality measures it needs to effectively implement its quality measurement programs and initiatives within required time frames, we recommend that the Secretary of HHS take the following three actions:

- use monitoring tools required under the NQF contract to obtain detailed and timely information on NQF's performance and use that information to inform any appropriate changes to time frames, projects, and cost estimates for the remaining contract years;

- ensure that testing of the electronic versions of the measures retooled by NQF that are being used or are planned for use in the Medicare and Medicaid EHR Incentive programs is completed in a timely manner to help identify potential errors and address issues of implementation; and

- develop a comprehensive plan that identifies the quality measurement needs of HHS programs and initiatives, including PPACA requirements, and provides a strategy for using the work NQF performs under the contract to help meet these needs.

We provided a draft of this report to HHS and NQF for review and comment. HHS neither agreed nor disagreed with our recommendations and provided general comments. NQF concurred with many of the findings in the report and provided clarification and additional context on the findings and recommendations. HHS and NQF's letters conveying their comments are reproduced in appendixes V and VI, respectively. In addition to the overall comments discussed below, we received technical comments from HHS and NQF, which we incorporated into our report as appropriate.
HHS’s comments included separate general comments from CMS and ASPE that provided context on aspects of our findings and recommendations. CMS’s comments stated that the draft report suggests that CMS must use all of the measures endorsed by NQF, and noted that not all NQF-endorsed measures are suitable for HHS quality reporting and public reporting programs. Although our draft report did not state that CMS must use all of the measures endorsed by NQF, we modified it to note specifically, among other things, that all measures received under the contract may not be applicable to a particular HHS health care quality program or initiative. CMS also stated that the report suggests that CMS has not developed measurement plans for various provisions of PPACA related to quality reporting, public reporting, and value-based purchasing programs. CMS provided additional context for current planning efforts to address these requirements, including its Quality Measurement Task Force. The draft report acknowledged this and other CMS planning efforts to address the health care quality requirements contained in PPACA and noted that, as of August 2011, this initiative was just beginning. Further, while various efforts are underway and CMS’s comments state that it has documented how quality measures will be used to address all relevant provisions of PPACA, CMS has not provided documentation of comprehensive plans to address PPACA requirements that include alignment across programs, detailed time frames to meet PPACA deadlines, or how it will use the NQF contract to help ensure that it receives the endorsed measures it needs to meet these requirements. ASPE’s comments noted, with respect to our first recommendation, that HHS used all except two of the monitoring tools called for in the contract. As noted in the draft report, HHS began receiving the monthly financial graph—one of the two monitoring tools—from NQF in August 2011. 
Also, ASPE noted its plans to update its performance evaluation system with NQF performance information for the first 2 contract years—the period January 14, 2009, through January 13, 2011—and to complete a final performance evaluation at the end of the contract in January 2013, which is the end of the fourth contract year. It did not indicate any plans to conduct the annual performance evaluation for the third contract year— January 14, 2011, through January 13, 2012—which would be consistent with the contract’s requirements. With respect to our second recommendation, ASPE provided technical comments and also told us that CMS issued a contract solicitation to test the retooled measures, but CMS did not receive any bids. Instead, ASPE noted in its comments that two of CMS’s current contractors will conduct feasibility testing on 69 of the 113 retooled measures that are planned for use in HHS’s EHR Incentive programs. CMS does not plan to issue a solicitation for a new contract to test the feasibility of the remaining 44 retooled measures, which are currently being used in HHS’s EHR Incentive Program. We noted these comments in the report. Regarding our third recommendation, ASPE stated that the measures that are not currently in “use” are being evaluated by HHS and that any conclusions that they will not be used are not accurate. Our draft report provided information on which measures were used or planned for use as of August 2011, and indicated that the remaining measures may be used in the future. Specifically, the report noted that HHS officials expect that many of these measures will be used in HHS programs or initiatives, and that HHS officials told us that they will review all the measures received under the contract to determine if they are applicable to their health care quality programs or initiatives. ASPE’s comments also noted that our draft report did not include information on all NQF-endorsed measures used by the various agencies within HHS. 
As noted in the draft report, we relied on HHS to identify programs and initiatives across HHS that use or plan to use these health care quality measures and recognize that those included in our report may not represent a comprehensive list of all health care quality programs and initiatives. As we recommended in our report, having a comprehensive plan could help HHS identify programs or initiatives that use or plan to use health care quality measures, including those endorsed by NQF. NQF’s comments state that it is providing its services to HHS under a cost reimbursement contract, which is used in circumstances where aspects of performance, such as time frames, cost estimates, and scope of work, cannot be reasonably estimated, and therefore, should not be expected. As noted in the draft report, the contract type used for this work is used for efforts such as research, design, or study efforts where costs and technical uncertainties exist and it is desirable to retain as much flexibility as possible in order to accommodate change. However, the draft report also noted that this type of contract provides only a minimum incentive to the contractor to control costs. Given the risk associated with this type of contract, the fact that NQF has not met expected time frames on about half of its projects as of August 2011, and that NQF exceeded its initial cost estimates for some of its projects under its contract activities, it is especially important that HHS obtain detailed and timely information on NQF’s performance and use that information to inform any appropriate changes to time frames, projects, and cost estimates for the remaining contract years, as noted in our recommendations. 
NQF’s comments also state that time frames and costs for the work performed under the contract were initial estimates based on an early understanding of the work, that HHS and NQF understood that there would likely be changes to them as a result of the complexity and novelty of the work, and that they have worked collaboratively throughout the contract period to address these and other factors. As noted in the draft report, the final work plans, the technical proposal, and other documents that we reviewed included initial time frames for all projects and costs for the work performed during the contract year that were approved by HHS in collaboration with NQF. The draft report also notes several examples of reasons why the time frames and costs were modified over time. Contributing factors include the high volume of measures submitted, changes to the scope of work, and the novelty and complexity of the work. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix VII. Figure 5 illustrates a health care quality measurement framework of the various stages that a quality measure will go through, as described by the Department of Health and Human Services (HHS) and National Quality Forum (NQF) officials and others. These stages include measure development, endorsement, selection, and use, among others. This framework also shows examples of which entities, including HHS and NQF, are involved in each of the stages. 
As an example of actions taken during the second stage of this quality measurement framework, HHS officials described two different processes used for planning and identifying gap areas. The Centers for Medicare & Medicaid Services (CMS) Office of Clinical Standards and Quality has developed a standardized approach to identify quality measures that it uses in its health quality initiatives and programs using CMS's Measures Management System. The Measures Management System requires the convening of a technical expert panel in the initial planning stage. Once convened, the technical expert panel is expected to work with measure developers who will gather information that will help the panel determine whether measures need to be developed for a program or initiative. During this stage, measure developers may conduct environmental scans or literature reviews to determine the existence of measures that could be used for a program or initiative. If a measure does not exist, then the developer will work with CMS to develop the needed measures for the program or initiative, including measure testing. Upon development of the measures, the technical expert panel will evaluate them based on (1) importance to making significant gains in health care quality and improving health outcomes, (2) scientific acceptability of the measure properties, including tests of reliability and validity, (3) usability, and (4) feasibility. Measures recommended by the panel are generally submitted for NQF endorsement. In contrast, CMS's Center for Medicaid, Children's Health Insurance Program (CHIP), and Survey & Certification Office—the CMS center which implements CHIP—uses a measure identification process that relies on existing measures rather than development of new measures, according to officials. This office worked with a technical advisory group, the Subcommittee on Children's Healthcare Quality Measures for Medicaid and CHIP, to recommend an initial core set of measures for CHIP.
With assistance from CMS, the subcommittee evaluated measures based on importance, validity, and feasibility. CMS officials told us that they considered existing NQF-endorsed and non-NQF-endorsed measures based on the measurement needs of the program, and relied on measure testing conducted by the measure developers. Officials stated that they have also relied on the subcommittee to evaluate candidate measures for Medicaid child health programs. Officials said that they are not required to submit measures that will be used for Medicaid programs for NQF endorsement. From January 14, 2010, through August 31, 2011, the National Quality Forum’s (NQF) contract with the Department of Health and Human Services (HHS) included 16 tasks that NQF is required to perform. For purposes of our work, we categorized these tasks into nine contract activities. Specifically, in certain cases, we grouped activities that covered related areas of work into a single contract activity. For example, we consolidated the six administrative activities NQF is required to perform into a single contract activity. (See table 3 that shows how we consolidated these contract activities.) NQF was required to perform specific projects under the nine contract activities we identified. For example, under the endorsement contract activity, NQF was required to complete an endorsement project related to patient outcome measures. For purposes of our work, we identified and reviewed 63 projects NQF is required to perform under the nine contract activities, as shown in appendix III. The tables below provide a status update on the projects that the National Quality Forum (NQF) is required to complete under the nine contract activities we identified (see app. II). The contract activities and the projects under the activities NQF is expected to perform are determined on an annual basis by the Department of Health and Human Services (HHS) and NQF. 
As a result, the number of projects under the contract activities varies by contract year. For our reporting period—January 2010 through August 2011—we determined that NQF was required to conduct work on 63 projects under the contract activities we reviewed. To determine initial time frames for each project, we calculated the approximate time between expected start and end dates established in NQF’s 2009, 2010, and 2011 final annual Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) work plans, the 2011 Patient Protection and Affordable Care Act (PPACA) technical proposal, and other NQF documents. Actual time frames were determined by calculating the approximate time between the actual start date and the actual date of completion. For projects that were not yet complete as of August 2011, we included an expected time frame based on the approximate difference between the actual start date and the expected date of completion. NQF and HHS officials stated that any changes to the initial time frames were approved by HHS. As part of a project under its contract with the Department of Health and Human Services (HHS), NQF was required to review its endorsement process. To complete this project, the National Quality Forum (NQF) subcontracted with Mathematica Policy Research, Inc. (Mathematica), to conduct a review of NQF’s endorsement process, as requested by HHS. HHS officials stated that, given the importance of the endorsement process as part of the health care quality measurement framework, they requested that an objective and thorough review of NQF’s endorsement process that focused on timeliness, efficiency, and effectiveness should be conducted. For example, they stated that they were interested in whether there were any efficiencies that could be implemented to shorten the process while maintaining an objective review of the health care quality measures that were evaluated under the process. 
Mathematica initiated its review of NQF’s endorsement process in October 2009 and completed the work in December 2010. In December 2010, Mathematica submitted a final report to NQF and recommended eight areas where improvements could be made and inefficiencies could be addressed in the endorsement process. In the final report, Mathematica noted that the current process is lengthy and the timeliness of the endorsement projects varies substantially. The report further noted that the length of the endorsement process affects the availability of endorsed measures for end users, such as HHS. To help reduce the time required to complete projects, Mathematica recommended that NQF create a schedule for its endorsement process for measure developers and develop feasible time lines that include clear goals for each endorsement project. As of May 2011, NQF officials stated that NQF has taken steps or plans to take steps in its future projects to address the eight areas for improvement Mathematica identified. For example, as of May 2011, NQF has solicited measures earlier based on a tentative annual project schedule to reduce the time lines of its endorsement process and reduced the period for voting by NQF member organizations from 30 to 15 days. NQF officials stated that they believe their efforts to implement the recommendations will shorten the time lines for the endorsement projects by 3 to 4 months without compromising the integrity of the endorsement process and measures to be evaluated under the process. HHS officials stated Mathematica’s recommendations were valuable because much of the work under the NQF contract needs to be completed in an accelerated time line to help fill critical measurement gaps associated with HHS’s health care quality programs and initiatives. 
They noted that it is too soon to tell the effects of these changes on the endorsement process, but they plan to monitor implementation of the changes in NQF’s 2011 endorsement projects under the contract. In addition, as of September 2011, HHS approved a new project under the contract to identify how the endorsement process can best align with HHS’s time frame for needed measures. As part of this project, NQF is expected to work with a consulting group to identify key performance metrics and define milestones and time lines to help streamline its endorsement process. In addition to the contact named above, Will Simerl, Assistant Director; La Sherri Bush; Krister Friday; Amy Leone; Carla Lewis; John Lopez; Elizabeth Martinez; Lisa Motley; Teresa Tucker; Carla Willis; and William T. Woods made key contributions to this report. | The Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) directed the Department of Health and Human Services (HHS) to enter into a 4-year contract with an entity to perform various activities related to health care quality measurement. In January 2009, HHS awarded a contract to the National Quality Forum (NQF), a nonprofit organization that endorses health care quality measuresthat is, recognizes certain ones as national standards. In 2010, the Patient Protection and Affordable Care Act (PPACA) established additional duties for NQF. This is the second of two reports MIPPA required GAO to submit on NQFs contract with HHS. In this reportwhich covers NQFs performance under the contract from January 14, 2010, through August 31, 2011GAO examines (1) the status of projects under NQFs required contract activities and (2) the extent to which HHS used or planned to use the measures it has received from NQF under the contract to meet its quality measurement needs, as of August 2011. GAO interviewed NQF and HHS officials, reviewed relevant laws, and reviewed HHS and NQF documents. 
NQF has made progress on projects under its contract activities, as of August 2011. Specifically, NQF has completed or made progress on 60 of 63 projects. For example, NQF has completed projects to endorse measures related to various topics, including nursing homes. However, for more than half of the projects, NQF did not meet or did not expect to meet the initial time frames approved by HHS. For example, NQF completed one project to retool measures—that is, convert previously endorsed quality measures to an electronic format. While the retooling project was expected to be completed by September 2010, its completion was delayed by 3 months. NQF and HHS officials identified various reasons that contributed to this delay, including an expansion of the project's scope and complexity. As a result of the delay, HHS did not have all the retooled measures it expected to include in its Electronic Health Records (EHR) Incentive Program. The delay of this project was also a contributing factor to NQF exceeding its estimated cost for its entire contract activity related to EHR by about $560,000 in the second contract year—January 14, 2010, through January 13, 2011. While HHS monitored NQF's progress through monthly progress reports and approved changes to time frames and costs, HHS did not use all of the tools for monitoring that are required under the contract. Specifically, HHS did not conduct an annual performance evaluation to assess timeliness and cost issues that could have helped to inform NQF's future scope of work. Until August 2011, HHS did not enforce the provision for NQF to submit a financial graph to compare monthly costs for each contract activity with cost estimates, which is information not included in monthly progress reports.
These tools could have provided additional, more detailed information to help identify instances in which NQF might have been at risk of not meeting time frames or exceeding cost estimates, which could have provided HHS an opportunity to make any appropriate changes to NQF's activities. HHS had used or planned to use about half of the measures—164 of 344—that it received from NQF under the contract, as of August 2011. For example, HHS used 44 measures that NQF retooled under the contract in its EHR Incentive Program. HHS officials stated that the 44 measures used in the program contained errors, which required corrections. HHS officials also have not yet tested the retooled measures to assess the feasibility of implementing them in the electronic format; therefore, HHS runs the risk that some of these measures may not work as intended when implemented. HHS officials told GAO they expect to evaluate if and how they could use all of the remaining measures HHS received under the contract. However, HHS has not determined how PPACA requirements for quality measurement may have changed its needs for endorsed quality measures. As a result, HHS has not established a comprehensive plan that identifies its measurement needs and time frames for obtaining endorsed measures and that accounts for relevant PPACA requirements. Without such a plan, HHS may be limited in its efforts to prioritize which specific measures it needs to develop and to have endorsed by NQF during the remainder of the NQF contract. As a result, HHS may be unable to ensure that the agency receives the quality measures needed to meet PPACA requirements, including time frames for implementing quality measurement programs. GAO recommends HHS: (1) use all monitoring tools required under the contract to help address NQF's performance, (2) complete testing of retooled measures, and (3) comprehensively plan for its quality measurement needs. HHS neither agreed nor disagreed with these recommendations.
NQF concurred with many of the findings in the report and provided additional context. |
In the 1980s and early 1990s, the solvency of the federal depository insurance funds was threatened when hundreds of thrifts and banks failed. Taxpayers were forced to bail out the insurance fund for thrifts, and the insurance fund for banks had a negative balance for the first time in its history. This situation prompted concern and considerable debate about the need to reform federal deposit insurance and regulatory oversight. In response, Congress passed the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA) to, among other things, improve the supervision and examination of depository institutions and to protect the federal deposit insurance funds from further losses. Among its various provisions, FDICIA added two new sections to the Federal Deposit Insurance Act of 1950—sections 38 and 39—referred to as the Prompt Regulatory Action provisions. The Prompt Regulatory Action provisions required federal regulators to institute a two-part system of regulatory actions that would be triggered when an institution fails to meet minimum capital levels or safety-and-soundness standards. Enactment of this two-part system was intended to increase the likelihood that regulators would respond promptly and forcefully to prevent or minimize losses to the deposit insurance funds from failures. The Federal Deposit Insurance Corporation (FDIC), Federal Reserve System (FRS), and two agencies within the Department of the Treasury—the Office of the Comptroller of the Currency (OCC) and the Office of Thrift Supervision (OTS)—share responsibility for regulating and supervising federally insured banks and thrifts in the United States. FDIC regulates state-chartered banks that are not members of FRS; FRS regulates state-chartered, member banks; OCC regulates nationally chartered banks; and OTS regulates all federally insured thrifts, regardless of charter type.
The regulators carry out their oversight responsibilities primarily through monitoring data filed by institutions, conducting periodic on-site examinations, and taking actions to enforce federal safety-and-soundness laws and regulations. From 1980 to 1990, record losses absorbed by the federal deposit insurance funds highlighted the need for a new approach in federal regulatory oversight. Sharply mounting thrift losses over the decade bankrupted the Federal Savings and Loan Insurance Corporation (FSLIC), which was the agency responsible for insuring thrifts until 1989, despite a doubling of premiums and a special $10.8 billion recapitalization program. During this period, a record 1,020 thrifts failed at a cost of about $100 billion to the deposit insurance funds for thrifts. Banks also failed at record rates. From 1980 to 1990, a total of 1,256 federally insured banks were closed or received FDIC financial assistance. Estimated losses to the bank insurance fund for resolving these banks were about $25 billion. These losses resulted in the bank insurance fund's incurring annual net losses in 1988, 1989, and 1990 that jeopardized the fund's solvency for the first time since FDIC's inception. Industry analysts have recognized many factors as contributing to the high level of thrift failures from 1980 to 1990. For example, thrifts faced increased competition from nondepository institutions, such as money-market funds and mortgage banks, as well as periods of inflation, recession, and fluctuating interest rates during that period. High interest rates and increased competition for deposits during the decade also created a mismatch between interest revenues from the fixed-rate mortgages that constituted the bulk of the thrift industry's assets and the cost of borrowing funds in the marketplace.
Increased powers granted to thrifts in a period during which supervision did not keep pace has also been cited by some analysts, including us, as contributing to the problems of the industry. Regulators and industry analysts have associated a number of factors with the problems of banks during the 1980s. First, banks suffered losses resulting from credit risk—risk of default on loans—in an environment of prolonged economic expansion and increasingly volatile interest rates. The decade began with crises in agricultural loans and loans to developing nations. Next, unrepaid energy loans took a toll and led to the downfall of several major banks, including Continental Illinois in Chicago and First RepublicBank in Texas. As the decade came to a close, highly leveraged transactions and the collapse of commercial real estate markets, in which banks had been heavy lenders, depleted the capital structures of some major East Coast and West Coast banks and led to their failures. One factor we and others cited as contributing to the problems of both thrifts and banks during this period was excessive forbearance by federal regulators. Regulators had wide discretion in choosing the severity and timing of enforcement actions that they took against depository institutions with unsafe and unsound practices. In addition, regulators had a common philosophy of trying to work informally and cooperatively with troubled institutions. In a 1991 report, we found that this approach, in combination with regulators’ wide discretion in the oversight of financial institutions, had resulted in enforcement actions that were neither timely nor forceful enough to (1) correct unsafe and unsound banking practices or (2) prevent or minimize losses to the insurance funds. Regulators themselves recognized that their supervisory practices in the 1980s failed to adequately control risky practices that led to the numerous thrift and bank failures. 
Congress passed two major laws to address the thrift and bank crisis of the 1980s. The first, the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), was enacted primarily in response to the immediate problems surrounding FSLIC’s bankruptcy and troubles in the thrift industry. FIRREA created a new regulator for the thrift industry, OTS, and a new insurance fund, the Savings Association Insurance Fund (SAIF), to replace the bankrupt FSLIC. In addition, FIRREA increased the enforcement authority of both bank and thrift regulators. For example, FIRREA expanded the circumstances under which regulators could assess civil money penalties and increased the maximum penalty to $1 million per day. FIRREA also authorized FDIC to terminate a bank’s or thrift’s deposit insurance on the basis of unsafe and unsound conditions. The second major piece of legislation, FDICIA, contains several provisions that were intended to collectively improve the supervision of federally insured depository institutions. Specifically, FDICIA requires a number of corporate governance and accounting reforms to (1) strengthen the corporate governance of depository institutions, (2) improve the financial reporting of depository institutions, and (3) help in the early identification of emerging safety-and-soundness problems in depository institutions. In addition, FDICIA contains provisions that were intended to improve how regulators supervise depository institutions. Among the corporate governance and accounting reforms, FDICIA establishes generally accepted accounting principles as the standard for all reports and statements filed with the regulators. FDICIA also requires the management and auditors of depository institutions to annually report on their financial condition and management. The report is to include management’s assessment of (1) the effectiveness of the institution’s internal controls and (2) the institution’s compliance with designated laws and regulations. 
In addition, FDICIA requires the institution’s external auditors to report separately on management’s assertions. Furthermore, FDICIA requires the institutions to have an independent audit committee composed of outside independent directors. Among the supervision provisions, FDICIA requires regulators to perform annual on-site examinations of insured banks and thrifts (an 18-month cycle was allowed for qualified smaller institutions with assets of less than $100 million). FDICIA’s sections 131 and 132 added two new sections to the Federal Deposit Insurance Act (sections 38 and 39) that require the implementation of a “trip wire” approach to increase the likelihood that regulators will address the problems of troubled institutions at an early stage to prevent or minimize loss to the insurance funds. Section 38 creates a capital-based framework for bank and thrift oversight that is based on the placement of financial institutions into one of five capital categories. Capital was made the centerpiece of the framework because it represents funds invested by an institution’s owners, such as common and preferred stock, that can be used to absorb unexpected losses before the institution becomes insolvent. Thus, capital was seen as serving a vital role as a buffer between bank losses and the deposit insurance system. Although section 38 does not in any way limit regulators’ ability to take additional supervisory action, it requires federal regulators to take specific actions against banks and thrifts that have capital levels below minimum standards. The specified regulatory actions are made increasingly severe as an institution’s capital drops to lower levels. Section 38 requires regulators to establish criteria for classifying depository institutions into the following five capital categories: well-capitalized, adequately capitalized, undercapitalized, significantly undercapitalized, and critically undercapitalized. 
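The five-category, capital-based trip wire described above can be sketched as a simple classification rule. This is an illustrative sketch only: section 38 directed the regulators to establish the actual classification criteria, so the numeric thresholds below are assumptions made for the example, not figures taken from the statute or the report.

```python
# Illustrative sketch of a section 38 capital-category classifier.
# The numeric thresholds are assumptions for this example; FDICIA
# left the actual criteria to the federal regulators.

def classify(total_risk_based, tier1_risk_based, leverage):
    """Return a capital category, given capital ratios as percentages."""
    if total_risk_based >= 10 and tier1_risk_based >= 6 and leverage >= 5:
        return "well-capitalized"
    if total_risk_based >= 8 and tier1_risk_based >= 4 and leverage >= 4:
        return "adequately capitalized"
    # The most severe category is checked before "significantly
    # undercapitalized" so the lowest trip wire takes precedence.
    if leverage <= 2:
        return "critically undercapitalized"
    if total_risk_based < 6 or tier1_risk_based < 3 or leverage < 3:
        return "significantly undercapitalized"
    return "undercapitalized"

print(classify(11.0, 7.0, 6.0))  # well-capitalized
print(classify(7.0, 3.5, 3.5))   # undercapitalized
print(classify(3.0, 1.5, 1.5))   # critically undercapitalized
```

Each successively lower category would then trigger the progressively more severe mandatory restrictions that the following paragraphs describe.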
The section does not place restrictions on institutions that meet or exceed the minimum capital standards—that is, those that are well- or adequately capitalized—other than prohibiting the institutions from paying dividends or management fees that would drop them into the undercapitalized category. A depository institution that fails to meet minimum capital levels faces several mandatory restrictions or actions under section 38. The mandatory actions are intended to ensure a swift regulatory response that would prevent further erosion of an institution’s capital. Specifically, section 38 requires an undercapitalized institution to submit a capital restoration plan detailing, among other things, how the institution is going to become adequately capitalized; restrict its asset growth during any quarter so that its average total assets for the quarter do not exceed the preceding quarter’s average total assets, unless certain conditions are met; and receive prior regulatory approval for acquisitions, branching, and new lines of business. Section 38 allows regulators to take additional actions against an undercapitalized institution, if deemed necessary. It also requires regulators to closely monitor the institution’s condition and its compliance with section 38’s requirements. Section 38 requires regulators to take more forceful corrective measures when institutions become significantly undercapitalized. Regulators must take 1 or more of 10 specified actions, including (1) requiring the sale of equity or debt or, under certain circumstances, requiring institutions to be acquired by or merged with another institution; (2) restricting otherwise allowable transactions with affiliates; and (3) restricting the interest rates paid on deposits by the institution. 
Each of these three steps is to be mandatory unless the regulator determines that taking such steps would not further the purpose of section 38, which is to resolve the problems of insured depository institutions at the least possible long-term loss to the insurance fund. Other specific actions available to the regulators include imposing more stringent asset growth limitations than required for undercapitalized institutions or requiring the institution to reduce its total assets; requiring the institution, or its subsidiaries, to alter, reduce, or terminate an activity that the regulator determines poses excessive risk to the institution; improving management by (1) ordering a new election for the institution's board of directors, (2) dismissing directors or senior executive officers, and/or (3) requiring an institution to employ qualified senior executive officers; prohibiting the acceptance, including renewal and rollover, of deposits; requiring prior approval for capital distributions from holding companies having control of the institution; and requiring divestiture by (1) the institution of any subsidiary that the regulator determines poses a significant risk to the institution, (2) the parent company of any nondepository affiliate that regulators determine poses a significant risk to the institution, and/or (3) any controlling company of the institution if the regulator determines that divestiture would improve the institution's financial condition and future prospects. Regulators can also require any other action that they determine would better resolve the problems of the institution with the least possible long-term loss to the insurance funds. Finally, section 38 prohibits significantly undercapitalized institutions from paying bonuses to or increasing the compensation of senior executive officers without prior regulatory approval. Section 38 requires more stringent action to be taken against critically undercapitalized institutions.
After an institution becomes critically undercapitalized, regulators have a 90-day period in which they must either place the institution into receivership or conservatorship or take other action that would better prevent or minimize long-term losses to the insurance fund. In either case, regulators must obtain FDIC concurrence with their actions. Section 38 also prohibits critically undercapitalized depository institutions from doing any of the following without FDIC’s prior written approval: entering into any material transaction (such as investments, expansions, acquisitions, and asset sales), other than in the usual course of business; extending credit for any highly leveraged transaction; amending the institution’s charter or bylaws, except to the extent necessary to carry out any other requirement of any law, regulation, or order; making any material change in accounting methods; engaging in any covered transaction; paying excessive compensation or bonuses; or paying interest on new or renewed liabilities at a rate that would increase the institution’s weighted average cost of funds to a level significantly exceeding the prevailing rates of interest on insured deposits in the institution’s normal market area. In addition, section 38 prohibits a critically undercapitalized institution from making any payment of principal or interest on the institution’s subordinated debt beginning 60 days after becoming critically undercapitalized. Finally, section 38 permits regulators to, in effect, downgrade an institution by one capital level if regulators determine that the institution is in an unsafe and unsound condition or that it is engaging in an unsafe and unsound practice. For example, regulators can treat an adequately capitalized institution as undercapitalized if the institution received a less than satisfactory rating in its most recent examination report for asset quality, management, earnings, or liquidity. 
This downgrading would then allow regulators to require the institution’s compliance with those restrictions applicable to undercapitalized institutions, such as limits on the institution’s growth. Thus, section 38 allows regulators to take enforcement actions against an institution that presents a danger to the insurance fund by virtue of a factor other than its capital level. In addition to the specific provisions of section 38, another section of FDICIA provides FDIC with the authority to appoint a conservator or receiver for undercapitalized institutions that meet certain criteria. To limit deposit insurance losses caused by factors other than inadequate capital, section 39 directs each regulator to establish standards defining safety and soundness in three overall areas: (1) operations and management; (2) asset quality, earnings, and stock valuation; and (3) compensation. Section 39 originally made the safety-and-soundness standards applicable to both insured depository institutions and their holding companies, but the reference to holding companies was deleted in 1994. The section originally required regulators to prescribe safety-and-soundness standards through the use of regulations. For the operations and management standards, section 39 did not provide specific requirements other than requiring regulators to prescribe standards on internal controls, internal audit systems, loan documentation, credit underwriting, interest rate exposure, and asset growth. For asset quality, earnings, and—to the extent feasible—stock valuation, the section initially required regulators to establish quantitative standards. (See the next section for a discussion of amendments made to section 39’s original provisions.) 
Under compensation standards, regulators were to prescribe, among other things, standards specifying when compensation, fees, or benefits to executive officers, employees, directors, or principal shareholders would be considered excessive or could lead to material financial loss. Section 39 initially contained a number of provisions concerning the failure to meet the regulators' prescribed safety-and-soundness standards. One key provision of the section directed regulators to require a corrective action plan from institutions or holding companies that fail to meet any of the standards. Such plans were to specify the steps an institution or a holding company was taking or intended to take to correct the deficiency. Section 39 directed the regulators to establish specific deadlines for submission and review of the plans. If an institution or a holding company failed to submit or implement the plan, regulators were mandated to issue an order requiring the institution or holding company to correct the deficiency and to take one or more of the following remedial actions as considered appropriate: restrict the institution's or holding company's asset growth, require the institution or holding company to increase its ratio of tangible equity to assets, restrict interest rates paid on deposits, and/or require the institution or holding company to take any other action that the regulator determines would prevent or minimize losses to the insurance fund. Section 39 also initially required regulators to take at least one of the first three previously mentioned remedial actions against institutions that (1) fail to meet any of the operational and/or asset quality standards listed in FDICIA, (2) have not corrected the deficiency, and (3) either commenced operations or experienced a change in control within the preceding 24 months or experienced extraordinary growth during the prior 18 months of failing to meet the standards.
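The escalating response that section 39 originally required can be summarized as a short decision sequence. The function below is a schematic sketch under simplified yes/no inputs; the function name and step descriptions paraphrase the report rather than quote the statute.

```python
# Schematic sketch of the section 39 enforcement sequence: a failed
# standard triggers a corrective action plan, and failure to submit or
# implement that plan triggers a mandatory order plus remedial actions.
# Names and strings are illustrative, not statutory text.

def section39_response(meets_standards, plan_submitted, plan_implemented):
    """Trace the escalating regulatory response under section 39."""
    if meets_standards:
        return []  # no corrective action plan is required
    steps = ["require corrective action plan"]
    if plan_submitted and plan_implemented:
        return steps
    # Failure to submit or implement the plan mandates an order and
    # one or more remedial actions chosen by the regulator.
    steps.append("issue order to correct deficiency")
    steps.append("apply remedial action(s): restrict asset growth, "
                 "raise tangible equity ratio, restrict deposit rates, "
                 "or other action to minimize insurance fund losses")
    return steps

print(section39_response(True, False, False))   # []
print(section39_response(False, False, False))  # three escalating steps
```

The sequence makes the trip-wire design visible: discretion enters only in choosing among the remedial actions, not in whether to respond.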
The Riegle Community Development and Regulatory Improvement Act of 1994 (CDRI) was passed on September 23, 1994, and contains more than 50 provisions that were intended to reduce bank regulatory burden and paperwork requirements. Among its provisions, CDRI amended some of section 39’s requirements to provide regulators with greater flexibility and to respond to concerns that section 39 would subject depository institutions to undue “micromanagement” by the regulators. The CDRI amendments allow regulators to issue the standards in the form of guidelines instead of regulations. If guidelines are used, the amendments give the regulators the discretion to decide whether a corrective action plan will be required from institutions that are found not to be in compliance with the standards. Finally, the amendments eliminate the requirement that regulators issue quantitative standards for asset quality and earnings and exclude holding companies from the scope of the standards. CDRI did not change section 39’s original provisions regarding the content and review of any plan required as a result of noncompliance with section 39’s safety-and-soundness standards. Thus, regulators still are required to issue regulations governing the contents of the plan, time frames for the submission and review of the plans, and enforcement actions applicable to the failure to submit or implement a required plan. Since the passage of FDICIA in 1991, the financial condition of the bank and thrift industries has improved substantially. As shown in table 1.1, the net income of banks more than doubled between 1991 and 1995, reaching a record high of $48.8 billion in 1995. Table 1.1 also shows that the net income of thrifts grew dramatically in 1992 from the 1991 level, decreased slightly in 1993 and 1994, and grew to a record $7.6 billion in 1995. In the period from 1992 through 1995, the number of bank and thrift failures declined from their 1980 to 1990 levels. 
For example, 6 banks failed in 1995, compared with 169 bank failures in 1990. The low number of bank failures in recent years has allowed the bank insurance fund to rebuild its reserve level. After falling to a record low of negative $7 billion in 1991, the fund grew to over $25 billion in 1995. The recapitalization of the bank insurance fund allowed FDIC to reduce the deposit insurance assessment rate paid by commercial banks twice in the latter part of 1995. As a result, commercial banks are paying the lowest average assessment rate in history. Despite the improved performance of the thrift industry, the thrift insurance fund remained undercapitalized as of December 1995. FDICIA required FDIC to increase the bank and thrift insurance funds’ reserve balances to at least 1.25 percent of the estimated insured deposits of insured institutions within 15 years of enactment of a recapitalization schedule. FDIC achieved this reserve ratio for the bank insurance fund on May 31, 1995. However, SAIF is not expected to achieve its required reserve ratio until 2002, according to FDIC. Thus, insurance fund premiums paid by thrifts remain significantly higher than those paid by commercial banks. The principal objective of this review was to assess the progress and results of the federal regulators’ implementation of FDICIA’s Prompt Regulatory Action provisions. Specifically, we assessed (1) the efforts of federal regulators to implement sections 38 and 39 and (2) the impact of sections 38 and 39 on federal oversight of the bank and thrift industries. To assess the federal regulators’ efforts to implement sections 38 and 39, we compared the legislative provisions with the implementing regulations and guidelines developed and issued by the regulators. In addition, we asked for and reviewed additional guidance developed by OCC and FRS. 
We concentrated our assessment on OCC and FRS because the FDIC and Treasury Offices of the Inspector General (OIG), respectively, had performed similar reviews of FDIC’s and OTS’ implementation of section 38. To the extent possible, we used the results of the FDIC OIG effort to compare and contrast with the results of our review of OCC’s and FRS’ implementation of section 38. We did not include the Treasury OIG’s results because the OIG was in the process of finalizing its evaluation. However, the OIG reviews did not assess FDIC’s or OTS’ implementation of section 39. We also assessed OCC’s and FRS’ implementation of section 38 by analyzing the supervisory actions used on the 61 banks that were undercapitalized (including those that were significantly and critically undercapitalized) for section 38 purposes. We identified the 61 banks using financial data (call reports) obtained from FDIC for the quarters ending December 1992 through December 1994. In the case of OCC, we looked at all of the 52 undercapitalized banks that were located in OCC’s Western, Southwest, and Northeast districts. These data provided us with coverage of 68 percent of all OCC-regulated banks that were undercapitalized during that period. For FRS, we looked at all nine undercapitalized banks under the jurisdiction of FRS’ Atlanta, Dallas, and San Francisco district banks. Doing so resulted in a coverage of 56 percent of all FRS-regulated banks that were undercapitalized during that period. While our results are not projectable to all undercapitalized banks under OCC’s and FRS’ jurisdiction, our results are representative of the OCC and FRS locations that we visited. As part of our assessment of (1) OCC’s and FRS’ efforts to implement sections 38 and 39 and (2) the impact of the sections on regulatory oversight, we interviewed OCC and FRS officials in the previously mentioned locations as well as in Washington, D.C. 
We obtained the officials’ views on the legislative intent underlying sections 38 and 39 and the evolution of the final regulations and guidelines. We also had discussions with the officials about regulatory actions, both under their traditional enforcement and section 38 authority, taken against the 61 banks that we reviewed. Additionally, we interviewed FDIC and OTS officials to obtain information on the interagency process used to develop the safety-and-soundness standards required to implement section 39. To assess the impact of sections 38 and 39 on the regulatory oversight of banks and thrifts, we used the 61 banks that we determined were undercapitalized for section 38 purposes to evaluate OCC’s and FRS’ use of their section 38 authority (reclassification and directives) versus the use of traditional enforcement tools. In addition, we reviewed OCC’s and FRS’ internal guidance and policies regarding the use of section 38 versus their other enforcement tools. We also obtained and analyzed information on the number of banks that the regulators had determined were undercapitalized for section 38 purposes versus the number of banks they had identified as being “problem” banks. We analyzed various articles and economic literature issued on (1) the impact of sections 38 and 39 on the regulatory process and (2) the implications of a capital-based regulatory approach in general. Additionally, we used OIG and our prior report results and recommendations to assess the content of the implementing regulations and guidelines as well as the likely impact of section 38 on the regulatory process. We did our work from November 1994 to September 1996 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Federal Reserve Board, the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and the Office of Thrift Supervision for their review and comment. 
A summary of the agencies’ comments and our evaluation are included at the end of chapter 3. The agencies’ comment letters are reprinted in appendixes III to VI. Staff of OCC and FDIC also provided additional technical comments on the draft report, which were incorporated as appropriate. Regulators have taken steps to implement FDICIA’s Prompt Regulatory Action provisions. However, because the financial condition of banks and thrifts has improved since the passage of FDICIA in 1991, relatively few institutions had been considered undercapitalized under section 38 as of September 1996. Our review of a sample of 61 undercapitalized banks found that OCC and FRS have generally met section 38 requirements regarding the identification of undercapitalized institutions, the receipt and review of capital restoration plans, and the closure of critically undercapitalized institutions. Our finding was consistent with the FDIC OIG’s conclusions regarding FDIC’s implementation of section 38. All three regulators (OCC, FRS, and FDIC) had virtually no experience in using their section 38 reclassification authority and had used their section 38 authority to take enforcement actions on a relatively small number of institutions. As of September 1996, none of the regulators had used section 39 enforcement powers. All but two of the safety-and-soundness standards required for the implementation of section 39 became effective in August 1995. The remaining two standards—asset quality and earnings—became effective on October 1, 1996, allowing for the full implementation of section 39.
The regulators explained that they missed the December 1993 statutory deadline for the implementation of section 39 due to (1) the complication of developing standards on an interagency basis, (2) the concern of ensuring that the standards did not unnecessarily add to the existing regulatory burden of depository institutions, and (3) the knowledge that Congress was considering amending section 39’s requirements governing the standards. Regulations issued by the four regulators to implement section 38 requirements are intended to ensure that prompt regulatory action is taken whenever an institution’s capital condition poses a threat to federal deposit insurance funds. Banks and thrifts have increased their capital levels since the passage of FDICIA so that relatively few financial institutions have been subject to section 38 regulatory actions in the 3 years that the regulations were in effect. Between December 1992—the effective date of the regulations—and December 1995, the number and total assets of institutions that were undercapitalized had decreased from about 2 percent in 1992 to less than one-quarter of 1 percent of all banks and thrifts by 1995. The regulators jointly developed the implementing regulations for section 38 and based the criteria for the five capital categories on international capital standards and section 38 provisions. The four regulators specifically based the benchmarks for an adequately capitalized institution on the Basle Committee requirement, which stipulates that an adequately capitalized international bank must have at least 8 percent total risk-based capital and 4 percent tier 1 capital. For the definition of a critically undercapitalized institution, the regulators adopted section 38’s requirement of a tangible equity ratio of at least 2 percent of total assets. The regulators based the criteria for the remaining three capital categories on these two benchmarks. 
As shown in table 2.1, three capital ratios are used to determine if an institution is well-capitalized, adequately capitalized, undercapitalized, or significantly undercapitalized. A well-capitalized or adequately capitalized institution must meet or exceed all three capital ratios for its capital category. To be deemed undercapitalized or significantly undercapitalized, an institution need only fall below one of the ratios listed for its capital category. Although not shown in the table, a fourth ratio—tangible equity—is used to categorize an institution as critically undercapitalized. Any institution that has a 2-percent or less tangible equity ratio is considered critically undercapitalized, regardless of its other capital ratios. So far, relatively few financial institutions have been categorized as undercapitalized and, thus, subject to section 38 regulatory actions. This situation was due, in part, to the improved financial condition of the bank and thrift industries. The implementation of section 38 also provided institutions with strong incentives to increase their capital levels to avoid the mandatory restrictions and supervisory actions associated with being undercapitalized. As shown in table 2.2, the number of financial institutions whose reported financial data indicated undercapitalization, based on section 38 implementing regulations, steadily declined between December 1992 and December 1995. The beginning of the decline coincided with the December 1992 implementation of section 38. Data reported by financial institutions indicated that 252 banks and thrifts, or about 2 percent of those institutions, were undercapitalized in December 1992, including those that were significantly and critically undercapitalized. As of December 1995, only 29 banks and thrifts, or about one-quarter of 1 percent of all banks and thrifts, fell into the undercapitalized categories. 
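The category rules described above amount to a simple decision procedure: an institution must meet or exceed all three ratios to hold a category, falling below any one ratio is enough to drop it into an undercapitalized category, and a tangible equity ratio of 2 percent or less overrides everything else. The sketch below illustrates that logic. The 8-percent and 4-percent adequately capitalized benchmarks and the 2-percent tangible equity cutoff come from the text; the remaining cutoffs (the well-capitalized thresholds, the significantly undercapitalized thresholds, and the 4-percent leverage floor) are illustrative assumptions, since table 2.1 itself is not reproduced here.

```python
def capital_category(total_rbc: float, tier1_rbc: float, leverage: float,
                     tangible_equity: float) -> str:
    """Assign a section 38 capital category from four capital ratios (in percent)."""
    # A tangible equity ratio of 2 percent or less is critically
    # undercapitalized regardless of the other three ratios.
    if tangible_equity <= 2.0:
        return "critically undercapitalized"
    # Falling below ANY one ratio places an institution in the lower category.
    # These cutoffs, below the adequately capitalized benchmarks, are
    # illustrative assumptions.
    if total_rbc < 6.0 or tier1_rbc < 3.0 or leverage < 3.0:
        return "significantly undercapitalized"
    # Adequately capitalized benchmarks: 8 percent total risk-based and
    # 4 percent tier 1 capital (per the text); 4 percent leverage is assumed.
    if total_rbc < 8.0 or tier1_rbc < 4.0 or leverage < 4.0:
        return "undercapitalized"
    # Well capitalized requires meeting or exceeding all three (assumed) ratios.
    if total_rbc >= 10.0 and tier1_rbc >= 6.0 and leverage >= 5.0:
        return "well capitalized"
    return "adequately capitalized"
```

Note how the override ordering matters: the tangible equity test is checked first, and the undercapitalized tests are checked before the well-capitalized test, so a single deficient ratio always dominates.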
Our review of regulatory actions at 61 sample banks indicated that OCC and FRS complied with the basic requirements of section 38 and its implementing regulations. Specifically, OCC and FRS categorized the banks in accordance with section 38 criteria and notified undercapitalized banks of the restrictions and regulatory actions associated with their capital category. In addition, OCC and FRS typically obtained and reviewed the required capital restoration plans within the time frames specified by section 38. Moreover, the two regulators generally took action to close the critically undercapitalized banks as required by section 38. Both regulators had limited experience with issuing section 38 directives or using their reclassification authority. The FDIC OIG reported similar results regarding FDIC’s implementation of section 38. OCC and FRS correctly identified and categorized the 61 sampled banks using criteria specified in section 38 legislation and implementing regulations. While primarily relying on call reports, they also used the on-site examination process to identify undercapitalized banks. The regulators then sent notices to those banks to inform the banks of their undercapitalized status and the associated section 38 mandatory restrictions, requirements, and regulatory responses. In the jurisdictions of the offices that we visited, OCC and FRS identified a total of 61 banks as being undercapitalized at some point from December 1992 through December 1994. The two regulators identified 60 banks as undercapitalized on the basis of the call report data reported to the regulators on a quarterly basis. FRS identified an additional bank as being undercapitalized on the basis of the results of an on-site safety-and-soundness examination. Table 2.3 shows the distribution of the banks in our sample by regulator and section 38 capital category. 
OCC and FRS sent the required notices to the management of the 61 banks in our sample informing them of their banks’ undercapitalized status. The notification letters advised the banks of the mandatory requirements and restrictions associated with their section 38 capital category. For significantly and critically undercapitalized banks, the notification letters also pointed out the additional mandatory and discretionary regulatory responses or actions associated with their section 38 capital categorization. OCC and FRS generally met section 38 requirements governing capital restoration plans (CRP). Section 38 requires banks to prepare a CRP within 45 days of becoming undercapitalized and allows regulators 60 days to review the CRP. For the 61 banks that we reviewed, OCC and FRS were generally successful in getting banks to submit the plans on time and in meeting the required time frames for reviewing and approving or rejecting the plans. Section 38 provisions require that CRPs prepared by undercapitalized institutions contain certain elements. Specifically, the section requires that CRPs specify the steps that the institution will take to become adequately capitalized, the levels of capital the institution will attain during each year the plan will be in effect, how the institution will comply with the restrictions or requirements applicable to its undercapitalization capital category, and the types and levels of activities in which the institution will engage. Section 38 prohibits regulators from accepting a CRP unless it (1) contains the previously mentioned required elements, (2) is based on realistic assumptions and is otherwise likely to succeed, and (3) would not appreciably increase the institution’s riskiness. Holding companies are required to guarantee the institution’s compliance with the CRP and to provide adequate assurance of performance. 
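The CRP time frames above (45 days for a bank to submit a plan, 60 days for the regulator to review it) reduce to straightforward date arithmetic. A minimal sketch follows; the trigger and submission dates are hypothetical inputs, not taken from the banks in the sample.

```python
from datetime import date, timedelta

# Section 38 time frames described in the text
CRP_SUBMISSION_DAYS = 45  # bank must submit a CRP within 45 days
CRP_REVIEW_DAYS = 60      # regulator then has 60 days to review it

def crp_deadlines(undercapitalized_on: date, submitted_on: date) -> tuple:
    """Return (submission deadline, review deadline) for a capital restoration plan."""
    submit_by = undercapitalized_on + timedelta(days=CRP_SUBMISSION_DAYS)
    review_by = submitted_on + timedelta(days=CRP_REVIEW_DAYS)
    return submit_by, review_by

# Hypothetical example: a bank becomes undercapitalized on December 31, 1992,
# and submits its CRP on February 10, 1993.
submit_by, review_by = crp_deadlines(date(1992, 12, 31), date(1993, 2, 10))
```

Under these hypothetical dates, the bank's submission deadline falls in mid-February 1993 and the regulator's review deadline in April 1993, which is the kind of calculation behind the table 2.4 and 2.5 comparisons.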
Although the notification letters sent to the 61 undercapitalized banks in our review indicated that a CRP was required, only 44 banks submitted a CRP. Of the 17 banks that did not submit CRPs, 15 experienced conditions within the first few months of becoming undercapitalized that, according to the regulator, precluded the need for a CRP. Specifically, nine failed, two merged with other banks, one was voluntarily liquidated, and three became adequately capitalized. OCC chose not to pursue obtaining CRPs from the remaining two banks. In one case, OCC deferred its enforcement efforts pending the results of an ongoing investigation by the Federal Bureau of Investigation and local enforcement authorities into potential criminal activity by the bank’s management. In the second case, OCC issued a section 38 directive instead of formally enforcing the requirement that the bank submit a CRP to achieve corrective action in a more timely fashion. OCC and FRS were generally successful in getting the 44 institutions that submitted CRPs to meet the 45-day requirement. As shown in table 2.4, 10 banks exceeded the 45-day requirement, but most had submitted CRPs within 55 days. OCC and FRS were typically successful in meeting the 60-day time frame for reviewing the 44 CRPs submitted by the banks in our sample. As shown in table 2.5, the regulators met the 60-day requirement on all but one applicable case where data were available to make a determination. Of the 44 CRPs submitted by the banks that we looked at, OCC and FRS rejected 30 of the CRPs as inadequate and required those banks to revise and resubmit them. The regulators used the criteria specified in section 38 legislation to determine whether a CRP was acceptable. Ultimately, the regulators approved 29 of the CRPs submitted by the undercapitalized banks that we reviewed. Of the 15 banks whose CRPs were not approved, 10 ultimately failed. 
One of the 15 banks merged with another bank, and the remaining 4 banks obtained enough capital to eliminate the need for a CRP. As required by section 38, OCC and FRS have generally taken action to close critically undercapitalized banks within a specified time frame. Under section 38, regulators are required to close critically undercapitalized institutions within 90 days of the institutions’ becoming critically undercapitalized unless the regulator and FDIC concur that other actions would better protect the insurance funds from losses. As previously shown in table 2.3, there were 25 critically undercapitalized banks in our sample. OCC and FRS closed 17 of these banks because they were critically undercapitalized. Fifteen of the 17 banks were closed within the prescribed 90-day period. In the case of the two banks that were closed after the 90-day deadline had expired, regulators approved the delay to allow FDIC more preparation time for the orderly closure of the banks. For the remaining 8 critically undercapitalized banks in our sample, 1 merged and the other 7 improved their capital position above the critically undercapitalized level before the end of the 90-day period. From December 1992 to September 1996, OCC and FRS used their section 38 authority to initiate directives against 8 of the 61 banks in our sample. Section 38 requires regulators to take specific regulatory actions against significantly undercapitalized institutions and makes the use of these actions discretionary for other undercapitalized institutions. In those instances in which section 38 directives were used, both OCC and FRS complied with the governing requirements of section 38 legislation and implementing regulations. As previously discussed in chapter 1, section 38 mandates regulators to take at least 1 of 10 specified actions against significantly undercapitalized institutions.
The section also provides regulators with discretionary authority to take any of the 10 specified actions that they consider appropriate against undercapitalized institutions. OCC used directives against a relatively small number of the banks in our sample. Of the 52 OCC-regulated banks we reviewed, 16 were significantly undercapitalized at some time between December 1992 and December 1994, according to their call report data. Thus, unless the status of the banks changed, OCC would have been expected to initiate a directive against the 16 banks to take the enforcement actions mandated by section 38. However, OCC initiated directives against only five of these banks. Seven of the remaining 11 banks either failed, merged, or improved their capital status within 90 days of becoming significantly undercapitalized, thus eliminating the need for OCC to issue a directive. OCC officials told us that directives were not initiated against the remaining four significantly undercapitalized banks because they were already subject to formal enforcement actions that OCC believed were similar to those that would be covered by directives. Thus, initiating a directive would have duplicated the existing, ongoing enforcement actions. FRS initiated directives against three of the seven FRS-regulated banks in our sample that were categorized as significantly undercapitalized at some point between December 1992 and December 1994. According to FRS, it did not need to issue directives against three other significantly undercapitalized banks because they improved their capital status, merged with another institution, or were voluntarily liquidated shortly after becoming significantly undercapitalized. FRS did not initiate a directive against the remaining significantly undercapitalized bank because the applicable corrective actions were already under way in connection with existing federal and state enforcement actions and in connection with the bank’s CRP.
From December 1992 to September 1996, OCC and FRS used their reclassification authority in two instances. Section 38 authorizes bank regulators under certain circumstances to downgrade, or treat as if downgraded, an institution’s capital category if (1) it is in an unsafe or unsound condition or (2) it is deemed by the regulator to be engaging in an unsafe or unsound practice. Reclassifying an institution to the next lower capital category allows regulators to subject the institution to more stringent restrictions and sanctions. According to OCC officials, OCC would use its section 38 reclassification authority only if its traditional enforcement actions had not been successful in correcting a bank’s problems. OCC officials told us that they prefer to use their traditional enforcement authority for several reasons. One reason was the broader range of options that OCC’s traditional enforcement actions provide both in the areas covered by the enforcement action as well as in the degree of severity of the action. Another reason that OCC prefers to use its traditional enforcement actions is the bilateral nature of these actions. According to OCC officials, traditional enforcement actions, such as a formal written agreement between the regulator and an institution, may achieve greater acceptance by the institution for taking corrective action than the unilateral nature of section 38 reclassifications and/or directives. However, OCC officials said that reclassification under section 38 can sometimes allow them to initiate certain actions faster (i.e., through directives) than would be possible using their traditional enforcement actions. In the one case involving OCC reclassification, the agency reclassified a bank from adequately capitalized to undercapitalized because (1) OCC believed the bank was operating in an unsafe and unsound condition that would impair its capital levels and (2) the bank had not complied with earlier OCC enforcement actions. 
The reclassification allowed OCC to initiate a directive that, among other requirements, mandated the dismissal of a senior bank official and a director who OCC believed were responsible for the bank’s deteriorated condition. Despite OCC’s use of its reclassification authority and a section 38 directive, the bank’s condition deteriorated further until it failed 8 months later. FRS has an internal policy that requires all problem banks, which it defined as banks with a composite rating of 4 or 5, to be considered operating in an unsafe and unsound condition and, thus, candidates for reclassification. Between December 1992 and December 1994, 58 banks had a FRS-assigned composite rating of 4 or 5. In its only use of its reclassification authority, FRS reclassified a well-capitalized bank to adequately capitalized because of continuous deterioration in the bank’s asset quality, earnings, and liquidity. This bank’s capital levels subsequently deteriorated to the point where it was considered significantly undercapitalized. The bank has since improved its capital to the well-capitalized category and is no longer considered to be a problem institution by FRS. In September 1994, the FDIC OIG reported that FDIC had generally complied with the provisions of section 38 and its implementing regulations. Table 2.6 compares the three regulators’ implementation of specific section 38 provisions. As of September 1996, regulators had not used their section 39 enforcement authority against an institution. In July 1995, regulators issued final guidelines and regulations to implement parts of section 39. Specifically, the regulators issued standards governing operations and management and compensation. They also issued requirements for submission and review of compliance plans. The regulators issued the remaining standards required for the full implementation of section 39—asset quality and earnings—in August 1996. 
FDICIA had established a deadline of December 1, 1993, for the implementation of section 39. Regulators said they were unable to meet that deadline because of (1) the difficulty of jointly developing the standards, (2) the concerns of regulators and financial institutions that the implementation of section 39 could increase existing regulatory burden for banks and thrifts, and (3) the knowledge that Congress was considering amending the section 39 requirements to provide regulators with greater flexibility and discretion in their implementation of the section. According to the regulators, developing and issuing safety-and-soundness standards was complicated by the interagency process and by concerns about the potential regulatory burden associated with the standards. Unlike the process for promulgating capital standards under section 38, which used the Basle Accord as a reference point, the regulators had no generally accepted standards to use as the basis for the safety-and-soundness standards. In addition, the regulators told us that the legislative history for section 39 did not provide specific guidance on the standards envisioned by Congress. Furthermore, the regulators wanted to ensure that the section 39 standards did not increase the bank and thrift industries’ regulatory burden without a corresponding benefit to the federal deposit insurance funds and taxpayers. OCC and FRS officials said that the lack of generally agreed upon standards for the areas covered by section 39 contributed to delays in developing and issuing the section’s standards. They explained that regulators consider numerous variables in assessing an institution’s safety and soundness. As a result, developing standards on an interagency basis for areas such as internal controls and interest rate exposure was difficult. According to the officials, the various regulators had different viewpoints as to how specific or general the standards should be. 
On July 15, 1992, the regulators issued a joint solicitation of comments on the section 39 safety-and-soundness standards. In soliciting the views of the banking industry on the form and content of the standards, the regulators said that they were concerned with “establishing unrealistic and overly burdensome standards that unnecessarily raise costs within the regulated community.” The four regulators collectively received over 400 comment letters, primarily from banks and thrifts. According to the regulators, the comments strongly favored adopting general standards, rather than specific standards, to avoid regulatory “micromanagement.” The regulators considered the public comments in developing the proposed standards that were published on November 18, 1993. The regulators proposed standards for the following three areas required by section 39: (1) operations and management, (2) asset quality and earnings, and (3) compensation. According to the notice of proposed rulemaking, regulators proposed general standards, rather than detailed or quantitative standards, to “avoid dictating how institutions are to be managed and operated.” However, as required by section 39 before its amendment in 1994, the regulators proposed two quantitative standards—a maximum ratio of classified assets-to-capital and a formula to determine minimum earnings sufficient to absorb losses without impairing capital. Section 39 also required the regulators to set, if feasible, a minimum ratio of market-to-book value for publicly traded shares of insured institutions as a third quantitative standard. The regulators determined that issuing such a standard was technically feasible, but they concluded that it was not a reasonable means of achieving the objectives of the Prompt Regulatory Action provisions. 
The regulators explained that an institution’s stock value can be affected by factors that are not necessarily indicative of an institution’s condition, such as the performance of the general stock market and industry conditions. As a result, the regulators believed that a market-to-book value ratio would not be an operationally reliable indicator of safety and soundness. Therefore, the regulators ultimately decided against proposing a market-to-book value ratio as a third quantitative standard. The proposed regulations also described procedures for supervisory actions that were consistent with those contained in the section 39 legislation for institutions failing to comply with standards. Specifically, the proposed regulations required institutions to prepare and submit a compliance plan within 30 days of being notified by the regulator of their noncompliance. The plan was to include a description of the steps the institution intended to take to correct the deficiency. Regulators would then have 30 days to review the plan. In addition, the proposed regulations specified enforcement actions regulators would take if an institution failed to submit an acceptable compliance plan or failed to implement the plan. The regulators collectively received 133 comment letters, primarily from financial institutions, in response to the November 18, 1993, notice of proposed rulemaking. According to the four regulators, those who commented generally found the agencies’ proposed standards, including the two quantitative standards, acceptable. However, some of those who commented criticized the proposed quantitative standards as inflexible and overly simplistic. OCC and FRS officials attributed further delays in implementing section 39 to their knowledge that in the period from late 1993 to mid-1994, Congress was considering legislation that would amend section 39’s requirements. 
Congress was considering amending section 39 to reduce the administrative requirements for insured depository institutions consistent with safe-and-sound banking practices. After CDRI was passed in September 1994, regulators needed additional time to revise the standards they proposed in November 1993 to take advantage of the additional flexibility provided by the section 39 amendments. On July 10, 1995, the regulators published final and proposed guidelines and regulations to implement section 39, as amended. The final guidelines covered operational and managerial standards (internal controls, information systems, internal audit systems, loan documentation, credit underwriting, interest rate exposure, and asset growth) as well as compensation standards. The final guidelines were effective in August 1995. Along with the final guidelines, regulators proposed new standards for asset quality and earnings. The final standards for asset quality and earnings were issued on August 27, 1996, with an effective date of October 1, 1996. The final standards contained in the guidelines are less prescriptive than those proposed in November 1993. For example, under internal controls and information systems, the guidelines specified that the “institution should have internal controls and information systems that are appropriate to the size of the bank and the nature and scope of its activities.” In addition, the regulators used the additional flexibility provided by CDRI to eliminate the two previously proposed quantitative standards for classified assets and earnings. According to the regulators, the use of general rather than specific standards was supported by the overwhelming number of commenters responding to the regulators’ request for comments on the section 39 safety-and-soundness standards.
Moreover, the use of guidelines instead of regulations gives the regulators flexibility in deciding whether to require a compliance plan from an institution found to be in noncompliance with the standards. The regulators issued regulations addressing the (1) required content of compliance plans, (2) time frames governing the preparation and review of a plan, and (3) regulatory actions applicable to the failure to submit or comply with a plan. The compliance plan regulations were issued jointly on July 10, 1995, with the section 39 guidelines governing the operational, managerial, and compensation standards. Both the guidelines and regulations became effective in August 1995. FDICIA’s Prompt Regulatory Action provisions granted additional enforcement tools to regulators and provided more consistency in the treatment of capital-deficient institutions. However, sections 38 and 39, as implemented, raise questions about whether regulators will act early and forcefully enough to prevent or minimize losses to the insurance funds. Section 38 does not require regulators to take action until an institution’s capital drops below the adequately capitalized level. However, depository institutions typically experience problems in other areas, such as asset quality and management, long before these problems result in impaired capital levels. Moreover, regulators have wide discretion governing the application of section 39 because the guidelines and regulations implementing section 39, as amended, do not (1) establish clear and specific definitions of unsound conditions and practices or (2) link such conditions or practices to specific mandatory regulatory actions. Other initiatives that have been undertaken as a result of FDICIA, as well as the regulators’ recognition of the need to be more proactive in preventing unsafe and unsound practices, may help increase the likelihood that sections 38 and 39 will be used to provide prompt and corrective regulatory action. 
FDICIA’s corporate governance and accounting reform provisions were designed to improve management accountability and facilitate early warning of safety-and-soundness problems. In addition, FDICIA requires regulators to revise the risk-based capital standards to ensure that reported capital accurately reflects the institution’s risk of operations. Regulators have also announced new initiatives to improve monitoring and control of bank risk-taking, but these initiatives have not been fully implemented or tested. The success of these initiatives, coupled with the regulators’ willingness to use their various enforcement authorities, including sections 38 and 39, will be instrumental in determining whether losses to the insurance funds are prevented or minimized in the future. Available evidence suggests that the implementation of the section 38 capital standards between 1992 and 1995, along with other factors, has benefited the bank and thrift industries and may have helped improve federal oversight. Specifically, the section 38 standards (1) provide financial institutions with incentives to raise equity capital, (2) should help regulators prevent seriously troubled institutions from taking actions that could compound their losses, and (3) should help ensure more timely closure of near-insolvent institutions. In addition, regulatory officials have stated that section 38 serves as an important supplemental enforcement tool. According to the regulators and banking industry analysts, section 38 provides depository institutions with strong incentives to raise additional equity capital. These officials explained that financial institutions were concerned about the potential ramifications of becoming undercapitalized, and the institutions raised additional equity capital to avoid potential sanctions.
Once the implementing regulations were issued, depository institutions had clear benchmarks as to the levels of capital they needed to achieve to avoid mandatory regulatory intervention. Since the implementation of section 38, thanks in part to record industry profits, the capital levels of banks and thrifts have reached their highest levels since the 1960s. Another benefit of the section 38 capital standards is that they should help prevent certain practices and conditions that rapidly eroded the capital of troubled institutions from 1980 to 1990 and contributed to deposit insurance fund losses. For example, section 38 standards impose growth restrictions to prevent undercapitalized and significantly undercapitalized institutions from trying to “grow” their way out of financial difficulty. As a result, it should be more difficult for these institutions to rapidly expand their asset portfolios and increase potential insurance fund losses, as many thrifts did during the 1980s. Section 38 also requires regulators to prohibit undercapitalized institutions from depleting their remaining capital by paying dividends. OCC and FRS officials told us that another benefit of section 38 is the mandatory closure rule for critically undercapitalized institutions. These officials explained that before the implementation of section 38, regulators typically waited until an institution had 0-percent equity capital before closing it as insolvent. The officials also said that under section 38, they now have a clear legal mandate for closing problem institutions at 2-percent tangible equity capital, which should provide the insurance funds with a greater cushion against losses. Regulatory officials we contacted also said that section 38 serves as a useful supplement to their traditional enforcement authority. 
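The capital categories and the 2-percent closure trigger described above lend themselves to a simple classification sketch. Only the 2-percent tangible equity trigger is cited in the text; the other threshold values below are the commonly cited Prompt Corrective Action cut-offs and should be treated as illustrative assumptions rather than figures drawn from this report.

```python
# Simplified sketch of section 38-style capital categories.
# Only the 2-percent tangible equity closure trigger is cited in the
# text; the remaining thresholds are illustrative assumptions based on
# the commonly cited Prompt Corrective Action cut-offs.

def capital_category(total_rbc, tier1_rbc, leverage, tangible_equity):
    """Classify an institution by its capital ratios (in percent)."""
    if tangible_equity <= 2.0:
        # Mandatory closure track under section 38.
        return "critically undercapitalized"
    if total_rbc < 6.0 or tier1_rbc < 3.0 or leverage < 3.0:
        return "significantly undercapitalized"
    if total_rbc < 8.0 or tier1_rbc < 4.0 or leverage < 4.0:
        return "undercapitalized"
    if total_rbc >= 10.0 and tier1_rbc >= 6.0 and leverage >= 5.0:
        return "well capitalized"
    return "adequately capitalized"
```

Under this sketch, an institution at 2-percent tangible equity is classified for closure even if its risk-based ratios look healthier, which is the cushion against losses that OCC and FRS officials described.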
For example, OCC officials said that section 38 directives allow for the prompt removal of bank officials when the agency believes such officials are responsible for the bank’s financial and operational deterioration. OCC officials said that before FDICIA, removing such individuals took longer, sometimes up to several months. Although the capital-based regulatory approach strengthens federal oversight in several ways, by itself it has significant limitations as a mechanism to provide early intervention to safeguard the insurance funds. Capital is a lagging indicator of a financial institution’s deterioration. Troubled institutions may already have irreversible financial and operational problems that would inevitably result in substantial insurance fund losses by the time their capital deteriorates to the point where mandatory enforcement actions are triggered under section 38. In addition, troubled institutions often fail to report accurate information on their true financial conditions. As a result, many troubled institutions that have serious safety-and-soundness problems may not be subject to section 38 regulatory actions. Capital has been a traditional focus for regulatory oversight because it is a reasonably obvious and accepted measure of financial health. However, our work over the years has shown that, although capital is an important focus for oversight, it does not typically begin to decline until an institution has experienced substantial deterioration in other components of its operations and finances. It is not unusual for an institution’s internal controls, asset quality, and earnings to deteriorate for months, or even years, before conditions require that capital be used to absorb losses. As a result, regulatory actions, such as requirements for capital restoration plans or growth limits, may have only marginal effects because of the extent of deterioration that may have already occurred. 
Relating regulatory actions to capital alone has another inherent limitation in that reported capital levels do not always accurately reflect troubled institutions’ actual financial conditions. Troubled institutions have little incentive to report the true level of problem assets or to establish adequate reserves for potential losses. As a result, some institutions’ reported capital levels were often artificially high. The reporting of inaccurate capital levels was evident from 1980 to 1990 as many of the troubled institutions, which reported some level of capital before failing, ultimately generated substantial losses to the insurance fund. Thus, capital-driven regulatory responses would likely have had limited effectiveness since the institutions were already functionally insolvent. As illustrated by the following example, troubled institutions’ reported capital levels can plummet rapidly in times of economic downturn. In the 1980s, many New England banks, with average equity capital ratio levels exceeding the regulatory minimum requirements then in existence, were engaged in aggressive high-risk commercial real estate lending. These banks frequently ignored basic risk diversification principles by committing a substantial percentage of their lending portfolios to construction, multifamily housing, and commercial real estate lending—in some cases as high as 50 percent. This practice tied their future financial health to those industries. When the New England economy fell into recession in the late 1980s and early 1990s, many of the poorly managed banks in the region experienced a deterioration in their asset quality, earnings, and liquidity well before their capital levels declined. For example, once regulators recognized the recession’s effect on the Bank of New England portfolios, examiners required the bank to adversely classify an increasing number of loans—especially commercial real estate loans whose repayment was questionable due to the economic downturn. 
As the level of classified loans increased, the examiners required the Bank of New England to establish reserves for potential loan losses, which reduced the bank’s earnings. Subsequently, the bank suffered continued earnings deterioration and had to use its capital to absorb those losses. The Bank of New England’s managers and regulators had few options for maintaining solvency and, ultimately, for minimizing insurance fund losses. The available options included reducing the institution’s inventory of classified loans by selling assets, raising capital through public offerings, or selling the institution to a healthy buyer. The managers’ and regulators’ ability to carry out these strategies was constrained by the region’s economic downturn, since few investors were willing to purchase the assets of problem banks or to inject new capital into them without some form of financial assistance from FDIC. Ultimately, the bank failed, resulting in a loss to the bank insurance fund of $841 million. Other failed banks in the New England area followed a similar pattern, resulting in substantial losses to the insurance fund. Another reason that section 38, used alone, is a limited mechanism for protecting the deposit insurance funds is that most troubled institutions, including some that ultimately fail, do not fall into the undercapitalized categories. Consequently, regulators overseeing even the most troubled institutions generally would not be compelled to initiate mandatory enforcement actions under section 38. We reviewed data compiled by FDIC that showed that many severely troubled institutions in the period from December 1992 to December 1995 did not fall into section 38’s undercapitalized categories. Therefore, these institutions were not subject to the section’s mandatory enforcement actions. On a quarterly basis, FDIC reports on the number of “problem” institutions.
These institutions have regulator-assigned composite ratings of 4 or 5 because they typically have severe asset quality, liquidity, and earnings problems that make them potential candidates for failure. These institutions are also typically subject to more intensive oversight, including more frequent examinations by regulators and more frequent required reporting by the institutions on their financial conditions. As of December 31, 1995, 193 banks and thrifts were on FDIC’s problem institution list. However, only 29 institutions were categorized as undercapitalized under section 38 criteria. We made similar comparisons for 1992 through 1995 and found that only 15 to 24 percent of the problem institutions were categorized as undercapitalized under section 38 criteria (see table 3.1). Moreover, a recent study assessed the effectiveness of the current section 38 capital standards in identifying problem institutions and mandating enforcement actions by applying the section 38 standards to the troubled banks of an earlier period. The study concluded that the majority of banks that experienced financial problems between 1984 and 1989 would not have been subject to the capital-based enforcement actions of section 38, if they had been in effect. For example, the study found that 54 percent of the banks that failed within the subsequent 2 years would have been considered to be well- or adequately capitalized between 1984 and 1989. Thus, even if the section 38 standards had been in place in the 1980s, these troubled banks would not have been subject to section 38’s mandatory restrictions and supervisory actions. The study attributed the limitations that the current section 38 standards have in identifying troubled financial institutions to weaknesses in the risk-based capital ratio used by the regulators. 
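The table 3.1 comparison rests on simple arithmetic that can be reproduced from the figures in the text; the December 1995 numbers, sketched below, land at the low end of the 15-to-24-percent range.

```python
# Share of FDIC "problem" institutions that section 38's capital-based
# triggers could actually reach, using the December 31, 1995, figures
# cited in the text.

problem_institutions = 193  # banks and thrifts rated composite 4 or 5
undercapitalized = 29       # of those, undercapitalized under section 38

share = round(100 * undercapitalized / problem_institutions)
# 29 of 193 is roughly 15 percent; the remaining problem institutions
# were beyond the reach of section 38's mandatory actions.
```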
Specifically, the study stated that the risk-based ratio does not (1) account for the fact that many banks do not adequately reserve for potential loan losses or (2) assign an adequate risk weight to cover the level of adversely classified assets that a bank may have on its books. Although the regulators are in the process of revising the risk-based capital standards, the revisions announced as of September 1996 do not address the two previously mentioned factors. The regulators’ efforts to revise the risk-based capital standards are discussed later in this chapter and in appendix I. The 1994 failure of one of the banks reviewed by the Treasury OIG, Mechanics National Bank of Paramount, California, illustrated some of the limitations of section 38 capital standards. The Treasury OIG found that despite OCC’s aggressive use of section 38 enforcement actions, OCC did not reverse the bank’s decline or prevent material loss to the bank insurance fund. The bank’s failure also demonstrated that severely troubled banks may not be subject to section 38’s restrictions and mandatory enforcement actions for a substantial period. According to the Treasury OIG report, the Mechanics National Bank pursued an aggressive growth strategy between 1988 and 1991 that contributed substantially to its failure. The bank concentrated its loan portfolio in risky service station loans and speculative construction and development projects. Under a Small Business Administration lending program, the bank also developed a significant portfolio of loans that was poorly underwritten and inadequately documented. In 1990, a downturn in the California economy generated a substantial deterioration in the bank’s loan portfolio. In 1991, OCC issued a cease-and-desist order against the bank that required substantial improvements in the bank’s operations and financial condition. Despite the cease-and-desist order, the bank’s asset quality and earnings continued to deteriorate over the next several years. 
The Treasury OIG report said that when section 38 capital standards became effective in December 1992, the Mechanics National Bank had a ratio of classified assets-to-capital of about 309 percent and had experienced losses of $4.3 million during 1992. OCC had just completed an examination of the bank in December 1992, which concluded that the bank was likely to fail. At that time, despite apparent asset quality and earnings problems, the bank’s capital had not deteriorated to the point where it was undercapitalized according to section 38 criteria. The bank’s capital ratios fell within the adequately capitalized category. The bank continued to be categorized as adequately capitalized during the first and second quarters of 1993, despite its high levels of classified assets and mounting losses. In July 1993, OCC reclassified the bank to the undercapitalized level. On January 10, 1994, OCC notified the bank that it was critically undercapitalized because its total capital-to-asset ratio had fallen below 2 percent. The regulators closed the bank in April 1994. Although the Treasury OIG report criticized OCC’s supervision and enforcement activities for the period between 1988 and 1991, the report found that the agency’s use of section 38 enforcement authority during 1993 and 1994 was appropriate. For example, the OIG report highlighted OCC’s use of its section 38 reclassification authority to remove two Mechanics National Bank officers who were thought to be largely responsible for the bank’s problems. OCC also used its section 38 authority to close the bank on April 1, 1994, within 90 days of the notification of its critically undercapitalized status. Nevertheless, OCC’s enforcement actions under section 38 were largely ineffective in minimizing the losses that were already embedded in the bank’s loan portfolio before it fell to the undercapitalized level. 
The bank’s estimated loss to the insurance fund of $37 million represented 22 percent of the bank’s total assets of $167 million. The impact of section 38’s implementation on minimizing losses to the insurance funds is difficult to assess. Between 1985 and 1989, losses to the bank insurance fund ranged from approximately 12 to 23 percent of the assets of failed banks with a 5-year weighted average of about 16 percent. As we reported in 1991, this high rate of losses indicated that regulators were not (1) taking forceful actions that effectively prevented dissipation of assets or (2) closing institutions when they still had some residual value. There have been some signs of improvement since the 1985-to-1989 period as illustrated in table 3.2. During the first 2 full years that section 38 was in effect, 1993 and 1994, the rates of loss were 17 and 10 percent, respectively, for a weighted average of 15 percent. While these loss rates are still significant, it is too early to assess section 38’s long-term effectiveness in reducing losses to the insurance funds relative to preceding years. However, the available evidence does suggest that the implementation of section 38 alone is likely to provide only limited assurance that bank failures will not have significant effects on the insurance funds. As discussed in chapter 2, the full implementation of section 39 began on October 1, 1996. However, the guidelines and regulations developed by regulators to implement section 39 do little to reduce the degree of discretion regulators exercised from 1980 to 1990. In particular, the safety-and-soundness standards contained in the guidelines are general in nature and do not identify specific unsafe or unsound conditions and practices even though the regulators have already established measures that could have served as a basis for more specific requirements.
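The loss-rate figures quoted above follow from straightforward arithmetic. The Mechanics National Bank numbers ($37 million loss on $167 million of assets) come from the text; the weighted-average helper simply shows how a multi-year rate would be formed, and the pairs passed to it are hypothetical.

```python
# Loss to the insurance fund expressed as a percent of failed-bank
# assets, per the convention used in the text.

def loss_rate(loss, assets):
    return 100 * loss / assets

# Mechanics National Bank: $37 million loss on $167 million in assets.
mechanics_rate = round(loss_rate(37, 167))  # about 22 percent

def weighted_average_loss_rate(failures):
    """failures: (loss, assets) pairs; weights each failure by assets."""
    total_loss = sum(loss for loss, _ in failures)
    total_assets = sum(assets for _, assets in failures)
    return 100 * total_loss / total_assets
```

Weighting by failed-bank assets is why a year with a few large, low-loss failures can pull the multi-year average below the simple average of annual rates.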
Moreover, the guidelines and regulations do not require regulators to take corrective action against institutions that do not meet the standards for safety and soundness. In two 1991 reports, we recommended that Congress and regulators develop a formal, regulatory trip wire system that would require prompt and forceful regulatory action tied to specific unsafe banking practices. The trip wire system we envisioned would have been specific enough to provide clear guidance about what actions should be taken to address specified unsafe banking practices and when the actions should be taken. The intent was to increase the likelihood that regulators would take forceful action to stop risky practices before a bank’s capital began to fall and it was too late to do much about the bank’s condition or to limit insurance fund losses. The trip wire system was also to consist of objective criteria defining conditions that would trigger regulatory action. In contrast, the safety-and-soundness standards, contained in the guidelines developed to implement section 39, as amended, consist of broad statements of sound banking principles that are subject to considerable interpretation by the regulators. For example, the standards for asset quality state that the institution should establish and maintain a system to identify problem assets and prevent deterioration of those assets in a manner commensurate with its size and the nature and scope of its operations.
Specifically, the guidelines direct institutions to do the following: conduct periodic asset quality reviews to identify problem assets and estimate the inherent losses of those assets; compare problem asset totals to capital and establish reserves that are sufficient to absorb estimated losses; take appropriate corrective action to resolve problem assets; consider the size and potential risks of material asset concentrations; and provide periodic asset reports containing adequate information for management and the board of directors to assess the level of asset risk. Although the asset quality standards identify general controls and processes the regulators expect institutions to have, the standards do not provide specific, measurable criteria of unsafe conditions or practices that would trigger mandatory enforcement actions. In our 1991 report on deposit insurance reform, we suggested that the classified assets-to-capital ratio could serve as an objective criterion because the ratio is routinely used by bank examiners to identify deteriorating asset quality. For example, we reported that the regulators become increasingly concerned when a bank’s classified assets-to-capital ratio increases to 50 percent or more. Similarly, during the interagency process used to develop the section 39 safety-and-soundness standards, FRS had proposed that the regulators take mandatory enforcement actions when a bank’s classified assets-to-capital ratio reached 75 to 100 percent. However, the regulators decided not to include this requirement after CDRI provided them with the option of omitting quantifiable measures of unsafe and unsound conditions. Without such specific criteria, regulators will continue to exercise wide discretion in determining whether a depository institution’s asset quality deterioration is at a point where enforcement actions are necessary.
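The two ratios cited above (examiner concern at 50 percent, the FRS proposal for mandatory action at 75 to 100 percent) can be read as a rudimentary trip wire of the kind our 1991 reports recommended. The function and its labels below are a sketch of that idea, not language from any regulation.

```python
# Illustrative trip wire based on the classified assets-to-capital
# ratios cited in the text: 50 percent (heightened examiner concern)
# and 75 percent (the low end of the FRS mandatory-action proposal).
# The cut-offs and labels are a sketch, not regulatory text.

def classified_assets_trip_wire(classified_assets, capital):
    ratio = 100 * classified_assets / capital
    if ratio >= 75:
        return ratio, "mandatory-action range (per FRS proposal)"
    if ratio >= 50:
        return ratio, "heightened examiner concern"
    return ratio, "below trip-wire thresholds"
```

Applied to the Mechanics National Bank figures, a 309-percent ratio at year-end 1992 would have fallen deep in the mandatory range well before the bank’s capital category deteriorated.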
Similarly, the section 39-based loan documentation standards do not establish specific criteria for regulators to use to assess an institution’s safety and soundness. The regulators believed that general standards provide an acceptable gauge against which compliance can be measured, while at the same time allowing for differing approaches to loan documentation. However, this approach to loan documentation standards differs from the long-standing approach that the regulators have established in their examination manuals. These standards contain specific loan documentation requirements that examiners are to use in assessing the safety and soundness of depository institutions. For example, real-estate construction loan files are to include current financial statements, inspection reports, and written appraisals. Since the section 39 standards do not contain similar documentation requirements, we believe the standards are open to considerable interpretation and do little to limit the wide discretion regulators have in determining whether banks have adequate loan documentation practices. Furthermore, the loan documentation standards do not provide or state a specific level of noncompliance at which enforcement actions will be required. Although it may be difficult to develop quantifiable criteria for making such enforcement decisions, there are various regulatory “rules of thumb” in place that we believe could serve as the basis for triggering mandatory actions. For example, in its 1988 report on the reasons why banks fail, OCC found that banks with loan documentation problems in 15 to 20 percent or more of their loan portfolios were typically operating in an unsafe and unsound manner. As discussed earlier, CDRI amended the section 39 mandate that regulators require a depository institution to file a compliance plan if the institution is found not to be in compliance with the standards. 
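OCC’s 1988 rule of thumb cited above can be expressed the same way. The 15-percent default cut-off used here is the conservative end of the 15-to-20-percent range and, like the function itself, is illustrative rather than a regulatory standard.

```python
# Sketch of the OCC rule of thumb from its 1988 bank-failure study:
# loan documentation problems in 15 to 20 percent or more of a loan
# portfolio typically signaled unsafe and unsound operation. The
# 15-percent default threshold is an assumption (the low end of
# that range).

def documentation_deficiency_flag(deficient_loans, total_loans,
                                  threshold_pct=15.0):
    pct = 100 * deficient_loans / total_loans
    return pct, pct >= threshold_pct
```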
The new provision allows regulators greater flexibility in deciding whether to impose such a requirement. In the July 10, 1995, Notice of Final Rulemaking, the four regulators (OCC, FRS, FDIC, and OTS) stated that they expect to require a compliance plan from any institution with deficiencies severe enough to threaten the safety and soundness of the institution. However, as discussed in the previous section, regulators have not developed quantifiable criteria or other specific guidance for measuring an institution’s compliance with the section 39 safety-and-soundness standards. Therefore, it is not clear how regulators would determine whether an institution’s noncompliance with generally accepted management principles is “severe” enough to warrant regulatory action. In addition, the implementing regulations do not provide any specific criteria for compliance plans beyond those contained in the section 39 legislation. The regulations merely state that compliance plans should identify steps that the institution is to take to correct the identified problems and the time by which the steps are to be taken. In contrast, section 38 and its implementing regulations establish more specific criteria for CRPs. For example, CRPs must specify capital levels that the institution expects to achieve for each year the plans are in effect. In addition, CRPs must show how the institution will comply with any restrictions on its activities under section 38 and the types of businesses and activities in which the institution will engage. Section 38 requires regulators to reject any CRP unless it contains such information, is based on realistic economic assumptions, and would not appreciably increase risk to the institutions. In the absence of similar criteria, there is less assurance that the compliance plans developed under section 39 will consistently result in the prompt remediation of deficiencies. 
FDICIA contained a number of reforms and provisions that were designed to complement sections 38 and 39. FDICIA’s corporate governance and accounting reform provisions directed depository institutions to improve their corporate governance and the information they report to the regulators. FDICIA also required regulators to revise their risk-based capital standards to ensure that those standards take adequate account of interest rate risk, concentrations of credit, and nontraditional activities. In addition, regulators have stated that their oversight of depository institutions has improved, and they are in the process of modifying their examination approaches to emphasize the monitoring of risk-taking by depository institutions. However, we did not evaluate the effectiveness of these various initiatives because many had not been fully implemented or tested. FDICIA placed a number of new requirements on depository institutions to improve their corporate governance and the information they provide to the regulators. As previously discussed, FDICIA requires all but small (total assets of less than $500 million) depository institutions to submit annual reports to the regulator on the institutions’ financial conditions and management. The report is to include management’s assessment of (1) the effectiveness of the institution’s internal controls and (2) the institution’s compliance with the laws and regulations designated by the regulator. In addition, FDICIA required the institution’s external auditors to report separately on these assertions made by management. Furthermore, FDICIA requires depository institutions to have an independent audit committee composed of outside directors who are independent of institutional management. As we reported in 1993, these new requirements have the potential to significantly enhance the likelihood that regulators will identify emerging problems in banks and thrifts earlier. 
For example, regulators can use the results of an institution’s management assessments and external auditors’ reviews to identify those areas with the greatest risk exposure. This identification process should allow the regulators to improve the quality and efficiency of their examinations. While these FDICIA requirements may result in the early identification of troubled institutions, they do not ensure that regulators will take consistent supervisory actions to address safety-and-soundness problems before they adversely affect an institution’s capital levels. In response to FDICIA section 305 requirements, regulators have recently undertaken revisions of the risk-based capital standards that they use to implement provisions of section 38. Specifically, regulators have revised or are revising the risk-based capital standards to cover risks associated with concentrations of credit, nontraditional financial products, and interest rate movements. As of September 1996, the revisions to the risk-based capital standards announced by the regulators will not change the capital ratios used for section 38 purposes. Instead, regulators plan to use the examination process to identify institutions that have excessive and poorly managed risk exposure, due to concentrations of credit, nontraditional products, or interest rate risk. Regulators said that they will require such institutions to hold greater levels of capital than those required of other institutions. See appendix I for a more detailed discussion of section 305’s requirements and the regulators’ planned revisions to risk-based capital standards. Regulators have stated that they have learned from their experiences in the 1980s and that their approach to depository institution oversight has changed. The regulators said that they have recognized the need to take proactive steps to prevent institutions from engaging in unsafe and unsound practices.
For example, OCC, FRS, and FDIC are developing new examination procedures to better monitor and control bank risk-taking (see app. II). A July 1996 proposal to revise the rating system used by the regulators also reflects the increased emphasis on evaluating an institution’s risk exposure and the quality of its risk management systems. Efforts by the regulators to improve federal oversight through examinations focused on risk management, along with the accounting and corporate governance provisions of FDICIA, could help provide early warning signals of potential safety-and-soundness problems. However, whether this potential for earlier detection will be translated into corrective action is subject to some question because the regulators still have a great deal of discretion under section 39, as amended. Although the section 38 capital standards appear to have played some role in strengthening the condition of the banks and thrifts, other factors have also contributed to this improvement, including lower interest rates and an improving economy. Despite the apparently sound financial condition of the bank and thrift industry, the possibility cannot be ruled out that the current strong performance of the bank and thrift industry is masking management problems or excessive risk-taking that is not being addressed by regulators. For example, the financial press reported in November 1995 and March 1996 that delinquent consumer loans, such as credit card loans, grew considerably during these years and that this growth was partially attributed to lower credit standards. Whether the regulators are more successful in detecting risk management problems and then taking the requisite corrective actions may not be fully known until another downturn in the economy affects the bank and thrift industry. 
In 1991, Congress enacted FDICIA, in part, because of concerns that the exercise of regulatory discretion during the 1980s did not adequately protect the safety and soundness of the banking system or minimize insurance fund losses. FDICIA’s Prompt Regulatory Action provisions were originally enacted to limit regulatory discretion in key areas and to mandate regulatory responses against financial institutions with safety-and-soundness problems. The implementation of section 38 has provided capital categories and mandated actions that regulators should take if banks or thrifts fall into specific categories. However, section 39, as amended, appears to leave regulatory discretion largely unchanged from what existed before the passage of FDICIA. Sections 38 and 39 provide regulators with additional enforcement tools that they can use to obtain corrective action or close institutions with serious capital deficiencies and/or safety-and-soundness problems. These provisions include the enforcement tool that allows regulators to remove bank officials believed to be the cause of the institution’s problems as well as other actions intended to stop the institution from engaging in risky practices. Moreover, section 38 appears to have encouraged institutions to raise additional equity capital and should help prevent capital-deficient institutions from compounding losses. Despite such benefits, severely troubled institutions may not be subject to mandatory restrictions and supervisory actions under section 38 due to its reliance on capital as the basis for regulatory intervention. In addition, section 39 does not require regulators to take actions against poorly managed institutions that have not yet reached the point of capital deterioration. Legislative and regulatory changes have resulted in the guidelines’ taking the form of broad statements of general banking principles rather than as specific measures of unsafe and unsound conditions. 
Furthermore, regulators have not established criteria for determining when an institution is in noncompliance with the guidelines. The implementation of FDICIA’s other provisions and various initiatives undertaken by the regulators to improve their examination process may help to increase the likelihood that regulators will take prompt and corrective regulatory action. FDICIA’s accounting and supervisory reforms provide a structure to strengthen corporate governance and to facilitate early warning of safety-and-soundness problems. In addition, regulators have stated that their approach to supervision has changed since the 1980s, and they are developing new examination procedures to be more proactive in monitoring and assessing bank risk-taking. However, we did not evaluate the effectiveness of these initiatives because many of them have not been fully implemented or tested. Therefore, at present, it is difficult to determine if these initiatives will result in the earlier detection of safety-and-soundness problems and, if so, whether regulators will take strong and forceful actions early enough to prevent or minimize future losses to the insurance funds due to failures. In its comments on our report, OCC agreed with our conclusion that sections 38 and 39 may not always result in prompt and corrective regulatory action. Nonetheless, OCC believes that FDICIA’s combination of section 38 mandatory restrictions and the regulatory discretion retained under section 39 allows regulators to tailor their supervision to suit an institution and its particular problems. The Federal Reserve Board of Governors stated that it had no formal comments but that the report appeared to accurately describe the Federal Reserve’s policies, procedures, and practices with respect to the implementation of FDICIA’s Prompt Regulatory Action provisions, as amended. OTS stated that section 38 effectively encourages institutions to avoid becoming or remaining undercapitalized.
OTS emphasized that the section 39 standards are untested, and it supported the flexibility built into section 39. OTS believes that existing discretionary supervisory and enforcement tools are adequate to deal with most safety-and-soundness issues, apart from capital. FDIC also supported the discretionary and flexible nature of the section 39 safety-and-soundness standards. FDIC pointed out that the overwhelming number of comments that the regulators received on the section 39 standards were in favor of general rather than specific standards. FDIC stated that the section 39 standards adopted by the regulators minimize regulatory burden while recognizing that there is more than one way to operate in a safe-and-sound manner. We do not disagree that there is a need for some degree of regulatory discretion. Rather, we see the issue as one of striking a proper balance between the need for sufficient regulatory discretion to respond to circumstances at a particular institution and the need for certainty for the banking industry about what constitutes an unsafe or unsound condition and what supervisory actions would be expected to result from those conditions. The subjective nature of the standards continues the wide discretion that regulators had in the 1980s over the timing and severity of enforcement actions. Such discretion resulted in the regulators’ not always taking strong actions early enough to address safety-and-soundness problems before they depleted an institution’s capital. However, we note that the implementation of FDICIA along with various regulatory initiatives undertaken since the passage of FDICIA may help in the earlier detection of institutions with safety-and-soundness problems. 
These initiatives, along with the regulators’ willingness to use their various enforcement authorities—including sections 38 and 39—to prevent or minimize potential losses to the deposit insurance funds, will be instrumental in determining whether the proper balance between discretion and certainty has been attained.

GAO reviewed the Federal Reserve System's (FRS) and the Office of the Comptroller of the Currency's (OCC) efforts to implement the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA) prompt regulatory action provisions and the impact of those provisions on federal oversight of depository institutions. GAO found that: (1) regulators have taken the required steps to implement FDICIA prompt regulatory action provisions, but have had to use the additional enforcement powers granted by the provisions against a relatively small number of depository institutions; (2) the improved financial condition of banks and thrifts has allowed them to build their capital levels to the point where only a few institutions were considered undercapitalized according to section 38 standards; (3) OCC and FRS generally took prescribed regulatory actions against the 61 undercapitalized banks reviewed; (4) as of September 1996, regulators had not used their section 39 authority; (5) the final two safety and soundness standards, asset quality and earnings, required to fully implement section 39 became effective on October 1, 1996; (6) the guidelines and regulations issued to date by regulators to implement section 39 do not establish clear, objective criteria for what would be considered unsafe and unsound practices or conditions or link the identification of such conditions to specific mandatory enforcement actions; (7) other FDICIA provisions and initiatives recently announced by regulators should help in the early identification of depository institutions with safety and soundness problems; and (8) the success of these provisions and initiatives will be determined by the regulators' willingness to use their enforcement powers early enough to prevent or minimize losses to the deposit insurance funds.
The Joint Forces Command, in coordination with the Joint Staff, the services, and other combatant commands and DOD agencies, is responsible for creating and exploring new joint war-fighting concepts, as well as for planning, designing, conducting, and assessing a program of joint experimentation. The Command executed its second large-scale field experiment, Millennium Challenge 2002, this year, and it plans another one in 2004 and others every third year thereafter. These experiments are intended to examine how well the concepts previously explored by the Command in smaller venues will work when applied with the emerging concepts being developed by the services and other combatant commands. For example, Millennium Challenge 2002 tested how well U.S. forces fared against a regional power with a sizable conventional military force and so-called “anti-access” capabilities—which can include advanced surface-to-air missiles, antiship missiles and mines, and chemical and biological weapons—and validated the results of earlier experiments to develop the Command’s “rapid decisive” operations concept. The aim of the experiment was to identify changes that can be made during the current decade. (App. I provides a chronology of major events important to joint experimentation.) Over the next several years, the Command’s experimentation will focus primarily on two concepts: one to develop a standing joint force headquarters to improve joint command and control; another to conduct more effective joint operations through “rapid decisive” operations. In November 2001, the Chairman of the Joint Chiefs of Staff directed that the Command make development of the prototype headquarters its highest near-term priority. Additionally, the Command will develop a number of other concepts aimed at specialized issues or operational problems to support the two primary concepts.
Joint experimentation is a continuous process that begins with the development of new operational and organizational concepts that have the potential to significantly improve joint operations (see fig. 1). The Joint Forces Command identifies new joint concepts including those developed by other DOD organizations (such as the Joint Staff, services, and combatant commands) and the private sector and tests them in experiments that range from simple (workshops, seminars, war games, and simulations) to complex (large-scale virtual simulations and “live” field experiments). Appendix II provides additional information on joint experimentation program activities. After analyzing experimentation data, the Command prepares and submits recommendations to the Joint Requirements Oversight Council for review and, ultimately, to the Chairman of the Joint Chiefs of Staff for approval. Before submitting them to the Council, however, the Command submits its recommendations to the Joint Staff for preliminary review and coordination. The recommendations are distributed for review and comment to the Joint Staff directorates, the military services, the combatant commands, and other DOD and federal government organizations. The Council then reviews the recommendations and advises the Chairman of the Joint Chiefs of Staff on whether they should be approved. The changes, if approved, provide the basis for pursuing the capabilities needed to implement a specific operational concept. The Council is also responsible for overseeing the implementation of the recommendations, but it can designate an executive agent, such as the Joint Forces Command, to do so. The Council (or its designated executive agent) is responsible for obtaining the resources needed to implement the recommendations through DOD’s Planning, Programming, and Budgeting System.
The Council also assists the Chairman, in coordination with the combatant commands, the services, and other DOD organizations, to identify and assess joint requirements and priorities for current and future military capabilities. The Council considers requirements (and any proposed changes) for joint capabilities across doctrine, organizations, training, materiel, leadership and education, personnel, and facilities. The Department of the Navy’s budget provides funding to the Joint Forces Command for joint experimentation and other Command missions. In fiscal year 2002, the Command received from the Navy about $103 million for its joint concept development and experimentation program, and it planned to spend about half of this amount for Millennium Challenge 2002. The Command has requested that the Navy provide about $98 million for the program in fiscal year 2003. The Command also provides some funds to the services, the combatant commands, and other DOD organizations for efforts that support its program activities. However, the services fund the operations and support costs of forces participating in joint experimentation. Also, the individual experimentation efforts of the services and the combatant commands are funded from within their own budgets. Since it first began joint experimentation, the Joint Forces Command has broadened and deepened the inclusion of other DOD organizations, federal agencies and departments, the private sector, and allies and coalition partners in its process for capturing and identifying new joint ideas and innovations. Organizations participating in joint experimentation are generally satisfied with current opportunities for their ideas to be considered, and many have increased their participation in the program. However, the participation of different stakeholders—the extent of which is determined by the stakeholder—varies considerably and some would like more visits and contacts with the Command. 
The Command is planning initiatives to increase stakeholder participation in the future, particularly for federal agencies and departments and key allies, but this increase will depend on agency-resource and national-security considerations. As the program gradually evolved, the Joint Forces Command solidified a process to involve the military services, the combatant commands, and other DOD organizations in the planning and execution of its joint experimentation activities. Because future joint operations will involve diplomatic, information, and economic actions, as well as military operations, many DOD, federal, and private organizations and governments participate and provide input into the joint experimentation program (see table 1). The Joint Forces Command functions as a facilitator to solicit and coordinate the involvement of these organizations and incorporate their input, as appropriate, into concept development and experimentation activities. Because the stakeholders determine the extent of their participation in the program, it can vary considerably. However, Joint Forces Command officials stated that participation by the services, the combatant commands, and other DOD organizations has grown steadily since the program was created and continues to grow, as participants become increasingly aware of the strong emphasis that DOD leaders are placing on experimentation. For example, in contrast to the first field experiment in 2000, which had limited involvement by the services, this year’s Millennium Challenge has seen the services more actively involved in early planning, and their individual experiments better coordinated and integrated into the field experiment. Our comparison of participation in the Command’s major field experiment in 2000 with plans for this year’s experiment found a significant increase in the diversity and number of participating organizations and in the number of concepts and initiatives proposed by these organizations. 
For example, the total number of organizations participating in Millennium Challenge 2002 more than doubled from the prior experiment in 2000 (from 12 to 29 organizations), and the total number of service initiatives increased from 4 to 29. The Command provides several ways for organizations to participate and provide inputs: they can review program plans and strategies; attend meetings, seminars, and workshops; take part in experimentation activities; and use various communication tools such as E-mail, Internet, and video conferencing. Additionally, the Command obtains input from the various experimentation and research and development organizations of the military services and of some combatant commands and DOD organizations. The Command also considers the results of Advanced Concept Technology Demonstration efforts, innovations, and recent military operations in developing its program. For example, as a result of its operational experiences in Kosovo, the U.S. European Command identified various joint capability shortfalls in its recent list of Command priorities as a means of guiding the Joint Forces Command in selecting focal areas and activities for experimentation. Further, the Command is taking steps to (1) align its experimentation activities with the schedules of major service and combatant command exercises and (2) adjust its program to allow for earlier consideration of new concepts proposed by the services and the combatant commands in the input process. These adjustments would improve synchronization of experiments with the availability of forces and the training schedules of the services and the combatant commands, allow for greater involvement of these entities in the process, and increase the likelihood that joint requirements are sufficiently considered early in the development of concepts.
Participating organizations also provide input during the annual preparation of two key joint experimentation-program documents: the Chairman of the Joint Chiefs of Staff’s guidance on joint experimentation and the Joint Forces Command’s Joint Concept Development and Experimentation Campaign Plan (see fig. 2). Each year the Chairman provides guidance to the Joint Forces Command to use in developing its Campaign Plan for joint concept development and experimentation. The basis for the Chairman’s guidance is derived from several sources, including strategy and planning documents, studies, and other assessments. Additionally, key DOD stakeholders, including the Chairman’s Joint Warfighting Capability Assessment teams and the Joint Requirements Oversight Council, provide input to the Joint Staff to use in developing the Chairman’s guidance. The Joint Forces Command uses this guidance, with additional input from DOD stakeholders, in preparing its Campaign Plan, which is the primary vehicle for synchronizing its joint experimentation activities and coordinating resources. The Command also solicits and considers input for the Campaign Plan from some other federal agencies and departments, academia, private sector, and allies. After review and endorsement by the combatant commands, the services, and the Joint Requirements Oversight Council, the Chairman approves the Campaign Plan. Officials at the military services, the combatant commands, and other DOD organizations we talked with said they were generally satisfied with the opportunities for input provided by the Joint Forces Command. At the same time, DOD stakeholders have taken various actions to increase their participation. Some, however, would like more contacts and communication with the Command. The Command is responding with some initiatives. Each service, the Joint Staff, the U.S. Special Operations Command, the U.S. 
Space Command, as well as some DOD and federal agencies (such as the National Imagery and Mapping Agency and the National Security Agency) have assigned liaison officers at the Joint Forces Command. However, officials at the Central, Pacific, and Southern Commands stated that their staffing levels currently do not allow them to devote personnel in this role. Combatant command officials indicated that the frequency and number of meetings, conferences, and other events held at the Joint Forces Command often make it difficult for their organizations to attend. The officials believe that as a result, the views and positions of their organizations are not always fully captured in some discussions and deliberations. Some of the combatant commands have or are planning to establish their own joint experimentation offices. Officials from the Pacific and Special Operations Commands stated that although their respective joint experimentation offices are largely focused on supporting their own experimentation efforts, the offices provide a cadre of staff who can better coordinate and participate more consistently in the Joint Forces Command’s joint experimentation program. For example, Pacific Command officials said that their own experimentation efforts to improve the command of joint operations over the past few years have contributed to joint experimentation by providing significant insights for the Joint Forces Command’s development of the standing joint-force headquarters concept. Central Command and Southern Command officials said their Commands have plans to establish similar offices soon. While satisfied with their participation and their ability to provide input into the program, officials at some combatant commands believe that a number of things could be done to improve the program, assuming resources are available. They believe that the Joint Forces Command could increase its visits to and participation in combatant-command activities.
Some of the officials also believe that if the Joint Forces Command assigned liaison officers to their commands, the Command would obtain first-hand knowledge and a better appreciation of the various commands’ individual requirements. These officials believe that such a presence at their commands would demonstrate the Joint Forces Command’s commitment to joint experimentation and would allow for interaction with staff throughout their commands. The Joint Forces Command does not favor doing this because of the cost and the difficulty in providing the staff necessary to fulfill this role. Officials at the Pacific, Central, and Southern Commands also believe that some level of funding should be provided to the combatant commands for their use in supporting individual command and the Joint Forces Command experimentation efforts. Combatant command officials stated that currently, funds from other command activities must be diverted to support these efforts. Out of concern about the need to improve communications and participation in joint experimentation planning, the Joint Forces Command is planning some initiatives such as the following: It plans to create a virtual planning-center site for joint experimentation on its Intranet to provide DOD stakeholders with easily accessible weekly updates to information on planned experiments; participants; goals and objectives; and ongoing experimentation by the Joint Forces Command, the services, the combatant commands, and DOD agencies. It plans to develop the requirements for the site during fall 2002 and to initiate the project soon after. It established Project Alpha—a “think-tank” group—in early 2002 to provide another source of input and outputs. The project will interface with researchers throughout DOD, Department of Energy national laboratories, private industry, and academia to find cutting-edge technologies for inclusion in service and joint experimentation. 
This relationship will provide an opportunity for the Joint Forces Command to leverage the work of these organizations and similarly, for these organizations to gain a better understanding of and include their work in the joint experimentation program. As the joint experimentation program matured, participation by non-DOD federal agencies and departments gradually increased. Participation, however, depends upon the agencies’ desire to be involved and their available resources. Lack of involvement could lead to missed opportunities. And participation by allies and coalition partners has been limited by security concerns. The Joint Forces Command’s input process allows individual federal agencies and departments, such as the Departments of State and Justice, to participate in joint experimentation events as they choose. Interagency participation is improving, according to Command officials. For example, federal agencies and departments are participating in Millennium Challenge 2002 to assist the Command in developing its standing joint- force headquarters concept. However, resource and staffing constraints prevent some agencies and departments from taking part in experiments. For example, according to a Joint Forces Command official, the Department of Transportation and the Central Intelligence Agency decided not to send representatives to Millennium Challenge 2002 because of staffing constraints. Not only could non-DOD agencies provide important insights and contributions to joint operations, but also some important opportunities could be missed if these agencies do not consistently participate in joint experimentation events. While federal agencies and departments are beginning to increase their role in joint experimentation, several service and combatant command officials we spoke with believe that greater involvement is needed because of the role these organizations are likely to have in future joint operations. 
For example, these non-DOD federal agencies and departments would provide support (economic, diplomatic, and information actions) to U.S. military forces in their conduct of operations aimed at defeating an adversary’s war-making capabilities—support that is critical to implementation of the Joint Forces Command’s rapid decisive operations concept. Several DOD (service, combatant command, Office of the Secretary of Defense, and other DOD organizations) officials we spoke with believe that the Joint Forces Command should explore ways to boost the participation and involvement of allies and coalition partners in joint experimentation. Joint Forces Command officials agree and believe that such cooperation would foster a better understanding of allied perspectives, allow the Command to leverage concept development work, expand available capabilities, and facilitate the development of multinational capabilities. The Command recently created a multinational concept-development and experimentation site on its Intranet to facilitate the involvement of allies and coalition partners in joint experimentation. However, some DOD officials believe that the Joint Forces Command should do more because future U.S. military operations will likely be conducted with other countries. The officials stress that other nations’ military personnel should be included in experiments to develop new operational concepts, if these concepts are to be successful. Joint Forces Command officials pointed out, however, that the participation and involvement of other countries are often constrained by restrictions on access to sensitive security information. For example, North Atlantic Treaty Organization countries only participated as observers in Millennium Challenge 2002 because of security information restrictions. The Command, however, plans to develop ways to better handle these restrictions to allow greater participation by other nations in its next major field experiment in 2004. 
Nearly 4 years after the program was established, only three recommendations have flowed from the joint experimentation program, and none of them have been approved. Confusion about proposed changes in guidance regarding the information required for submitting these recommendations has partly delayed their approval. At the time we concluded our review, official guidance on what information should accompany joint experimentation recommendations had not been approved. In addition, several DOD officials expressed concern that the process used to review and approve recommendations, the same as that used for major acquisition programs, may not be the most appropriate for a program whose aim is to integrate changes quickly. However, the officials could not pinpoint any specific impasses in the approval process. The DOD officials are also concerned about potential delays in the integration of new concepts because of the lengthy DOD resource allocation process. The Joint Forces Command submitted one recommendation to the Chairman of the Joint Chiefs of Staff in August 2001 and two more in November 2001 (see table 2). At the time we ended our review, none of the recommendations had been approved. The recommendations to improve the planning and decision-making capabilities of joint forces and provide better training for personnel conducting theater missile defense operations were based on analyses of results of experiments carried out in the first 3 years of joint experimentation. Inputs included two major experiments: Millennium Challenge 2000 (live field experiment in August-September 2000) and the Unified Vision 2001 (virtual simulation experiment in May 2001). The first recommendation was submitted for review just 3 months after the end of the last experiment. According to a Joint Staff official, however, approval of the recommendations has been delayed because Joint Forces Command and Joint Staff officials were confused about proposed changes in guidance. 
In May 2001, the Joint Requirements Oversight Council proposed new guidance, which would require that information on costs and timelines be included in joint experimentation recommendations. Prior guidance did not require such information. Although the recommendations went through preliminary review by the Joint Staff, the omission was not caught until the recommendations were to be scheduled for review by the Joint Requirements Oversight Council. Joint Forces Command officials told us that they were not aware of the change in guidance until that time. When we ended our review, Joint Forces Command officials were working with the Joint Staff to assess how much data could be prepared and when. Command officials said that the recommendations will be resubmitted in fall 2002 together with other recommendations emerging from Millennium Challenge 2002. As a result, no recommendations have yet been reviewed or approved. Also, at the time we ended our review, the draft guidance on joint experimentation recommendations had not been approved and issued. This guidance will become especially important because joint experimentation is expected to produce new recommendations more rapidly as the program matures. The requirement for costs and timeline data is consistent with that of recommendations for major weapon-system-acquisition programs. However, joint experimentation officials at the Joint Forces Command believe that requiring this type of information on joint-experimentation recommendations may not be appropriate because (1) these recommendations are generally intended to convince decision makers to develop particular joint capabilities, not specific weapon systems; (2) the new requirement may slow the preparation of future recommendations; and (3) it will be difficult to provide accurate estimates of costs and timelines for recommendations that span further into the future. It is too early to determine whether these concerns are valid. 
Some DOD officials were also concerned that the system currently used to allocate resources to implement joint-experimentation recommendations—DOD’s Planning, Programming, and Budgeting System—may not be the most efficient because it usually takes a long time to review, approve, and provide funding in future budgets. A recommendation approved in 2002, for example, would not be incorporated into DOD’s budget until 2004 or even later. This delay could result in missed opportunities for more rapid implementation. A Joint Staff official told us that the Joint Staff and the Joint Forces Command recently adjusted the timing of events to better align the joint experimentation process with the Planning, Programming, and Budgeting System. Additionally, DOD established a special fund for the Joint Forces Command to use as a temporary funding source to speed up the implementation of certain critical or time-sensitive recommendations. This source will provide early funding for implementation until funding is provided through DOD’s Planning, Programming, and Budgeting System. However, Joint Forces Command and other DOD officials believe other ways to implement new joint capabilities within the framework of existing budget and oversight practices may need to be considered. DOD has been providing more specific and clearer guidance on its goals, expectations, and priorities for the joint experimentation program. Nevertheless, the management of joint experimentation is missing a number of key elements that are necessary for program success: some roles and responsibilities have not yet been defined; current performance measures are not adequate to assess progress; and the Joint Forces Command lacks strategic planning tools for the program. 
DOD officials stated that the joint experimentation program had difficulty in its first years because guidance was evolving and was not specific: DOD’s transformation goals were not adequately linked to transformation efforts, and roles and responsibilities were not clearly defined. Over time, the Secretary of Defense and the Chairman of the Joint Chiefs of Staff have provided more specific guidance on the goals and expectations for joint experimentation and its contribution to DOD’s transformation efforts. Guidance for joint experimentation has evolved gradually over the program’s nearly 4-year life span, partly because of shifting defense priorities and lack of clarity about the roles of various DOD stakeholders. Roles and responsibilities have also matured with the program. The Secretary of Defense’s 2001 Quadrennial Defense Review Report established six transformation goals, which include improving U.S. capabilities to defend the homeland and other bases of operations, denying enemies sanctuary, and conducting effective information operations. According to DOD officials, the Secretary of Defense’s most recent planning guidance tasked the Joint Forces Command to focus its experimentation on developing new joint operational concepts for these goals. To begin meeting these goals, the Chairman has also provided the Joint Forces Command with clarifying guidance that identified specific areas for the Command to include in its experimentation, such as the development of a standing joint-force headquarters concept and of a prototype to strengthen the conduct of joint operations. The Command has reflected this new guidance in its latest Joint Concept Development and Experimentation Campaign Plan. Additionally, the Secretary of Defense reassigned the Command’s geographic responsibilities to focus it more clearly on its remaining missions, particularly transformation and joint experimentation.
DOD officials at both headquarters and the field believe that the recent guidance begins to provide a better framework for the Joint Forces Command to establish and focus its joint experimentation efforts. Some officials, however, believe that future guidance should further clarify the link between joint experimentation and DOD priorities and the required resources necessary to support joint experimentation. DOD, in its comments to a draft of this report, stated that it expects the Transformation Planning Guidance—currently being prepared by the Office of the Secretary of Defense—will establish the requirements necessary to link experimentation to changes in the force. While roles and responsibilities for DOD organizations are now broadly defined, the new DOD Office of Force Transformation’s role in joint experimentation and its relationship to other stakeholders have not yet been clearly established. The Office’s charter or terms of reference have not been released. DOD plans to issue a directive later this year that will include a charter and description of the Office’s authorities and responsibilities. However, there is still uncertainty about the extent of authority and involvement the Office will have in the joint experimentation program and the Office’s ability to link the program with DOD’s overall transformation efforts. Joint Forces Command and other DOD officials consider having a transformation advocate in the Office of the Secretary of Defense as a beneficial link between the Joint Forces Command’s, the services’, and the combatant commands’ joint experimentation programs and DOD’s overall transformation agenda. According to DOD’s 2001 Quadrennial Defense Review Report, the Office of Force Transformation, created in November 2001, is to play a role in fostering innovation and experimentation and should have an important responsibility for monitoring joint experimentation and for providing the Secretary of Defense with policy recommendations. 
An Office of Force Transformation official told us that the Office will be an advocate for transformation and will help develop guidance and make recommendations on transformation issues to the Secretary of Defense (the Office provided comments on the Secretary’s annual planning guidance and developed instructions for the services on preparing their first transformation road maps). The Office has also decided to take a cautious approach in carrying out its mission because of possible resistance from other DOD organizations, the same official said. The Office plans to offer its assistance to DOD organizations in their transformation efforts and attempt to influence their thinking on key issues, rather than asserting itself directly into their efforts, for example by funding military use of existing private-sector technology to act as a surrogate for evaluating possible concepts, uses, and designs. Joint Forces Command officials stated that as of May 2002, they had had only limited discussions with the Office and had not established any working agreements on how the Office would participate in the joint experimentation program. The Office of Force Transformation has only recently assembled its staff and is beginning to plan its work and establish contacts within DOD and with other organizations. The Office’s budget for fiscal years 2002 and 2003 is about $18 million and $35 million, respectively. DOD’s performance measures (or metrics) for assessing joint experimentation—by measuring only the number of experiments carried out—do not provide a meaningful assessment of the program’s contribution toward meeting its performance goal for military transformation because they are only quantitative. 
Consistent with good management practices and to carry out the purposes of the Government Performance and Results Act of 1993, federal agencies devise results-oriented metrics that provide an assessment of outcomes, that is, the results of programs as measured by the difference they make. In its fiscal year 2000 performance report, the most recent it has issued, DOD described the performance indicators for the joint experimentation program in terms of the number of experiments conducted against a target goal for the prior, current, and following fiscal years. In fiscal year 2000, DOD exceeded its target number of experiments and did not project any shortfalls in meeting its target in the next fiscal year. Although this measure does provide a quantitative assessment of experimental activity, it does not provide a meaningful method for assessing how joint experimentation is helping to advance military transformation. An Office of the Secretary of Defense official stated that DOD recognizes that better performance measures are needed for assessing how joint experimentation advances transformation, as well as for two other metrics currently used to assess its military transformation goal. The official stated that developing such measures is a challenge because joint experimentation does not easily lend itself to traditional measurement methods. For example, most programs consider a failure a negative event, but in joint experimentation a failure can be considered a success if it provides insights or information that is helpful in evaluating new concepts or the use of new technologies. An Office of the Secretary of Defense official told us that the RAND Corporation and the Institute for Defense Analyses recently completed studies to identify possible performance measures for assessing the progress of transformation.
DOD is evaluating them and is preparing the Transformation Planning Guidance to provide more specific information on the priorities, roles, and responsibilities for executing its transformation strategy. The same official stated that the new guidance will include a discussion of the types of performance measures needed for assessing transformation progress or will assign an organization to determine them. In either case, measures will still need to be developed and implemented. DOD plans to issue the new guidance later in 2002 but has not determined how new performance measures would be incorporated into its annual performance report. The Joint Forces Command has not developed the strategic planning tools—a strategic plan, an associated performance plan, and performance- reporting tools—for assessing the performance of the joint experimentation program. Strategic planning is essential for this type of program, especially considering its magnitude and complexity and its potential implications for military transformation. Such planning provides an essential foundation for defining what an organization seeks to accomplish, identifies the strategy it will use to achieve desired results, and then determines—through measurement—how well it is succeeding in reaching results-oriented goals and achieving objectives. Developing strategic-planning tools for the joint experimentation program would also be consistent with the principles set forth in the Government Performance and Results Act of 1993, which is the primary legislative framework for strategic planning in the federal government. The Joint Forces Command prepares an annual Joint Concept Development and Experimentation Campaign Plan that broadly describes the key goals of its program, the strategy for achieving these goals, and the planned activities. 
However, a February 2002 progress report, prepared by the Joint Forces Command’s Joint Experimentation Directorate, on the development of the Directorate’s performance management system indicated that one-fourth of those organizations providing feedback on the Campaign Plan believed that the Plan lacks specificity in terms of the program’s goals and objectives and an associated action plan that outlines the activities to be carried out in order to achieve those goals. Officials we spoke with at the military services, the combatant commands, and the Joint Forces Command all cited the need for more specific and clearer goals, objectives, and performance measures for the program. In the progress report, the Command acknowledged the benefits of strategic planning and the use of this management tool to align its organizational structure, processes, and budget to support the achievement of missions and goals. The report proposed that the Command develop a strategic plan, possibly by modifying its annual Campaign Plan, and subsequently prepare a performance plan and a performance report. Command officials indicated that the basic requirements of a strategic plan could be incorporated into the Campaign Plan, but they were unsure, if such an approach were taken, whether the changes could be made before the annual Campaign Plan is issued later this year. Similarly, the Joint Forces Command has had difficulty in developing specific performance measures for joint experimentation. A Command official stated that the Command has tried to leverage the performance measures developed by other organizations like itself, but found that there is widespread awareness throughout the research and development community, both within and outside DOD, that such measures are needed but do not exist. 
Additionally, a Joint Forces Command official stated that whatever metrics the Command develops must be linked to its mission-essential tasks for joint experimentation and that the Command is currently developing these tasks. At the time we ended our review, the Command had identified six broad areas for which specific metrics need to be developed. These included quality of life, customer relationships, and experimentation process management. After nearly 4 years, the Joint Forces Command’s process for obtaining inputs for the development and execution of DOD’s joint experimentation program has become more inclusive. However, questions continue about whether the program is the successful engine for change envisioned when it was established. Since the program’s inception, only three recommendations have flowed from experimentation activities, and their review, approval, and implementation have been delayed by confusion over a change in guidance that required additional information be included in the recommendations. As a result, no recommendations for change have been approved or implemented to date. To the extent that the draft guidance on what should be submitted with joint experimentation recommendations can be officially approved and issued, future recommendations could be submitted for approval and implementation more quickly. Underscoring the need to finalize the guidance are the recommendations anticipated after this year’s major field experiment, Millennium Challenge 2002. The lack of strategic planning for joint experimentation deprives the Joint Forces Command of necessary tools to effectively manage its program.
Implementation of strategic planning at the Joint Forces Command would create a recurring and continuous cycle of planning, program execution, and reporting and establish a process by which the Command could measure the effectiveness of its activities as well as a means to assess the contributions of those activities to the operational goals and mission of the program. Such planning could also provide a tool—one that is currently missing—to identify strengths and weaknesses in the development and execution of the program and a reference document for the effective oversight and management of the program. Performance measures developed under the Command’s strategic planning could provide the standard for assessing other experimentation efforts throughout DOD, which are also lacking such metrics. The lack of a meaningful performance measure for assessing the contribution of the joint experimentation program to advance DOD’s transformation agenda limits the usefulness and benefit of this management tool to assist congressional and DOD leaders in their decision-making responsibilities. Establishing a “meaningful” joint experimentation performance measure for its annual performance report would provide congressional and DOD leadership a better assessment of the program’s contribution and progress toward advancing transformation. Such a metric would also be consistent with the intent of the Results Act to improve the accountability of federal programs for achieving program results. Because the role and relationships of the Secretary of Defense’s new Office of Force Transformation have not yet been clarified, the Secretary may not be effectively using this office in DOD’s transformation efforts. This office, if given sufficient authority, could provide the Secretary with a civilian oversight function to foster and monitor the joint experimentation program to ensure that it is properly supported and provided resources to advance the DOD’s overall transformation agenda. 
Rectifying these shortcomings is critical in view of the importance that DOD has placed on joint experimentation to identify the future concepts and capabilities for maintaining U.S. military superiority. To improve the management of DOD’s joint experimentation program, we recommend that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to (1) approve and issue guidance that clearly defines the information required to accompany joint experimentation recommendations for the Joint Requirements Oversight Council’s review and approval and (2) require the Commander in Chief of the U.S. Joint Forces Command to develop strategic planning tools to use in managing and periodically assessing the progress of its joint experimentation program. We further recommend that the Secretary of Defense (1) develop both quantitative and qualitative performance measures for joint experimentation in DOD’s annual performance report to provide a better assessment of the program’s contribution to advancing military transformation and (2) clarify the role of the Office of Force Transformation and its relationship to the Chairman of the Joint Chiefs of Staff, the Joint Forces Command, and other key DOD stakeholders in DOD’s joint experimentation program. We received written comments from DOD on a draft of this report, which are included in their entirety as appendix III. DOD agreed with our recommendations and indicated that it expects that a forthcoming Transformation Planning Guidance and subsequent guidance will be responsive to them by clarifying roles and missions across DOD, implementing recommendations for changes, and establishing clear objectives. We believe such strategic guidance from the Secretary of Defense could provide a significant mechanism for better linking and clarifying the importance of the joint experimentation program with DOD’s transformation agenda. DOD also provided technical comments to the draft that were incorporated in the report where appropriate.
To determine the extent to which the Joint Forces Command obtains input from stakeholders and other relevant sources in developing and conducting its joint experimentation activities, we reviewed an array of documents providing information about participants in joint experimentation, including guidance and other policy documents, position papers, fact sheets, reports, and studies of the military services, the combatant commands, the Joint Staff, and other DOD organizations. We also reviewed Joint Forces Command plans and reports. Additionally, we made extensive use of information available on public and DOD Internet web sites. To assess the change in participation by various stakeholders over time, we compared the differences in the numbers of participating organizations and initiatives provided by these organizations between the Joint Forces Command’s first two major field experiments in 2000 and 2002 (Millennium Challenge 2000 and Millennium Challenge 2002). We conducted discussions with officials at five combatant commands, the Joint Staff, the military services, and other DOD organizations, such as the Joint Advanced Warfighting Program and the Defense Advanced Research Projects Agency. Appendix IV lists the principal organizations and offices where we performed work. At the Joint Forces Command, we discussed with joint experimentation officials the process for soliciting and incorporating inputs for joint experimentation from the military services and the combatant commands. We also attended conferences and other sessions hosted by the Joint Forces Command to observe and learn about joint experimentation participants and their contributions and coordination. For example, we attended sessions for the Command’s preparation of its annual Joint Concept Development and Experimentation Campaign Plan and planning for this year’s Millennium Challenge experiment. 
With officials from each of the services and the combatant commands, we discussed perceptions of the effectiveness of coordination and participation in joint experimentation. We also obtained observations about participants’ involvement from several defense experts who track joint experimentation and military transformation. Although we did not include a specific assessment of the individual experimentation efforts of the services and combatant commands, we did discuss with service and command officials how their efforts were coordinated and integrated into joint experimentation. We also did not determine the extent that individual inputs obtained from various participating organizations were considered and incorporated into the joint experimentation program. To determine the extent to which recommendations flowing from the joint experimentation process have been approved and implemented, we reviewed and analyzed data that tracked the progress of the first three joint experimentation recommendations submitted by the Joint Forces Command. We also obtained and analyzed relevant guidance and held discussions with Joint Staff, Joint Forces Command, and Office of the Secretary of Defense officials on the Joint Requirements Oversight Council process for reviewing and approving joint experimentation recommendations. We also discussed issues relating to implementation of joint experimentation recommendations through DOD’s Planning, Programming, and Budgeting System. To assess whether key management elements, such as policy, organization, and resources, were in place for the program, we conducted a comprehensive review of current legislative, policy, planning, and guidance documents and reports and studies. We used the principles laid out in the Government Performance and Results Act of 1993 as an additional benchmark for assessing the adequacy of performance measures established for the program and of tools used to manage the program. 
We also discussed the status and evolution of joint experimentation oversight and management, including office roles and responsibilities and joint experimentation metrics, with officials at the Joint Forces Command, the Joint Staff, the services, the combatant commands, the Office of the Secretary of Defense, the Office of Force Transformation, and other DOD organizations. Several defense experts who follow joint experimentation and military transformation discussed with us joint experimentation oversight and management and gave us their impressions regarding current joint experimentation management practices. Our review was conducted from October 2001 through May 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the Commander in Chief, U.S. Joint Forces Command. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Richard G. Payne at (757) 552-8119 if you or your staff have any questions concerning this report. Key contacts and contributors to this report are listed in appendix V.

Key events and their relevance to joint experimentation:

Event: Chairman of the Joint Chiefs of Staff issued Joint Vision 2010.
Relevance to joint experimentation: This vision of future war fighting provides a conceptual template for the Department of Defense’s (DOD) transformation efforts across all elements of the armed forces.

Event: DOD’s Report of the Quadrennial Defense Review issued.
Relevance: Report discussed the importance of preparing for future national security challenges. It concluded that DOD needed to institutionalize innovative investigations, such as war-fighting experiments, to ensure future concepts and capabilities are successfully integrated into the forces in a timely manner.

Event: Secretary of Defense designated Commander in Chief, U.S. Joint Forces Command, as executive agent for joint experimentation.
Relevance: The Secretary of Defense tasked the Joint Forces Command to design and conduct joint war-fighting experimentation to explore, demonstrate, and evaluate joint war-fighting concepts and capabilities.

Event: Joint Advanced Warfighting Program established.
Relevance: DOD established the program at the Institute for Defense Analyses to serve as a catalyst for achieving the objectives of Joint Vision 2010 (and later Joint Vision 2020). To that end, the program is to develop and explore breakthrough operational concepts and capabilities that support DOD’s transformation goals.

Event: Joint concept development and experimentation program initiated.
Relevance: Joint Forces Command assumed responsibility as the executive agent for joint experimentation.

Event: Joint Advanced Warfighting Program conducted the first joint experiment for Joint Forces Command.
Relevance: An experiment—J9901—that investigated approaches for attacking critical mobile targets. The experiment allowed the Joint Forces Command to begin its learning process on how to conduct joint experimentation.

Event: Report of the Defense Science Board Task Force on DOD Warfighting Transformation issued.
Relevance: Report proposed several recommendations to promote military transformation.

Event: Chairman of the Joint Chiefs of Staff issued Joint Vision 2020.
Relevance: Updated vision statement described the joint war-fighting capabilities required through 2020.

Event: Millennium Challenge 2000 major field experiment conducted.
Relevance: The first major field experiment coordinated by the Joint Forces Command among the services and other stakeholders.

Event: Chairman of the Joint Chiefs of Staff issued updated Joint Vision Implementation Master Plan.
Relevance: Guidance described the process for generation, coordination, approval, and implementation of recommendations emerging from joint experimentation and defined the roles and responsibilities of DOD stakeholders.

Event: Transformation Study Report: Transforming Military Operational Capabilities issued.
Relevance: Study conducted for the Secretary of Defense to identify capabilities needed by U.S. forces to meet the twenty-first century security environment. Made several recommendations directed at improving joint experimentation.

Event: Joint Forces Command conducted Unified Vision 2001 experiment.
Relevance: A major joint experiment—largely modeling and simulation—conducted to refine and explore several war-fighting concepts, such as “rapid decisive” operations.

Event: Secretary of Defense’s planning guidance issued.
Relevance: Required studies by defense agencies and the Joint Staff to develop transformation road maps and a standing-joint-force headquarters prototype.

Event: DOD’s Quadrennial Defense Review Report issued.
Relevance: The report established priorities and identified major goals for transforming the Armed Forces to meet future challenges. It called for new operational concepts, advanced technological capabilities, and an increased emphasis on joint organizations, experimentation, and training.

Event: Chairman of the Joint Chiefs of Staff issued joint experimentation guidance.
Relevance: The guidance directed the Joint Forces Command to focus its near-term experimentation on developing a standing joint force headquarters prototype.

Event: Office of Force Transformation established.
Relevance: Office assists the Secretary of Defense in identifying strategy and policy, and developing guidance for transformation.

Event: Unified Command Plan 2002 issued.
Relevance: Plan reduced the number of missions assigned to the Joint Forces Command to allow the Command to devote more attention to its remaining missions such as joint experimentation.

Event: Secretary of Defense’s planning guidance issued.
Relevance: The guidance directed the Joint Forces Command to develop new joint concepts that focus on the six transformation goals set forth in the 2001 Quadrennial Defense Review Report.

Event: Joint Forces Command conducted Millennium Challenge 2002.
Relevance: Second major field experiment conducted to culminate a series of experiments to assess “how” to do rapid decisive operations in this decade.

The Joint Forces Command uses various types of assessment activities to develop, refine, and validate joint concepts and associated capabilities. As shown in figure 3, the Command begins to move through the five joint concept development phases by conducting workshops, seminars, and war games to develop information and identify possible areas to explore in developing new concepts and associated capabilities and then uses simulated or live experiment events to confirm, refute, or modify them. These activities vary in scale and frequency, but each activity becomes larger and more complex. They can involve a small group of retired flag officers and academics, up to 100 planners, operators, and technology experts, or several thousand in the field. Near the end of the process, the Command will conduct a large-scale simulation experiment (such as Unified Vision 2001), followed by a major field experiment (such as Millennium Challenge 2002). The process continuously repeats itself to identify additional new concepts and capabilities. Table 3 provides additional information about the characteristics, scale, and frequency of these and other associated activities and experiments.
Principal organizations and offices where we performed work:

Office of the Secretary of Defense, Program Analysis and Evaluation
Office of the Under Secretary of Defense for Policy
Office of the Under Secretary of Defense for Acquisition, Technology,
Joint Advanced Warfighting Program
Defense Advanced Research Projects Agency
Office of Force Transformation
Operational Plans and Interoperability Directorate
Joint Vision and Transformation Division
Command, Control, Communications, and Computers Directorate
Force Structure, Resources, and Assessment Directorate
Directorate of Training
Directorate of Integration
Directorate for Strategy, Concepts, and Doctrine
Office of the Deputy Chief of Naval Operations for Warfare
Marine Corps Combat Development Command
Department of the Air Force
Booz Allen Hamilton
The Carlyle Group
Center for Strategic and Budgetary Assessments
Hicks & Associates, Inc.

In addition to the individuals named above, Carol R. Schuster, Mark J. Wielgoszynski, John R. Beauchamp, Kimberley A. Ebner, Lauren S. Johnson, and Stefano Petrucci made key contributions to this report.

Summary: The Department of Defense (DOD) considers the transformation of the U.S. military a strategic imperative to meet the security challenges of the new century. In October 1998, DOD established a joint concept development and experimentation program to provide the engine of change for this transformation. In the nearly 4 years since becoming the executive agent for joint concept development and experimentation, the Joint Forces Command has increased the participation of key DOD stakeholders--the military services, the combatant commands, and other organizations and agencies--in its experimentation activities. The Command has also expanded the participation of federal agencies and departments, academia, the private sector, and some foreign allies. No recommendations flowing from joint experimentation have been approved or implemented.
Although the Joint Forces Command issued three recommendations nearly a year ago, they were not approved by the Joint Requirements Oversight Council because of confusion among the Joint Staff and the Joint Forces Command about a proposed change in guidance that required additional data be included when submitting these recommendations. Although DOD has been providing more specific and clearer guidance for joint experimentation, DOD and the Joint Forces Command are missing some key management elements that are generally considered necessary for successful program management. |
Travel and transportation expenses for transferred employees, new appointees, or student trainees, including moving expenses and other aspects of relocation programs, are authorized by 5 U.S.C. §§ 5721-5739. Agencies are authorized to pay the expenses for the sale of a current employee’s residence if it is in the interest of the government. Agencies are also authorized to hire contractors to administer these services. Agencies contract with relocation management companies to manage home sale assistance. These companies either purchase or facilitate the purchase of a relocating employee’s home. This allows agencies to relocate employees quickly, without the employee facing a financial burden for maintaining a home in both the old and the newly assigned duty station. Home sale assistance can also be used to address mission critical skills occupations, that is, occupations with one or both of the following: a staffing gap, in which an agency has an insufficient number of individuals to complete its work, or a competency gap, in which an agency has individuals without the appropriate skills, abilities, or behaviors to successfully perform the work. Agencies can provide relocating employees with home sale assistance through the appraised value offer (AVO), Amended Value Sale (AVS), and Buyer Value Option (BVO). Under AVO, the relocation management company buys an employee’s home for its appraised value if it cannot be sold during a stated period of time. A specified number of appraisers determine the value of the home, and the average of their appraisals is the appraised value. This provides the relocating employee earlier access to the equity from the former home that can be used toward a home at the new duty station. AVS allows an employee approved for AVO to find a buyer willing to pay a higher price than the appraised value of the home before the employee has accepted the appraised value offer from the relocation management company.
Once the employee receives a bona fide offer, the employee can sell the house; if the offer falls through, the relocation management company purchases the house for the offered price. Under BVO, the relocation management company purchases an employee’s home after a bona fide offer from a buyer has been made. According to GSA officials, appraisals, which can cost up to $3,000, are typically conducted for BVO only after the employee has been marketing the home for 6 months. In fiscal year 2015, the average fees for federal agencies, including VA, using the GSA contract described below were more than twice as high for AVO as for AVS and BVO. Specifically, the average fees were 25 percent for AVO, 11 percent for AVS, and 10 percent for BVO. Similarly, VA’s fees were also more than twice as high for AVO as for AVS and BVO. The fees for each are a percentage of the sales price of the home. In fiscal year 2015, about 60 percent of homes sold via GSA’s contract were AVS or BVO and the remainder were AVO. In fiscal year 2015, about 17 percent of homes sold under VA’s home sale program were AVO and the others were AVS or BVO. GSA’s role in the employee relocation process includes issuing regulations that apply to all federal agencies, managing a contract that relocation management companies and agencies can use, and providing assistance and guidance to agencies. GSA issues the Federal Travel Regulation (regulation), which includes travel, transportation, and relocation policies, rules for relocation allowances, and agency reporting requirements to GSA. GSA has specific authority to issue regulations governing travel and transportation expenses, including relocation allowances. The regulation also outlines employee eligibility requirements, agency responsibilities (including rules for setting internal policies before authorizing relocation allowances), the timing of authorization processes, and who can authorize and approve relocation expenses.
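As a rough illustration of the fee arithmetic above, the fiscal year 2015 average fee rates (25 percent for AVO, 11 percent for AVS, and 10 percent for BVO) can be applied to a hypothetical sale price; the function names, the example appraisal figures, and treating the averages as fixed rates are assumptions for illustration, not part of the GAO analysis.

```python
# Illustrative sketch only: applies the fiscal year 2015 average fee rates
# reported above to a hypothetical home sale. Names and figures are
# assumptions for illustration.

FY2015_AVG_FEE_RATES = {
    "AVO": 0.25,  # appraised value offer
    "AVS": 0.11,  # Amended Value Sale
    "BVO": 0.10,  # Buyer Value Option
}

def appraised_value(appraisals):
    """Under AVO, the appraised value is the average of the appraisals."""
    return sum(appraisals) / len(appraisals)

def home_sale_fee(sale_price, program):
    """Fee paid by the agency: a percentage of the home's sales price."""
    return sale_price * FY2015_AVG_FEE_RATES[program]

# A home appraised at (250,000 + 260,000) / 2 = 255,000:
price = appraised_value([250_000, 260_000])
for program in ("AVO", "AVS", "BVO"):
    print(program, round(home_sale_fee(price, program)))
```

On a given sale price, the AVO fee under these rates is more than twice the AVS or BVO fee, which is the cost difference the report highlights.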
In addition, agencies are required to report relocation activities to GSA if they spend more than $5 million a year on travel and transportation, including relocation expenses. Ultimately, however, GSA officials stated that GSA does not have enforcement authority over agency compliance with the regulation and can only issue non-binding regulation guidance. According to GSA officials, GSA works with industry experts and agency representatives to develop a contract for home sale assistance that agencies can use to work with relocation management companies to provide home sale assistance to employees. The contract includes vendor requirements such as a statement of work. Within the confines of the contract, agencies can tailor relocation assistance requirements to fit their needs. GSA also provides guidance and assistance that is available to all agencies in three ways, according to GSA officials: (1) GSA hosts bi-monthly agency teleconferences, (2) GSA hosts an annual forum, and (3) GSA provides one-on-one assistance to agencies. In addition, according to Office of Personnel Management officials, the Office of Personnel Management plays a relatively minor role in home sales, and federal agencies are not required to report to the Office of Personnel Management on home sales and their use of related relocations. The Office of Personnel Management has a review and oversight role over agencies offering relocation programs if federal guidelines are not followed, and Office of Personnel Management officials stated that they had not seen documentation of the use of AVO in their reviews of agencies’ personnel files. VA has a process both for approving the use of AVO and for employees’ participation. In late 2016, VA clarified the AVO approval process by stating that approval must be obtained before initiating recruitment efforts. VA requires a written justification for offering AVO in a job announcement.
The justification must be based on the critical need for the position and the difficulty in recruiting for the position without offering AVO, substantiated by recent unsuccessful recruitment efforts. This policy is new since 2015, according to VA officials. The decision to use AVO is to be made by the hiring manager in consultation with the human resource specialist. The human resource specialist is to provide consultation to help determine whether the position is designated as difficult to fill or will meet a critical need. The job opportunity announcement is to clearly state whether AVO is or is not authorized. In addition, multiple employees are responsible for making sure that the approval process is correctly implemented, including the hiring official at the employee’s new post, the human resources office, and the assigned approving officials. There is also a process for employees’ participation. Employees authorized to use AVO are required to participate in home sale counseling provided by the relocation contractor and cannot list their home until their travel authorization has been approved. According to VA officials, counseling includes asking employees a series of questions to determine if their home is eligible for participation, such as whether the home is the employee’s current residence. Employees are also required to list their homes for sale within 90 days of initiation with the relocation services contractor. After the relocation contractor provides the appraised value of the home, employees have 60 calendar days to either decline or accept the offer if an offer is not made by an outside buyer. The employee is also required to meet marketing and inspection requirements to accept the appraised value offer. In addition, the regulations require a service agreement that specifies the obligated period of service after relocation during which the employee must remain in government service in order to avoid incurring a debt to the government.
If a service agreement is violated (other than for reasons beyond the employee’s control that are accepted by the agency), the employee would be required to reimburse all costs that the agency had paid toward relocation expenses, including the withholding tax allowance and relocation income tax allowance. As shown in figure 1, federal agencies’ (including VA’s) spending on AVO and the number of homes bought through GSA’s contract varied between fiscal years 2012 and 2016. According to GSA officials, about 80 percent of all federal agencies’ home sale transactions, including VA’s, are done through GSA’s contract. GSA officials said that the variation from fiscal year 2012 to 2016 was a result of changing agency relocation needs from year to year to meet mission requirements, fluctuating real estate markets, and the location and value of the homes. As shown in figure 2, VA’s spending on AVO between fiscal years 2012 and 2016 also varied. It dropped from a high of over $3.5 million and 51 home sale transactions in fiscal year 2014 to a low of about $80,000 and 1 home sale transaction in fiscal year 2016. VA officials stated that more was spent on AVO in fiscal year 2014 because the fees for AVO were higher that year and home sale prices increased as real estate markets recovered. The sharp decline in VA’s home sale count and expenditures in fiscal year 2016 is due to VA’s suspension of AVO in October 2015 after the VA Inspector General investigation. VA’s fiscal year 2016 appropriations prohibited, among other things, the use of funds for AVO for Senior Executive Service employees unless certain conditions were met, a waiver from the Secretary was obtained, and Congress was notified within 15 days. The one employee for whom VA used fiscal year 2016 funds was not in the Senior Executive Service, so the statutory prohibition was not applicable.
Most of the 20 agencies with an operational AVO that completed the questionnaire we sent them reported that they rely on AVO policies that include two types of internal controls. An internal control is a process effected by an entity’s oversight body, management, and other personnel that provides reasonable assurance the entity’s objectives will be achieved. In the context of AVO, policies that include two types of internal controls are critical. First, transaction control activities are actions built directly into operational processes to support the entity in achieving its objectives and addressing related risks. For example, 18 of the 20 agencies reported the AVO approval process must be complete before payments are made. In addition, 17 of the 20 agencies reported the approval process for AVO is included in the agency’s written policies. Second, assessing and responding to misconduct risks includes considering how misuse of authority or position can be used for personal gain. For example, 19 of the 20 agencies reported their AVO had safeguards to prevent AVO from being used for the personal gain of employees. For instance, an agency could strengthen the approval process for its permanent change of station program by requiring an independent review to ensure moves and expenses are appropriate and justified. While the 20 agencies with an operational AVO that completed our questionnaire reported they had not examined whether AVO improved recruitment or retention of staff during fiscal years 2012 to 2016, 12 of the 20 agencies anecdotally provided examples of how AVO has been beneficial. For example, 4 agencies reported AVO minimized the financial risks or burdens of employees who are relocating, such as not having two mortgages. Four other agencies reported AVO assisted them in recruiting the most qualified employees or assisted them in recruiting and retaining employees for hard-to-fill positions.
Four agencies reported AVO assists in filling positions in rural areas or areas with depressed real estate markets. In addition, 7 of the 20 agencies with an operational AVO stated they use AVO for mission critical skills, such as medical officers, engineers, and courthouse protection positions. Fourteen of the 20 agencies with an operational AVO reported GSA had provided assistance or guidance to them. Two of the 14 agencies also reported additional assistance from GSA would be helpful. One agency reported it would like training for individuals who administer AVO and another agency reported it would like assistance on negotiating lower fees. In addition, 2 agencies with an operational AVO described the following practices they implemented based on lessons learned from their administration of AVO. One agency stated that providing pre-clearance for employees to participate in AVO can save the agency time initiating AVO. This agency started using a pre-clearance form that asks employees questions to ensure they meet basic eligibility criteria, for example whether or not the house is under foreclosure or has a lien on it. If the house does not qualify, the agency is spared the time spent initiating AVO. The agency has not quantitatively tracked the effect of this pre-clearance, but stated that it found it helpful. The agency plans to look for ways to improve the pre-clearance form. Another agency stated employees need coaxing to find buyers for their homes and depend on AVO to avoid carrying two mortgages. This agency instituted an optional program that provides relocating employees with housing allowances for their move as well as an increased bonus for selling the home to an outside buyer, if the employee keeps the home on the market after the AVO offer is provided. The agency plans to continue developing more effective communication for employees to understand relocation assistance and promote AVS. This pilot program was approved by GSA.
GSA officials told us VA or another agency could apply to implement a similar, but not identical, pilot program to determine whether it yields comparable benefits or cost savings in the interest of the government. However, according to GSA officials, after a pilot program is determined to be successful, GSA’s Office of Government-wide Policy could choose to draft a legislative proposal to Congress, requesting to statutorily permit other agencies to implement the same program. In interviews, GSA officials noted the following good practices, based on six lessons learned in GSA’s role issuing regulations, managing the contract that agencies can use, and providing assistance and guidance, which they believe agencies should incorporate into their AVO:

- When mission allows, agencies should implement the more cost-effective BVO home sale assistance before referring a home to a more expensive option, such as AVO.
- Pre-decision counseling helps minimize the number of employees who start the home sale process and then drop out.
- Agencies should cap the home listing price at no more than 110 percent of the appraised value. Houses priced too high will have few interested buyers and will stay on the market longer, thus increasing an agency’s costs.
- A relocating employee should start working with the agency’s relocation management company early in the home sale process rather than after the employee has been unable to sell the home. Agencies increase their potential for more cost-effective home sale transactions when homes are marketed effectively from the outset.
- Agencies can reduce service fees by requiring use of the relocation management company network real estate agents when they list the house. The network real estate agent will then pay the relocation management company a referral fee, which results in lower costs for the agency.
- Regular meetings with relocation management companies to review the status of each transferee keep agencies apprised of what the agency can do to encourage transferees to be more engaged in selling their homes. This results in higher sales and lower contractor fees.

We examined the extent to which VA’s AVO included the good practices based on lessons learned from GSA. We found that VA’s AVO included all of these practices. For example, VA offers pre-decision counseling and VA employees work with the relocation company before their home is put on the market. In addition, before participation in AVO, VA asks the employee questions to ensure the home to be sold meets basic criteria. VA conducted two recent reviews that had recommendations related to AVO. According to VA officials, the two reviews resulted in VA updating its AVO approval process and adding the updated process to VA’s human resources handbook on aids to recruitment. VA also updated its financial policy in December 2016 to include an annual review of historical data related to VA’s home sale program that will include examining home sale transaction costs and median home sale values. As shown in table 1, VA implemented or closed all of the reviews’ recommendations related to AVO. In addition, VA implemented new AVO policies that include internal controls since fiscal year 2016, when VA suspended AVO, as shown in table 2. VA’s approval process for AVO is a case-by-case approval granted by different officials for Senior Executive Service and non-Senior Executive Service employees. For Senior Executive Service employees, the policy is now that a secretarial waiver is needed and Congress is notified of the need to fill the position.
The Senior Executive Service waiver provision and congressional notification requirement were enacted in VA’s fiscal year 2016 appropriation as applicable to funds appropriated by that act for employees of the department in a senior executive position participating in the Home Marketing Incentive Program or AVO. However, there is no current statutory mandate for VA’s policy regarding the Senior Executive Service waiver and congressional notification requirement. For non-Senior Executive Service employees, under secretaries, assistant secretaries, and other key officials serve as the approving officials. VA officials stated that as a result of the Inspector General’s 2015 report, they identified a need for additional training for human resources officials on relocation and recruitment, including AVO. VA officials told us they developed a training module on relocation and recruitment, and a webinar using the module was conducted in March 2017. Under VA’s policy, VA collects some data on the use of AVO, including how much is spent and the number of completed AVO transactions. VA also collects data on whether the employees who used AVO were in the Senior Executive Service and on the employees’ occupational codes. For example, VA reported that it had 38 completed AVO transactions in fiscal year 2015, 9 of which were for Senior Executive Service employees. We compared the occupational codes that VA identified for each of the 38 completed AVO transactions to a list of VA’s mission critical occupations that VA provided. Our analysis found that 10 of the 38 completed AVO transactions were for mission critical occupations, three of which were for Senior Executive Service employees. The three Senior Executive Service employees were in three occupational codes: medical officer, contracting, and nurse.
We also found that an additional 10 of the 38 completed AVO transactions were for core mission workers, which VA stated are occupations that perform the core work of an organization, but these occupations are not on the VA mission critical occupations list. These employees were in two occupational codes: program management and management and program analysis. The remaining 18 completed AVO transactions were in seven different occupational codes, which included health system administration, social science, and realty. However, VA is not tracking data on whether AVO improves recruitment and retention of employees. VA officials stated AVO has been most beneficial for the recruitment and retention of hard-to-fill Senior Executive Service positions, including positions in locations that were rural, had a high cost of living, or had physician or nursing shortages. A position could also be hard to fill because of turnover trends and availability of qualified talent. In addition, VA officials stated that a position’s classification as a mission critical skills occupation is one factor VA uses in determining whether or not AVO should be offered and that they used AVO as an incentive to move to other mission critical positions within the agency. However, VA is not tracking the data that would help it determine whether the use of AVO is improving recruitment and retention of employees specifically in hard-to-fill Senior Executive Service positions or mission critical skills occupations. Federal internal control standards suggest management should obtain reliable data that can be used for effective monitoring. It is also important to establish the necessary data to track a program’s effectiveness and to establish a baseline to measure the changes over time to assess the program in the future. In addition, reliable data are crucial for VA to manage its resources effectively. 
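The occupational-code comparison described above amounts to tallying each completed transaction's code against a mission-critical list and a core-mission list. The sketch below illustrates that kind of tally; the codes, list contents, and names are invented placeholders for illustration, not VA's actual occupational data.

```python
# Illustrative sketch of the comparison described above: counting completed
# AVO transactions by whether their occupational code appears on a
# mission-critical or core-mission list. All codes and lists below are
# invented placeholders, not VA's actual data.

MISSION_CRITICAL = {"0602"}       # placeholder code (e.g., a medical series)
CORE_MISSION = {"0340", "0343"}   # placeholder codes (e.g., program mgmt.)

def categorize(transactions: list[str]) -> dict[str, int]:
    """Count transactions as mission critical, core mission, or other."""
    counts = {"mission_critical": 0, "core_mission": 0, "other": 0}
    for code in transactions:
        if code in MISSION_CRITICAL:
            counts["mission_critical"] += 1
        elif code in CORE_MISSION:
            counts["core_mission"] += 1
        else:
            counts["other"] += 1
    return counts

sample = ["0602", "0340", "1170", "0602", "0343"]
print(categorize(sample))  # {'mission_critical': 2, 'core_mission': 2, 'other': 1}
```

The same three-way split yields the report's 10 mission-critical, 10 core-mission, and 18 other transactions when run over VA's actual 38 codes.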
We have previously reported that flat or declining budgets will continue to necessitate workforce adjustments across government. However, VA stated it is not tracking data on whether the use of AVO improves recruitment and retention of employees because it does not have the resources or capabilities to do so. As VA continues to seek ways to address recruitment and retention challenges, collecting such data could be useful in identifying trends and options for targeting certain occupations or skill sets that may improve the agency’s use of home sales to support relocation. Without tracking these data, VA will be unable to determine whether the use of AVO is improving recruitment and retention. Employee relocation, including home sale assistance, can help agencies position skilled employees optimally and recruit and retain employees. VA’s Inspector General found instances of officials misusing AVO to relocate for their personal benefit rather than in the interest of the government. VA has taken actions to strengthen AVO’s internal controls, in part due to the Inspector General’s report. VA believes that using AVO is beneficial specifically for hard-to-fill Senior Executive Service positions and uses AVO as an incentive for mission critical skills occupations. However, VA does not track data that can help it determine whether use of AVO is improving retention and recruitment for these positions. As VA continues to seek ways to address recruitment and retention challenges, such data could be useful in identifying trends and options for targeting certain occupations or skill sets that may improve the agency’s use of home sales to support relocation. Without tracking these data, VA will be unable to determine whether the use of AVO has improved recruitment and retention. We recommend that the Secretary of Veterans Affairs track data that can help VA determine whether AVO improves recruitment and retention.
We provided a draft of this report for review and comment to the Secretary of VA and the Acting Administrator of GSA. In its written comments, which are reproduced in appendix III, VA concurred with our recommendation and said it is working to improve reporting capabilities that will be beneficial in analyzing AVO data. GSA did not comment on the findings. VA and GSA also provided technical comments, which we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of VA, the Acting Administrator of GSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of this engagement were to review the administration of Appraised Value Offer (AVO) at the Department of Veterans Affairs (VA) and government-wide. Specifically, this report (1) describes federal agencies’ and VA’s use of AVO; (2) describes federal agencies’ key AVO internal controls, evaluations of whether AVO improved recruitment and retention of employees, and lessons learned; and (3) analyzes the extent to which VA has implemented additional internal controls for AVO since 2015 and has evaluated whether the use of AVO improved the recruitment and retention of employees. To address our objectives, we reviewed federal statutes and regulations related to relocation programs, conducted a literature review, and reviewed our prior work on relocation and mission-critical skills. We reviewed Title 5 of the U.S. Code related to relocation, including agency authority, roles, and responsibilities in administering AVO. 
We also reviewed the Federal Travel Regulation at Title 41 of the Code of Federal Regulations, and VA’s appropriations from fiscal years 2015 to 2017. We also conducted a literature review to find reports and articles about VA and federal use of AVO. We reviewed relevant documents from the General Services Administration (GSA) and Office of Personnel Management and interviewed officials from these agencies about their roles in agency use of relocation programs generally and on AVO specifically. We reviewed GSA’s guidance on agency relocation programs. We also interviewed GSA officials about their role managing the contract with relocation management companies that federal agencies can use and about providing agencies guidance and assistance in administering relocation programs. In addition, we interviewed Office of Personnel Management officials about their review and oversight role for agencies offering relocation programs. To describe how federal agencies and VA use AVO, we reviewed documents from VA and GSA and interviewed VA and GSA officials. We reviewed data on AVO transactions completed through GSA’s contract, which includes VA, in fiscal years 2012 to 2016. According to GSA officials, about 80 percent of federal agencies’ home sale relocation transactions occur through GSA’s contract with relocation management companies. GSA stated that the number of agencies that use its contract for home sales can differ from year to year. In addition, we reviewed VA’s data on completed AVO transactions in fiscal years 2012 to 2016. To assess the reliability of the GSA and VA data on completed AVO transactions, we interviewed GSA and VA officials and reviewed related documentation. We determined that the data were sufficiently reliable for the purposes of our objectives. To describe federal agencies’ key AVO internal controls, evaluations of effectiveness, and lessons learned, we developed a questionnaire. The questionnaire is reprinted in appendix II. 
To develop the internal controls section of the questionnaire (question 4), we used relevant federal internal control standards and the internal control weaknesses in the administration of relocation programs identified in the VA Inspector General’s 2015 report on misuse of relocation program funds. In addition, we reviewed other agencies’ inspector general reports on weaknesses in the administration of their relocation programs to identify key internal controls that would be relevant to the AVO process. We created a list of key controls relevant to AVO and asked the agencies to identify which internal controls they were using. We modified the list in response to feedback from pretests of our questionnaire. After we drafted the questionnaire, we conducted pre-tests on the phone with two officials from agencies that had used AVO but did not use GSA’s contract with relocation management companies, as well as an official from GSA who was familiar with how agencies manage their AVO utilizing the contract. We conducted these tests with officials familiar with the AVO process to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the questionnaire was comprehensive and unbiased. We made changes to the content of the questionnaire after the three pre-tests, based on the feedback we received. We distributed the questionnaire we developed via email to the 28 agencies or components of agencies with completed home sale transactions through GSA’s contract in fiscal year 2015 or 2016. We did not include VA when distributing the questionnaire. We selected this set of agencies for distribution of the questionnaire to remain consistent with our reporting of federal agencies’ spending on AVO through GSA’s contract. We emailed the questionnaire to recipients as a Word attachment on January 9, 2017.
We sent reminder emails to and called non-respondents. We also emailed secondary points of contact where available at non-responsive agencies. We closed the questionnaire on March 10, 2017. Twenty-four of 28 agencies completed the questionnaire, 20 of which had an operational AVO, which we interpreted to mean that AVO was being offered at the agency. Thus, we report on the 20 agencies’ responses to the questionnaire. We characterize the responses to the questionnaire as “most” when 12 to 19 agencies responded the same way. All questionnaire data were double key-entered into an electronic file in batches and were 100 percent verified. All data in the electronic file were verified again for completeness and accuracy. To assess the extent to which VA has implemented additional internal controls since 2015 and has evaluated whether the use of AVO has improved the recruitment and retention of employees, we analyzed documents from VA and interviewed VA officials. We assessed VA’s controls and evaluations using federal internal control standards. We reviewed VA human resources and financial policy documents about the administration of AVO with a focus on what changes had been made since fiscal year 2015. We interviewed VA officials who administer AVO about these changes and additional changes that are planned. We also reviewed the 2015 VA Inspector General report on relocation programs and a 2016 review of VA’s Permanent Change of Station program. We interviewed an official at the VA Inspector General’s office and other VA officials about the status of the recommendations. We conducted this performance audit from August 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

1. Appraised Value Offer (AVO) programs purchase a relocating employee’s home based on the appraised value if the employee’s home is not sold within a specified time period determined by an agency.
2. Mission critical skills occupations are one or more of the following: a staffing gap in which an agency has an insufficient number of individuals to complete its work; and/or a competency gap in which an agency has individuals without the appropriate skills, abilities, or behaviors to successfully perform the work.
3. Lessons learned are knowledge gained by both positive and negative experiences that, if applied, could result in a change.

1. Does your agency have an AVO program that is currently operational?
No IF “NO”, PLEASE SKIP TO section 5, question 17
2. Does your agency use the AVO program as a recruitment or retention incentive for mission critical skills occupations, among others?
No IF “NO”, PLEASE SKIP to question 4
3. Please provide two examples of mission critical skills occupations for which your agency has used the AVO program as a recruitment or retention incentive.
4. Does your agency have the following policies for its AVO program? (If need be, please review your agency’s policies.)
5. What process or policy changes, if any, has your agency made to its AVO program in fiscal year 2015 or after? Please describe.
6. Who at your agency approves the decision to offer the AVO program as a recruitment or retention incentive? Please provide a position, not the name of an individual, and the person’s office. For example, Chief of Relocation Incentives, Human Resources.
7. Has your agency examined whether the AVO program improved recruiting or retaining staff at any time during the fiscal years of 2012-2016?
Yes IF “YES”, PLEASE email us any documentation that your agency has on whether the AVO program improved recruiting or retaining staff, if possible
No IF “NO”, PLEASE SKIP to question 9
8. Did your agency find that the AVO program improved recruiting or retaining staff?
9. For what uses has the AVO program been most beneficial (for example, in certain locations or occupations)?
10. Has your agency identified any lessons learned that could be applied to your agency’s AVO program? (Lessons learned are knowledge gained by both positive and negative experiences that, if applied, could result in a change.)
Yes IF “YES”, PLEASE email us any documentation that your agency has on any lessons learned, if possible
No IF “NO”, PLEASE SKIP to question 14
11. Please describe any lessons learned your agency has identified.
12. What actions, if any, is your agency planning to take in response to the lessons learned?
13. What actions, if any, has your agency taken in response to the lessons learned?
14. Has GSA provided your agency with assistance or guidance for your AVO program? (Assistance is customized for your agency’s needs, for example a phone call or an email in response to a question. Guidance is standardized and available to all agencies, for example through websites or conferences.)
15. What additional GSA assistance, if any, would be helpful for your agency to administer its AVO program?
16. What additional GSA guidance, if any, would be helpful for your agency to administer its AVO program?

In addition to the contact named above, Signora May (Assistant Director), Maya Chakko, Jehan Chase, Ellen Grady, Gina Hoover, Jessica Mausner, Cindy Saunders, Robert Robinson, and Erik Shive made key contributions to this report. | Employee relocation is a critical tool to help agencies position skilled employees optimally and for workforce recruitment, retention, and development.
Agencies can facilitate the sale of a relocating employee's home when the relocation of a specific employee to a different location is in the interest of the government. After a 2015 VA Inspector General report found that two VA employees abused AVO to relocate for their personal benefit, VA suspended AVO in October 2015 and reinstated it in fiscal year 2017. GAO was asked to review the administration of AVO at VA and government-wide. This report (1) describes federal agencies' and VA's use of AVO; (2) describes federal agencies' key AVO internal controls, evaluations, and lessons learned; and (3) analyzes the extent to which VA has implemented additional internal controls since 2015 for AVO and has evaluated whether AVO improved recruitment and retention. GAO analyzed agency documents and interviewed VA and GSA officials. GAO also distributed a questionnaire to 28 agencies or their components that had completed home sale transactions through GSA's contract in fiscal years 2015 or 2016. Twenty of these agencies responded that they had an operational AVO and provided information on the types of controls they use and any lessons learned. About 80 percent of federal agencies' home sale transactions to support employee relocations are through the contract that the General Services Administration (GSA) manages with relocation management companies. To support relocations, agencies can use an Appraised Value Offer (AVO). Under an AVO, the relocation management company buys a relocating employee's home for its appraised value if it cannot be sold during a stated period of time. From fiscal years 2012 to 2016, use of AVO varied for federal agencies, including the Department of Veterans Affairs (VA). For example, in fiscal year 2012, the federal agencies that used GSA's contract spent over $66 million on 936 homes and in fiscal year 2016 they spent over $42 million on 601 homes. 
In response to GAO's questionnaire (which was not sent to VA), most of the 20 agencies that were using AVO identified the following two types of critical internal controls as part of their AVO policies. First are transaction control activities, which are actions built directly into operational processes to support the entity in achieving its objectives and addressing related risks. For example, 18 agencies reported that the AVO approval process must be complete before payments are made. Second is assessing and responding to misconduct risks by considering how misuse of authority or position can be used for personal gain. For example, 19 agencies reported that their AVO had safeguards to prevent it from being used for the personal gain of employees. An agency could, for instance, require an independent review of its permanent change of station program. While none of the 20 agencies reported they had evaluated whether AVO improved recruitment and retention of employees, 12 of the 20 agencies provided examples of how AVO had been beneficial. For example, four agencies noted the use of AVO had helped them recruit the most qualified employees or assisted with hard-to-fill positions. GSA officials also identified six good practices, based on lessons learned from their role in managing the relocation contract, that they believe agencies should incorporate into their AVO programs. When GAO compared these good practices to VA's AVO process, it found that VA had adopted all of them. For example, VA offers pre-decision counseling, and VA employees work with the relocation company before their home is put on the market. Since fiscal year 2016, VA has strengthened the administration of AVO by implementing new policies that include internal controls, but it does not track data on whether AVO improves recruitment and retention.
For example, VA revised its policies to require approval prior to initiating recruitment efforts and to prohibit a relocating employee's participation from being approved by the employee's subordinates. VA officials stated AVO is beneficial for hard-to-fill Senior Executive Service positions and for mission-critical skills occupations; however, VA does not track data to determine whether AVO improves the recruitment and retention of employees. VA officials stated the agency does not have the resources or capabilities to track such data. These data could be useful in identifying trends and options for targeting certain occupations or skill sets that may improve the agency's use of home sales to support relocation. Without tracking these data, VA will be unable to determine whether AVO has improved recruitment and retention. GAO recommends that VA track data to determine whether AVO improves recruitment and retention. VA concurred with the recommendation. |
According to FAA officials, FAA’s medical certification requirement was established to prevent or mitigate the effect of various medical conditions that present an undue risk to the safety of pilots, passengers, or others. While most general aviation accidents are attributed to pilot error involving a loss of aircraft control, according to information provided by NTSB, medical causes were a factor in approximately 2.5 percent of the accidents from 2008 through 2012. By ensuring that applicants meet medical standards, FAA aims to reduce the likelihood of incapacitation of a pilot due to a medical cause. Federal regulations establish three classes of medical certification that correspond to the types of operations that pilots perform. Airline transport pilots who serve as pilots in command of scheduled air-carrier operations must hold first-class medical certificates. Pilots who fly for compensation or hire generally hold second-class medical certificates. Private pilots hold third-class medical certificates. (See table 1.) Depending on their age and the class of medical certificate sought, pilots must renew their medical certificate periodically, from every 6 months to every 5 years (e.g., commercial pilots—generally those needing a first- or second-class medical certificate—must renew more frequently than private pilots). After obtaining a medical certificate, and between renewal periods, pilots are prohibited from performing pilot operations when they know or have reason to know of a medical deficiency that would make them unable to safely perform those operations. In the fiscal year 2014 budget submission, FAA estimated that its Office of Aerospace Medicine would need about $56.1 million in funding—about 4.7 percent of the total Aviation Safety budget—to carry out its mission.
To assist in the nearly 400,000 medical evaluations of pilots and new applicants each year, FAA designates medical certification authority to approximately 3,300 private physicians, or Aviation Medical Examiners (AMEs). The AMEs review applicants’ medical histories and perform physical examinations to ensure that applicants meet FAA’s medical standards and are medically fit to operate an aircraft at the time of their medical exam. Although AMEs are not FAA employees, they are trained in aviation medicine by the FAA and entrusted to make medical eligibility determinations for the majority of applicants, on behalf of the FAA. In order to become an AME and be authorized to administer medical exams, FAA requires AMEs to complete online courses in clinical aerospace physiology and medical certification standards and procedures before attending a one-week basic AME seminar. AMEs must also complete at least 10 pilot medical exams each year and a refresher course every 3 years. All applicants for medical certificates and renewals follow a similar process. Applicants begin the medical certification process by completing Form 8500-8, Application for Airman Medical Certificate or Airman Medical & Student Pilot Certificate (medical application form), in MedXPress (online application system). For applicants with disqualifying medical conditions or for those who do not meet FAA’s medical standards, the AME must defer the applicant to FAA to authorize a special issuance. The special issuance process may require additional medical information and evaluations from, for example, a primary care physician or medical specialist. Also, a special issuance may be subject to operational limitations for safety reasons, or may be valid for a shorter time period than an unrestricted medical certificate.
As a provision of the special issuance, FAA may authorize AMEs to make future medical determinations of the applicant—separate from the centralized special issuance process—under the AME Assisted Special Issuance (AASI) process. Alternatively, if FAA determines that an applicant’s medical condition is static and non-progressive and has found the applicant capable of performing pilot duties without endangering public safety, the FAA may grant a Statement of Demonstrated Ability (SODA) to the applicant, which does not expire and authorizes AMEs to make future medical determinations of the applicant, without requiring the applicant to go through the special issuance review process. According to FAA officials, pilot medical standards were developed to help manage safety risk. FAA’s current medical standards have been codified in federal regulation since March 19, 1996. The regulations set out 15 medical conditions that are specifically disqualifying. Medical conditions identified during an evaluation that are not specifically listed as disqualifying but do not meet the general medical standard regarding safe performance of duties and exercise of privileges are also disqualifying under general medical standards, according to FAA. (See app. II for a summary of selected FAA medical standards.) According to FAA officials, the standards and the medical certification process were developed to manage the risk of an aircraft accident or incident by identifying applicants with medical conditions that could potentially incapacitate them in the flight environment or during critical take-off and landing periods. FAA takes steps designed to ensure that its medical policies and procedures are consistent with current medical and aeromedical practice, and that these steps result in periodic updates to its medical policies.
The Federal Air Surgeon establishes medical policies and medical certification procedures that are published in internal guidance for FAA’s Office of Aerospace Medicine and for AMEs in the Guide for Aviation Medical Examiners (AME Guide). The agency uses several techniques to update policies. First, the Aeromedical Standards and Policies Branch develops policy recommendations for the Federal Air Surgeon, which address medical conditions, medication use, and medical procedures. According to FAA officials, medical policy review is a continuous process influenced by several factors, which include (1) announcements of significant new developments in the medical literature; (2) medical appeals to the Federal Air Surgeon; (3) announcements and alerts by the Food and Drug Administration; (4) inquiries by aviation stakeholder groups and pilot advocacy groups; (5) aircraft accidents or events; (6) inquiries by Office of Aerospace Medicine personnel and AMEs; and (7) communications with international aviation authorities and medical advocacy groups, among other things. Second, according to FAA officials, the agency refers dozens of individual cases annually for independent review by experts in a wide variety of medical specialties, such as cardiology, psychology, and neuropsychology. FAA officials stated that implicit in the process of reviewing each case is consideration of changes to current policy based on current medical practice. FAA also periodically uses independent medical experts to evaluate its medical policies, particularly with regard to cardiovascular conditions, which were present in more than one-third of the applicants who received special issuances in 2012. In January 2013, for example, FAA hosted a cardiology roundtable to review FAA’s policies with regard to cardiovascular conditions and to suggest updates to the policies, if necessary. The roundtable’s suggested policy changes were presented to the Federal Air Surgeon, who approved several of them.
However, FAA officials have said that they do not convene such roundtables frequently due to time and cost constraints. Third, the results of CAMI’s aerospace medical and human factors research have been used to inform changes to FAA guidance and policies. In particular, CAMI’s aerospace medical research focuses on the biomedical aspects of flight, including studies on aviation safety associated with biomedical, pharmacological, and toxicological issues. For example, CAMI’s research on sedating medication influenced guidance in this area. According to FAA officials, a review of accident investigation data showed that many pilots involved in accidents were using over-the-counter and prescription sedative medications. As a result, FAA, in coordination with the aviation industry, issued guidance extending the length of time a pilot should wait after using these medications and before operating an aircraft. A letter jointly signed by the FAA and all major aviation advocacy groups was sent to all pilots and published on the FAA website and in various public and private publications advising pilots to comply with the new guidance. Fourth, CAMI’s library allows research staff to collect and review academic journals on aviation medical issues, general medical research, engineering, management, and other general topics. CAMI researchers have also published approximately 1,200 aerospace medicine technical reports on topics including, for example, pilot age, alcohol and substance abuse, fatigue, psychology, and vision (available at http://www.faa.gov/data_research/research/med_humanfacs/oamtechreports/). FAA’s policy branch periodically reviews this and other medical literature, which FAA officials say can also result in a possible policy revision. In addition, FAA has recently begun analyzing aviation accident information to develop a predictive model based on historic data of medical conditions that have been identified as contributing factors to aircraft accidents.
The officials stated that they plan to use the model as a data-driven guide to help inform how they determine the relative risk of various medical conditions. FAA officials noted that the agency has begun this work as part of a broader Safety Management Systems (SMS) initiative that seeks to further enhance safety by shifting to a data-driven, risk-based oversight approach. All aerospace medical experts we interviewed generally agreed that FAA’s medical standards were appropriate, and most (16 of 20) said that the standards should be applied to both commercial and private pilots. Some of these experts said that standards should apply equally to private pilots because they share airspace with commercial pilots or because private pilots typically do not fly with a copilot—an important safety feature for commercial flight operations. In addition, although some of the experts (7 of 20) suggested no changes to FAA’s policies, many of the experts (13 of 20) identified at least one medical standard for which they considered FAA’s policies to be either too restrictive or too permissive. A restrictive policy might lead FAA to deny certification to an applicant who may be sufficiently healthy to safely fly a plane, or may result in FAA requiring a more thorough medical evaluation than the experts considered necessary. A permissive policy, on the other hand, might lead FAA to certify an applicant with health issues that could impair his or her ability to safely fly a plane, or may result in FAA not completing as thorough a medical evaluation as the experts considered necessary.
Although expert opinions varied regarding which standards were too permissive or restrictive, neurological issues were most commonly discussed by some (9 of 20) of the experts. For example, some experts noted that the FAA medical certification requirements for applicants who use antidepressants, including selective serotonin reuptake inhibitors (SSRI), are restrictive and onerous and may require an applicant not to fly for an extended period of time. A medical representative from the Aircraft Owners and Pilots Association (AOPA) said that FAA’s policies may require a pilot using antidepressants to undergo costly cognitive studies that were viewed as medically unnecessary for milder cases of depression. Alternately, some medical experts said that policies regarding cognitive functioning in aging pilots, traumatic head or brain injuries, and attention deficit disorders may be too permissive. An FAA official stated that the area of neurology is complex and has been somewhat difficult for AMCD due, in part, to variation in opinion as to how to assess cognitive function and when testing should be done. The agency hosted a neurology summit in 2010 that convened neurology experts to review FAA policies on neurological issues—including traumatic brain injury, migraine headaches, and neurocognitive testing—and resulted in recommendations that the Federal Air Surgeon adopted regarding migraine treatments, among other neurological conditions. Also, the Division Manager of AMCD said that they consult with neurologists, as needed, to review the application of certification policies regarding individual applicant cases. To a lesser extent, some (5 of 20) experts had mixed views on the policies for diabetes and medical conditions related to endocrine function. Of those, three experts thought that FAA’s current policies on diabetes might be too restrictive, for example, because the FAA has not kept pace with medical advances and treatment options currently available to pilots.
One expert noted that some commercial pilots with insulin-treated diabetes mellitus (ITDM) may be medically fit to fly a plane with a special issuance if they can demonstrate that their condition is stable, just as private pilots are allowed to do. In addition, representatives from the American Diabetes Association and a member of the Regional Airline Association stated that FAA’s policies for commercial pilots with ITDM have not kept current, when considering the advancements in medical treatment of ITDM and the redundancy of having a copilot and crew in commercial aircraft to reduce the risk associated with commercial pilots with ITDM. Conversely, two experts thought that FAA may be too permissive with regard to diabetes, citing, for example, concerns about the increase in diabetes among Americans, in general, and the potential for undiagnosed cases. FAA officials agreed that there have been improvements in the clinical care for diabetes, and the Office of Aerospace Medicine has studied the safety and efficacy of new diabetes treatments over the past several years, including the risks associated with new medications and insulin formulations. However, according to FAA officials, independent consultants—including endocrinologists and diabetes experts—have told the FAA that the risk of incapacitation related to hypoglycemia has not changed regardless of advancements in treatment. All of the experts suggested ways FAA could ensure its medical standards are current, many of which were consistent with approaches FAA is already taking. For example, some of the experts (9 of 20) said FAA could review its medical standards at regular time intervals or as medical advances occur, and some (8 of 20) of the experts said FAA could review its medical standards based on evidence of the likelihood of each condition causing an accident. Some experts (5 of 20) specifically suggested FAA should convene a panel on neurology and mental health issues.
FAA convened a panel on neurological issues in 2010. As previously mentioned, FAA is currently undertaking an agency-wide initiative—SMS—that seeks to further enhance safety by shifting to a data-driven, risk-based safety oversight approach. As part of this approach, FAA implemented the Conditions an AME Can Issue, or CACI, program in April 2013. The CACI program authorizes AMEs to issue medical certificates to applicants with relatively low-risk medical conditions that had previously required a special issuance from the FAA. FAA developed the program by identifying medical conditions that, in most cases, did not pose a safety risk, based on FAA analysis of historic medical and accident data. Agency officials expect the program to allow more applicants to be certified at the time of their AME visit while freeing resources at FAA to focus on medically complex applicants with multiple conditions or medical conditions that may pose a greater risk to flight safety, such as applicants who have had coronary artery bypass surgery. Based on information provided by FAA, as of December 31, 2011, approximately 19 percent of all pilots reported medical conditions that may now be evaluated by their AME as a result of the CACI program. Of those pilots, about one-third—or nearly 39,000 pilots—reported no additional medical conditions, making it more likely that in the future, they may be certified at the time of their AME visit, rather than through the special issuance process. Other medical conditions have been proposed for the CACI program but have not yet been approved by FAA officials. Most medical experts (18 of 20) we interviewed approved of the CACI program, and some (8 of 20) believed that FAA should continue to expand it to include additional medical conditions.
Representatives of an industry association agreed and noted that by authorizing AMEs to make a greater number of medical certification decisions, AMCD officials could speed up the application process for more applicants. Medical conditions that were proposed but not yet approved for CACI include, for example, carotid stenosis, bladder cancer, leukemia, and lymphoma. Some experts also identified medical conditions requiring FAA authorization for a special issuance that they believe should be considered under the CACI program. Their suggestions included, for example, non-insulin-treated diabetes, which was a factor in about 17 percent of the special issuances in 2012; sleep apnea and other sleep disorders, which were a factor in about 11 percent of the special issuances in 2012; and various forms of cancer, which were a factor in about 10 percent of special issuances in 2012. FAA officials have begun to allow AMEs to make medical determinations for applicants with certain types of cancer under the CACI program and have said that they will evaluate other medical conditions to include in the CACI program in the future. Although neurological conditions (including migraines, head trauma, stroke, and seizures) accounted for approximately 4 percent of special issuances in 2012, some experts (5 of 20) thought, as mentioned above, that FAA should convene an expert panel to re-evaluate its policies in this area. Half of the experts we interviewed also said that FAA could evaluate its medical standards based on the relative risk of incapacitation associated with various medical conditions, assessed through greater use of data. That is, with a better understanding of the likelihood of each medical condition to cause a suddenly incapacitating event in flight—based on historic data of accidents and incidents—FAA could modify its risk threshold for various medical standards and policies to manage risk.
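The relative-risk idea the experts describe (comparing each condition's accident rate against the overall accident rate) can be sketched as follows. All condition labels, counts, and the `relative_risk` helper are hypothetical illustrations, not FAA data or FAA's actual model:

```python
from collections import Counter

def relative_risk(accidents, pilots_with_condition, total_pilots):
    """Per-condition accident risk relative to the overall accident rate.

    accidents:             list of sets of condition labels, one per accident
    pilots_with_condition: condition -> number of certified pilots with it
    total_pilots:          size of the whole certified-pilot population
    All inputs here are illustrative; a real model would draw on linked
    NTSB accident records and FAA medical files.
    """
    baseline = len(accidents) / total_pilots  # overall accident rate
    counts = Counter()
    for conditions in accidents:              # tally accidents per condition
        counts.update(conditions)
    # A ratio above 1.0 means the condition's accident rate exceeds the baseline.
    return {
        cond: (counts[cond] / exposed) / baseline if exposed else 0.0
        for cond, exposed in pilots_with_condition.items()
    }

# Hypothetical numbers: 3 accidents in a population of 1,000 pilots.
accidents = [{"cardiac"}, {"cardiac", "diabetes"}, {"neurological"}]
population = {"cardiac": 100, "diabetes": 300, "neurological": 200}
print(relative_risk(accidents, population, total_pilots=1000))
```

An agency could then set a risk threshold on these ratios to decide which conditions an examiner may clear directly and which require centralized review.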
As previously mentioned, FAA has begun to collect and analyze data that will help it develop a proactive approach to managing aviation medical risk; however, FAA officials told us that data from historic accidents and incidents can be difficult to obtain and link to medical causes. The officials also said that they would need to change how they code, or classify, the medical information they collect—and re-code medical information they already have—to more accurately classify medical conditions of applicants and, therefore, improve the reliability of their predictive model. Without more granular data collection on health conditions, officials said it is difficult for FAA to accurately determine the level of risk associated with various medical conditions. In addition, officials at FAA and NTSB noted that data on medical causes of accidents and incidents are likely to be incomplete because not all accidents are investigated in the same way and medical causation can be difficult to prove in light of other contributing factors. For example, an official from NTSB explained that there are different levels of medical investigations performed after accidents, depending on factors like whether or not the pilot has survived, the condition of the aircraft or severity of the crash, and the number of people impacted. As of February 14, 2013, NTSB and FAA agreed to a memorandum of understanding (MOU) that will facilitate NTSB’s data sharing and record matching for aircraft accidents and incidents with CAMI. Although most medical certification determinations are made by one of the approximately 3,300 FAA-designated AMEs at the time of an applicant’s medical exam, approximately 10 percent of applications—or nearly 40,000 annually—are deferred to FAA for further medical evaluation if the applicant does not meet FAA’s medical standards or has a disqualifying medical condition.
According to FAA officials, the 10 percent of applicants who are deferred require a significant amount of resources from FAA’s medical certification division, which, in recent years, has experienced a backlog of special issuance applications in need of review. As of February 2014, an FAA official estimated this backlog at about 17,500 applications. FAA has not met its internal goals for responding to individuals whose applications have been deferred. Specifically, FAA has set an internal goal of 30 working days to make a medical determination or to respond to an applicant with a request for further information. However, according to FAA data, the average time it takes FAA officials to make a medical determination or request further information from an applicant has increased over the past 6 fiscal years, taking an average of approximately 45 working days—or about 9 weeks—in fiscal year 2013, and more than 62 working days in December 2013. If FAA makes multiple requests for further information from an applicant, the special issuance process can take several months or longer. Officials from AOPA stated that some applicants for private pilot medical certificates discontinue the application process after an initial denial from the FAA because the applicants decide that the cost of extra medical evaluations and added time is too great to support what the applicant views as a recreational activity. However, an official from FAA noted that delays can also occur as a result of applicants who may take a long time to respond to an FAA request for further evaluation. According to AOPA, having information upfront would speed up the process by helping applicants understand FAA’s additional medical requirements for a special issuance. FAA has increasingly encouraged its Regional Flight Surgeons to become more actively involved in making medical determinations for applicants seeking a special issuance.
FAA officials at AMCD stated that there are several reasons for the increased processing time for applicants requiring special issuances. For example, AMCD has faced a technical issue deploying the Document Imaging Workflow System (DIWS), a web-based computer system used by AMCD to process, prioritize, and track all medical certification applications. One AMCD official noted that delays in deployment of the system have decreased productivity of the AMCD to as low as just 25 percent of normal levels. In addition, officials cited multiple backlogs throughout the division, such as the electrocardiogram (ECG) unit, which receives up to 400 ECGs each day, and the pathology-coding unit, which may require manual coding of medical conditions to feed information into DIWS. Part of the challenge, identified in FAA’s National Airspace Capital Investment Plan, is that the current medical certification systems are based on obsolete technology from the 1990s. Accordingly, technical working groups at AMCD have identified more than 50 problems and potential technological solutions to enhance their systems, including the special issuance processes, of which about 20 have been identified as high-priority, including improvements to the online application system, AMCS, DIWS, and the ECG transmittal and review process. For example, officials stated that updating DIWS to import and read electronic files would reduce the need to manually scan from paper documents, and providing AMEs or applicants limited access to DIWS so they can check the status of an application could reduce the number of calls AMCD receives at its call center. As of February 2014, FAA officials stated they received funding they requested in June 2013 to upgrade the ECG system from analog to digital—a process which they estimate will take about 11 months to complete.
In addition, FAA has not established a timeline for implementing its broader set of technology enhancements, some of which may be less contingent on resource constraints. A timeline to guide the implementation of the highest-priority enhancements would help the agency take another step toward reducing the delays and bottlenecks in the special issuance process related to FAA’s technology issues. In addition to the proposed enhancements, the Office of Aerospace Medicine collaborated with the Volpe National Transportation Systems Center (Volpe Center), in 2013, to define broader challenges of the current medical certification process and develop a strategy to reengineer the existing business processes, including the online medical-certification system and its supporting information-technology infrastructure. Officials from the Office of Aerospace Medicine have said that their effort with the Volpe Center will ultimately inform their plan to replace FAA’s current medical information systems with the Aerospace Medicine Safety Information System (AMSIS), which the agency plans to begin developing in fiscal year 2015. FAA officials stated that they envision several long-term positive changes that may result from AMSIS—including redesigning the online application system and form, providing applicants with information on actions to complete before they meet with their AME, and a more transparent special issuance process with the capacity for applicants to check the status of their applications. However, FAA officials have also identified several challenges to implementing AMSIS, including working within the confines of legal and regulatory requirements, protecting sensitive information, and obtaining the estimated $50 million needed to fund the system. One of FAA’s main tools to communicate its medical standards directly to applicants, and to solicit medical information from them, is its online medical application system.
While FAA also offers training and produces pamphlets, videos, and other educational material for AMEs and pilots, the online medical application system is used by all applicants to apply for a medical certificate. (See app. III for FAA’s training programs and other communication tools for AMEs and pilots.) The system includes information such as the online medical-application form and instructions used by applicants to submit medical information to their AME and to FAA, and a link to the AME Guide, which contains pertinent information and guidance regarding regulations, examination procedures, and protocols needed to perform the duties and responsibilities of an AME. We compared the online application system with select guidelines related to content, navigation, and design that are considered good practices by Usability.gov. Based on our evaluation and discussion with experts, we identified areas in which FAA might enhance the usability of the online application system by (1) providing useful information directly to applicants, and (2) using links to improve how applicants navigate through the application system. Providing Additional Useful Information Directly to Applicants: According to Usability.gov, a good practice in website design includes providing useful and relevant information that is easy to access and use. Some experts (7 of 20), including four who were also AMEs, said that applicants may be unsure about medical requirements and documentation. Representatives of two aviation medical associations also said a lack of clarity can lead to delays in processing the medical certification if applicants learn during their medical examination that they must obtain additional test results or documentation from their primary care physician. Some medical experts (4 of 20) said that technological improvements would be helpful. For example, FAA could develop a Web page on its website or within the online application system with more information for applicants.
In addition, two pilot associations stated that a specific Web page or website for applicants with links to information on various medical conditions, their risks to flight safety, and additional medical evaluations that might be needed for applicants with those conditions would be helpful. The online application system currently contains a link to the AME Guide; however, applicants may find the 334-page AME Guide—written for aviation medical examiners—difficult to navigate and understand and, therefore, may be unable to find information about specific documentation and additional medical evaluations they may need. FAA officials in the medical certification division said that providing documentation requirements to applicants could reduce certification delays, AME errors, and the number of phone calls to AMCD’s medical certification call center because the applicants would know what additional evaluations or documents they should get from their primary care physician before they visit their AME for a medical exam. Similarly, the FAA officials noted that applicants may not recall information they had previously reported in prior medical certificate evaluations or may not disclose their complete medical history when they see a new AME. NTSB officials stated that the AME cannot see information about any previous applications and knows only what the pilot has reported on the current application. This means that the applicant has to recall his or her complete medical history each time he or she applies for a medical certificate. Additionally, according to the NTSB officials, it would be useful for the pilot to access previously reported information and update only what has changed since the previous exam.
As part of the more than 50 technological solutions discussed earlier that FAA has identified to enhance the special issuance process, the agency has proposed providing access to worksheets that specify required medical documentation and providing access to previously reported medical data to applicants and AMEs. FAA officials stated that these issues, if addressed, would facilitate information flow between the applicant, the AME, and FAA and allow AMCD officials to do their work more efficiently. Additionally, some experts (9 of 20) said that it would be helpful to applicants and treating physicians if FAA posted a list of banned medications. In the view of a couple of experts, without a public list of banned medications, applicants may not disclose their medical treatment regimen to FAA out of fear of losing or not receiving their certification. NTSB recommended in 2000 that DOT develop a list of approved medications and/or classes of medications that may be safely used when operating a vehicle; however, DOT—including FAA—did not implement the recommendation because, in DOT's view, a list of approved medications would be difficult to maintain and would be a liability for the transportation industry if the Department approved a medication that later caused an accident. Officials from AOPA told us that the association provides an unofficial list of approved and banned medications to its members but believes that this information should be made public and provided by FAA. However, FAA states in its AME Guide that maintaining a published list of approved medications would not contribute to aviation safety because it does not address the underlying medical condition being treated.
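Were FAA or an association to publish such a list in machine-readable form, cross-checking an applicant's disclosed medications against it would be mechanically simple. The sketch below is purely illustrative: the list contents and function name are hypothetical, not FAA data or code.

```python
# Hypothetical sketch: cross-check disclosed medications against a
# published banned-medication list. The entries below are illustrative
# examples only; they are not FAA's actual determinations.
DO_NOT_ISSUE = {"methadone"}                   # certificate requires FAA clearance
DO_NOT_FLY = {"zolpidem", "diphenhydramine"}   # no flying for a period after use

def flag_medications(disclosed):
    """Return any disclosed medications that appear on either list."""
    meds = {m.strip().lower() for m in disclosed}
    return {
        "do_not_issue": sorted(meds & DO_NOT_ISSUE),
        "do_not_fly": sorted(meds & DO_NOT_FLY),
    }
```

An AME's screen could then surface these flags automatically during the exam, rather than relying on the examiner to recall the list.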
Instead, FAA’s current policy prohibits AMEs from issuing medical certificates to applicants using medications that have not been on the market for at least one year after approval by the Food and Drug Administration (FDA), and FAA has recently updated its AME Guide, to include a “Do Not Issue—Do Not Fly” list of several general classes of medication and some specific The “Do Not Issue” list pharmaceuticals and therapeutic medications. names medications that are banned—meaning the AME should not issue a medical certificate without clearance from FAA—and the “Do Not Fly” list names medications that the pilot should not use for a specified period of time before or during flight, including sleep aids and some allergy medications. FAA officials said that the “Do Not Issue—Do Not Fly” list is intended to be a “living document” that they will revisit periodically. NTSB officials suggested that it would be helpful if medications that an applicant discloses on the medical application form could be automatically checked against the “Do Not Issue—Do Not Fly” list to notify their AME of the applicant’s use of a medication on the list. http://www.faa.gov/about/office_org/headquarters_offices/avs/offices/aam/ame/guide/ph arm/dni_dnf/. Easier Website Navigation: Navigation is the means by which users get from page to page on a website to find and access information effectively and efficiently. According to Usability.gov, a good practice related to navigability is to use a clickable list of contents, thus minimizing scrolling. 
The Pilot's Bill of Rights Notification and Terms of Service Agreement—which contains a statement advising the applicant that responses may be used as evidence against the applicant, a liability disclaimer, a statement of privacy, and a Paperwork Reduction Act statement, among other statements—requires the user to scroll through what equates to nearly 10 pages of text (2,441 words over 417 lines of text), viewable through a small window that shows approximately 10 to 12 words across and four lines down at a time (see fig. 2). FAA might enhance the visibility of this information and help applicants better understand what they are agreeing to if it created a larger window with hyperlinks to help the reader navigate through various sections of the notification and agreement. Similarly, the question and answer page for applicants could be enhanced by including clickable links between the questions and answers to allow readers to more easily find answers of interest to them. Another good practice, according to Usability.gov, is to design websites for popular operating systems and common browsers while also accounting for differences. According to a notification on the online application system's log-in screen, applicants are advised to use only Internet Explorer to access the system. The system functions inconsistently across other browsers such as Google Chrome, Mozilla Firefox, and Apple Safari. For example, links from the medical application form to its instructions do not work in Firefox or Google Chrome; instead, they lead the applicant back to the log-in page, causing any unsaved information to be lost. As described in the previous section, FAA officials at the medical certification division identified technological problems and potential solutions to enhance the online application system, but as of April 2014, no changes have been made.
For example, the officials observed that some applicants enter the date in the wrong format, switching the order of day and month (DD/MM/YYYY, as opposed to MM/DD/YYYY), which can lead to problems when the AME imports the application. As a result, FAA officials proposed using drop-down boxes—with the name or abbreviation of each month, followed by the day, and the year—to collect date information. This proposed solution is consistent with a good practice highlighted by Usability.gov—anticipating typical user errors. Additionally, the officials noted that it is not uncommon for an applicant to be logged out of their session due to inactivity, resulting in a loss of data entered during the session. To address this, FAA proposed that the online application system incorporate an auto-save feature that would be activated prior to the session expiring—consistent with Usability.gov guidelines, which call for warning users that a Web session may be expiring after inactivity—to prevent users from losing information they entered into the online application system. In addition to these enhancements, FAA collects some information from applicants and AMEs regarding their experience with the application process. For example, FAA operates a 24-hour call center to answer technical questions that applicants and AMEs may have about the online application system throughout the application process. FAA also has surveyed AMEs and pilots to collect information about their experience with the medical certification process. The Plain Writing Act of 2010 requires federal agencies, including FAA, to write specified types of new or substantially revised publications, forms, and publicly distributed documents in a "clear, concise, well-organized" manner. Several years before the Plain Writing Act of 2010—in 2003—FAA issued Writing Standards to improve the clarity of its communication.
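The drop-down date entry FAA proposed, described above, removes the DD/MM vs. MM/DD ambiguity at the source, because the month is selected by name rather than typed as a number. A minimal sketch with hypothetical function names, not FAA's implementation:

```python
# Sketch of month-name drop-down date entry. Collecting the month by
# name (as FAA proposed) makes the order of the fields unambiguous.
import calendar

MONTHS = list(calendar.month_name)[1:]  # ["January", ..., "December"]

def to_iso_date(month_name, day, year):
    """Convert a drop-down selection to an unambiguous ISO-8601 date string."""
    month = MONTHS.index(month_name) + 1
    return f"{year:04d}-{month:02d}-{day:02d}"

def is_ambiguous(first, second):
    """A purely numeric date can be misread when both leading fields
    could plausibly be a month (1-12) and they differ."""
    return first <= 12 and second <= 12 and first != second
```

A form that must keep free-text entry could instead use a check like `is_ambiguous` to prompt the applicant for confirmation before the AME imports the application.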
The Writing Standards include guidance for anyone who writes or reviews FAA documents intended for internal or external distribution. FAA has continued to make efforts in recent years to improve its employees' understanding of plain language and how to incorporate it in written documents. FAA's Plain Language Program in the Office of Communications trains employees and supports Plainlanguage.gov, a website devoted to improving communication from the Federal government to the public. Although plain writing is only required for new or substantially changed government documents, and is therefore not required for the current medical application form, the goal of plain writing is to help readers find the information they need, understand what they find, and use it to meet their needs. In regard to the medical certification process, this would include helping applicants understand each question and more accurately complete the application form in the way that FAA intended. In addition, stakeholders from two pilot associations were concerned that unclear questions on the medical application form could lead to incomplete or inaccurate responses, which they said could also lead to applicants' being accused of misrepresenting themselves or falsifying information on the application form—an offense punishable by a fine of up to $250,000 and imprisonment of up to 5 years that may also result in the suspension or revocation of all pilot and medical certificates. More specifically, FAA's Writing Standards also recommend using active voice and active verbs to emphasize the doer of the action. Our analysis of FAA's medical application form and instructions showed that in some cases, FAA used passive voice although active voice would make the statements clearer. According to FAA's Writing Standards, because the active voice emphasizes the doer of an action, it is usually briefer, clearer, and more emphatic than the passive voice.
For example, on the medical application form the current statement, "Intentional falsification may result in federal criminal prosecution," may be clearer to the applicant if stated, "If you intentionally falsify your responses, you may be prosecuted for a federal crime," or a similar, more direct way of notifying the applicant. However, FAA officials noted that any re-wording of legal warnings or disclaimers must be approved by legal counsel. We also asked the medical experts to review the online application form. In response, many medical experts (12 of 20) we interviewed stated that certain questions can be confusing or too broad. For example, some experts said that terms like "frequent," "abnormal," or "medication" are not clearly defined and, therefore, certain questions could generate inaccurate responses. In particular, many experts (15 of 20) said that question 17a, on medication use, was unclear because, among other reasons, the reader may not know whether supplements or herbal medicines should be included. Some medical experts (7 of 20) also suggested adding items to question 18, about medical history, for areas such as cancer and sleep apnea. In 2009, NTSB recommended that FAA modify its medical application form to elicit specific information about risk factors or any previous diagnosis of obstructive sleep apnea. (See app. IV for a copy of the medical application form.) Many of the medical experts we consulted (13 of 20) further suggested simplifying the question on the form that pertains to an applicant's arrests or convictions; this question has also been examined by FAA officials. FAA's writing guidance suggests shortening sentence length to present information clearly or using bullets or active voice.
In addition, FAA officials from the medical certification division used a computer program to analyze the readability of the question and discovered that an applicant would need more than 20 years of education to understand it. According to FAA officials, the agency can make changes to the medical application form for various reasons—for example, in response to findings or a recommendation made in a report by NTSB or by the Department of Transportation Inspector General, or because of a change in medical practices resulting from advancements in medicine. Since 1990, FAA has revised the application form several times to add or remove questions, change time frames related to the questions, or clarify the questions, among other types of changes. When FAA announced in the Federal Register that it would replace its paper application form with an online application system, the agency said that the online application system would allow it to make and implement any needed or mandated changes to the application form in a timelier manner, resulting in a more dynamic form. However, agency officials noted that while they maintain a list of questions on the application form that pose problems for applicants, they do not make frequent changes, in part, because of the time and resources needed to complete lengthy public comment and Office of Management and Budget (OMB) approval processes which, they say, can take up to two years. FAA officials also said that the Office of Aerospace Medicine must balance "plain language" with the requirements levied by FAA's General Counsel to make sure that the wording is legally correct and enforceable. While it will take time and resources to improve the clarity of FAA's medical application form, if left unchanged, the accuracy and completeness of the medical information provided by applicants may not be improved.
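FAA officials did not specify which readability program they used; a common basis for such scores is the Flesch-Kincaid grade-level formula, sketched below. The syllable counter is a crude heuristic, so scores are approximate.

```python
import re

def count_syllables(word):
    """Crude heuristic: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Long sentences packed with multisyllabic legal terms drive the grade level up, which is why FAA's writing guidance points toward shorter sentences, bullets, and active voice.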
Aerospace medical experts we interviewed generally agreed that FAA’s current medical standards are appropriate and supported FAA’s recent effort to authorize its AMEs to certify a greater number of applicants by using a data-driven approach to assessing risk through the CACI program. Expanding the CACI program, as some experts suggested, could reduce the time it takes for applicants with lower risk conditions to become medically certified and, more importantly, allow FAA to prioritize the use of its constrained resources for medical determinations for applicants with the highest-risk medical conditions. FAA has identified approximately 50 potential technological enhancements to its computer systems that support its certification process, including adding new functionality to facilitate the process and provide applicants with more information about medical requirements. According to FAA officials, these enhancements would potentially reduce the workload at the medical certification division. Although FAA intends to eventually replace its current medical-certification computer systems with a new Aerospace Medicine Safety Information System (AMSIS), temporary enhancements are expected to help FAA reduce the delays and bottlenecks currently posing challenges to the agency. FAA has not established a timeline for implementing its broader set of 50 proposed technological enhancements, some of which may be less expensive than others. A timeline to guide the implementation of the highest-priority enhancements would help the agency take another step toward reducing the delays and bottlenecks related to FAA’s technology limitations. The online-application system and form that FAA uses to communicate directly to applicants contain confusing questions and instructions that do not meet FAA’s own plain language guidance. In addition, broken links and other navigability issues make the website difficult to follow. 
Efforts to provide applicants with useful and relevant information and improve the clarity of the questions and instructions contained in the online application system and form could allow FAA to more clearly communicate medical requirements to applicants. These improvements could not only aid an applicant’s understanding of the medical standards and requirements, but also may result in more accurate and complete information provided by applicants to better inform FAA’s certification decisions. To improve the applicants’ understanding of the medical standards and the information required to complete FAA’s medical certification process, the Secretary of Transportation should direct the Administrator of FAA to 1. develop a timeline for implementing the highest-priority technological improvements to the internal-computer systems that support the medical-certification process, and 2. enhance the online medical-application system by clarifying instructions and questions on the medical application form and providing useful information to applicants. We provided the Department of Transportation with a draft of this report for review and comment. DOT provided technical comments, which we incorporated into the report as appropriate, and DOT agreed to consider the recommendations. We are sending copies of this report to the Department of Transportation, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix V. 
The objectives of this report are to provide information on (1) FAA's medical standards and policies and certification processes, along with medical experts' views on them, and (2) steps that could be taken to promote private pilot applicants' understanding of FAA's medical requirements, including potential revisions to the medical application form. To meet these objectives, we reviewed pertinent statutes, regulations, and FAA documents regarding its pilot medical certification process, standards, and application form. We also reviewed academic, trade and industry articles, government reports, and other relevant literature. We interviewed officials from FAA and the National Transportation Safety Board (NTSB), and other stakeholders in the pilot medical certification process, including officials representing government advocacy, medical and legal issues within the Aircraft Owners and Pilots Association (AOPA) and the Experimental Aircraft Association (EAA), the Aeromedical Advisor to the Air Line Pilots Association (ALPA), attorneys who assist pilots through the medical certification process, and representatives from the American Diabetes Association. We also obtained written responses from the President and representatives from three member airlines of the Regional Airline Association, the Executive Director of the Aerospace Medical Association (AsMA), and the President and physician members of the Civil Aviation Medical Association (CAMA). We also visited the Civil Aerospace Medical Institute (CAMI) in Oklahoma City to interview representatives of FAA's Aerospace Medical Certification Division (AMCD), and we attended a training seminar for Aviation Medical Examiners (AME). To obtain expert opinions on FAA's medical standards, we collaborated with the National Academies' Institute of Medicine to identify aviation medical experts.
We provided the Institute of Medicine with criteria and considerations for identifying experts, including (1) type and depth of experience, including recognition in the aerospace medicine professional community and relevance of any published work, (2) employment history and professional affiliations, including any potential conflicts of interest, and (3) other relevant experts' recommendations. We also contacted the American College of Cardiology and the American Academy of Neurology to solicit their views, but they did not respond to our requests for an interview. From the list of 24 experts identified by the National Academies, we added 3 experts recommended to us and omitted 7 due to their unavailability, their concern that they may not have the expertise to respond to our questions, or their stated conflicts of interest. This resulted in a total of 20 aviation medical experts who represented private, public, and academic institutions. Fourteen of the experts are board certified by at least one of the American Board of Medical Specialties member boards, including 9 who are board certified in aerospace medicine. Eight of the 20 medical experts we interviewed are AMEs for the FAA, and 16 are pilots or have had pilot experience in the past. Two experts are from aviation authorities in Australia and New Zealand, and a third is from the United Kingdom. Each expert verified that they had no conflicts of interest in participating in our study. We conducted semi-structured interviews by telephone with the experts in August and September 2013 to solicit their views on FAA's medical standards and qualification policies, the medical application form, and FAA's communication with AMEs and pilot applicants. We also asked general questions about aviation medical policies, followed by specific questions about private pilots, where applicable.
We provided all medical experts with relevant background information prior to our interview, and we provided the option to bypass questions if they believed they were unqualified to respond in a professional capacity. Prior to conducting the interviews, we pretested the interview questions with three aviation medical experts (two were AMEs and one was also a pilot). We conducted pretests to make sure that the questions were clear and unbiased and that they did not place an undue burden on respondents. We made appropriate revisions to the content and format of the questionnaire after the pretests. Each of the 20 interviews was administered by one analyst and notes were taken by another. Those interview summaries were then evaluated to identify similar responses among the experts and to develop our findings. The analysis was conducted in two steps. In the first step, two analysts developed a code book to guide how they would analyze the expert responses. In the second step, one analyst coded each transcript of expert responses, and then a second analyst verified those codes. Any coding discrepancies were resolved by both analysts agreeing on what the codes should be. We examined responses to determine if there were systematic differences in responses between experts who were and were not pilots and between experts who were and were not AMEs. Because we found no significant differences between the pilot and AME groups, we reported the results for the experts as a whole rather than by the pilot or AME subgroups. We used indefinite quantifiers throughout the report—"few" (2-3 experts); "some" (4-9 experts); "half" (10 experts); "many" (11-15 experts); and "most" (16-19 experts)—to inform the reader of the approximate quantity of medical experts that agreed with a particular statement. We only reported on issues raised by at least two experts.
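The quantifier convention above amounts to a small lookup; the sketch below simply restates the report's own coding scheme in code form.

```python
# The report's indefinite-quantifier convention for its 20 experts.
def quantifier(n_agreeing):
    """Map a count of agreeing experts (out of 20) to the report's term.
    Issues raised by fewer than two experts were not reported."""
    if 2 <= n_agreeing <= 3:
        return "few"
    if 4 <= n_agreeing <= 9:
        return "some"
    if n_agreeing == 10:
        return "half"
    if 11 <= n_agreeing <= 15:
        return "many"
    if 16 <= n_agreeing <= 19:
        return "most"
    return None
```

For example, the statement "many experts (13 of 20) suggested simplifying the question" follows this mapping, since 13 falls in the "many" range.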
We interviewed individuals with broad aerospace-medicine expertise to provide their expert opinion on FAA's medical standards and qualification policies. While the experts provided their opinions on some specific standards, we do not believe that these opinions alone provide sufficient evidence to recommend any specific changes to FAA medical standards and policies. Rather, the information from these interviews provides us with an overall expert assessment of FAA's medical standards, policies, and practices. The results of our interviews represent opinions among the experts we interviewed but cannot be generalized to the larger population of aviation medical experts. See table 2, below, for a list of medical experts we interviewed. In addition to asking medical experts and other stakeholders about their view of FAA's communication of its medical certification requirements, we reviewed MedXPress.faa.gov (online application system) used by pilots to obtain a medical certificate. We reviewed the Pilot's Bill of Rights Notification and Terms of Service Agreement, Form 8500-8 (medical application form) and instructions, and links within the online application system, evaluating that information against federal government website-usability guidelines and against FAA's plain language guidelines. We evaluated the online application system based on the following criteria: (1) content—whether the website contained relevant and appropriate information users need—and (2) navigation—how easily users can find and access information on the site and move from one webpage to another, focusing on, for example, the clickable links within a website and limited reliance on scrolling. In addition, we reviewed various other website usability resources and criteria, including Usability.gov, to understand the key practices for making websites easy to use and helpful.
We evaluated the medical application form and its instructions based on criteria established by FAA's Office of Communications, including its Plain Language Tool Kit and its Writing Standards. These criteria include (1) writing principles—for example, whether the document is appropriate for its audience, its content is well organized, and it uses active voice, clear pronouns, and short sentences and paragraphs—and (2) formatting principles—for example, whether the document layout and use of headers and blank space conform with best practices to clearly present information to the reader. We conducted this performance audit from January 2013 through April 2014, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

FAA's medical standards and examination requirements by class of medical certificate are summarized below.

Examination frequency:
First-Class (Airline Transport Pilot): every 6 months if > 40 years old; every year if < 40 years old.
Second-Class (Commercial Pilot): every year, regardless of age.
Third-Class (Private Pilot): every 2 years if > 40 years old; every 5 years if < 40 years old.

Distant vision: 20/20 or better in each eye separately, with or without correction (first and second class); 20/40 or better in each eye separately, with or without correction (third class).

Near vision: 20/40 or better in each eye separately (Snellen equivalent), with or without correction, as measured at 16 inches.

Intermediate vision: 20/40 or better in each eye separately (Snellen equivalent), with or without correction at age 50 and over, as measured at 32 inches (first and second class); no requirement (third class).

Color vision: ability to perceive those colors necessary for safe performance of airman duties.

Hearing: demonstrate hearing of an average conversational voice in a quiet room, using both ears at 6 feet, with the back turned to the examiner, or pass one of the audiometric tests below.
Audiometric speech discrimination test: score at least 70% reception in one ear.
Pure tone audiometric test: unaided, with thresholds no worse than specified limits.

Ear, nose, and throat: no ear disease or condition manifested by, or that may reasonably be expected to be manifested by, vertigo or a disturbance of speech or equilibrium.

Electrocardiogram (EKG): not disqualifying per se; used to determine cardiac system status and responsiveness. Not routinely required.

Blood pressure: no specified values stated in the standards; the current guideline maximum value is 155/95.

Mental: no diagnosis of psychosis, bipolar disorder, or severe personality disorders. A diagnosis or medical history of "substance dependence" is disqualifying unless there is established clinical evidence, satisfactory to the Federal Air Surgeon, of recovery, including sustained total abstinence from the substance(s) for not less than the preceding 2 years. A history of "substance abuse" within the preceding 2 years is disqualifying. "Substance" includes alcohol and other drugs (i.e., PCP, sedatives and hypnotics, anxiolytics, marijuana, cocaine, opioids, amphetamines, hallucinogens, and other psychoactive drugs or chemicals).

Disqualifying conditions: unless otherwise directed by the FAA, the Examiner must deny or defer if the applicant has a history of (1) diabetes mellitus requiring hypoglycemic medication; (2) angina pectoris; (3) coronary heart disease that has been treated or, if untreated, that has been symptomatic or clinically significant; (4) myocardial infarction; (5) cardiac valve replacement; (6) permanent cardiac pacemaker; (7) heart replacement; (8) psychosis; (9) bipolar disorder; (10) personality disorder that is severe enough to have repeatedly manifested itself by overt acts; (11) substance dependence; (12) substance abuse; (13) epilepsy; (14) disturbance of consciousness without satisfactory explanation of cause; and (15) transient loss of control of nervous system function(s) without satisfactory explanation of cause.
FAA's AME training programs and communication tools include the following:

Clinical Aerospace Physiology Review for Aviation Medical Examiners (CAPAME) course and Medical Certification Standards and Procedures Training (MCSPT): prospective AMEs must complete these online courses as a prerequisite to becoming an AME.

One-week seminar: prospective AMEs generally must attend this seminar to be designated as an AME.

Refresher training: practicing AMEs must complete refresher training every three years to maintain their designation as an AME. AMEs generally fulfill this requirement either by attending an AME Refresher Seminar or by completing the online MAMERC course in lieu of attending an AME theme seminar. The MAMERC course can be used as a substitute for a theme seminar on alternate 3-year cycles, which extends the time between theme seminar attendance to six years. In addition to the AME training and continued professional refresher courses, AMEs generally maintain a proficiency requirement of at least 10 exams per year.

Guide for Aviation Medical Examiners: according to the Federal Air Surgeon, FAA policies go into effect when they are updated in the Guide for Aviation Medical Examiners, available online.

Medical Bulletin: published quarterly for aviation medical examiners and others interested in aviation safety and aviation medicine. The Bulletin is prepared by the FAA's Civil Aerospace Medical Institute, with policy guidance and support from the Office of Aerospace Medicine.

Aerospace Medical Certification Subsystem (AMCS): e-mail notifications are sent to AMEs and their staff through AMCS. AMCS support is available by phone, (405) 954-3238, or e-mail, [email protected].

FAA TV: http://www.faa.gov/tv is a central repository for FAA videos related to pilot medical requirements, among other topics.
For example, FAA has produced two MedXPress videos: http://www.faa.gov/tv/?mediaId=554 or http://www.faa.gov/tv/?mediaId=634. FAA also posts videos on its YouTube page, http://www.youtube.com/user/FAAnews/videos. FAA also uses Facebook and Twitter to communicate directly with pilots and others who choose to follow FAA through social media. Bimonthly publications promote aviation safety by discussing current technical, regulatory, and procedural aspects affecting the safe operation and maintenance of aircraft. FAA pilot safety brochures provide essential information to pilots regarding potential physiological challenges of the aviation environment so pilots may manage the challenges to ensure flight safety. Brochure topics include: Alcohol and Flying, Medications, Spatial Disorientation, Hearing and Noise, Hypoxia, Pilot Vision, Seat Belts and Shoulder Harnesses, Sleep Apnea, Smoke, Sunglasses for Pilots, Deep Vein Thrombosis and Travel, and Carbon Monoxide, among other topics. MedXPress support is available for pilots by phone, (877) 287-6731, or e-mail, [email protected], 24 hours each day. Appendix IV: FAA Form 8500-8 (Medical Application Form) In addition to the contact named above, the following individuals also made important contributions to this report: Susan Zimmerman, Assistant Director; Colin Fallon; Geoffrey Hamilton; John Healey; Linda Kohn; Jill Lacey; Maren McAvoy; and Sara Ann Moessbauer. | FAA developed its medical standards and pilot's medical-certification process to identify pilot applicants with medical conditions that may pose a risk to flight safety. The Pilot's Bill of Rights (P.L. 112-153) mandated GAO to assess FAA's medical certification standards, process, and forms. This report addresses: (1) FAA's medical standards, policies, and certification processes, along with medical experts' views on them, and (2) steps that FAA could take to promote private pilots' understanding of its medical requirements.
GAO reviewed statutes, regulations, FAA documents, and interviewed officials from FAA, NTSB, pilot associations, and 20 aviation medical experts primarily identified by the National Academies' Institute of Medicine. Experts were selected based on their type and depth of experience, including recognition in the aerospace-medicine professional community. GAO also interviewed FAA's medical certification division and evaluated the usability of FAA's online application system and the clarity of its application form against federal writing guidelines and best practices in website usability. Aerospace medical experts GAO interviewed generally agreed that the Federal Aviation Administration's (FAA) medical standards are appropriate and supported FAA's recent data-driven efforts to improve its pilot medical-certification process. Each year, about 400,000 candidates apply for a pilot's medical certificate and complete a medical exam to determine whether they meet FAA's medical standards. From 2008 through 2012, on average, about 90 percent of applicants have been medically certified by an FAA-designated aviation medical examiner (AME) at the time of their medical exam or by a Regional Flight Surgeon. Of the remaining applicants, about 8.5 percent have received a special issuance medical certificate (special issuance) after providing additional medical information to FAA. Approximately 1.2 percent were not medically certified to fly. According to an industry association, the special issuance process adds time and costs to the application process, in part, because applicants might not understand what additional medical information they need to provide to FAA. Officials from FAA's medical certification division have said that technological problems with the aging computer systems that support the medical certification process have contributed to delays in the special issuance process. 
FAA's medical certification division has identified about 50 potential technological enhancements to its internal computer systems that support the medical certification process, of which about 20 have been identified as high priority, but the division has not yet implemented them or developed a timeline to do so. By developing a timeline to implement the highest-priority enhancements, FAA would take another step toward expediting the certification process for many applicants hoping to obtain a special issuance. FAA recently established a data-driven process using historic medical and accident data that authorizes AMEs to certify a greater number of applicants with medical conditions who had previously required a special issuance. Officials expect this effort to allow more applicants to be certified at the time of their AME visit and to free resources at FAA to focus on applicants with higher-risk medical conditions. GAO's analysis and medical experts' opinions indicate that FAA could improve its communication with applicants by making its online application system--part of FAA's internal computer systems discussed above--more user-friendly and improving the clarity of the medical application form. Specifically, GAO found that the online application system requires applicants to scroll through a lengthy terms-of-service agreement and does not provide clear instructions, and that the application form contained unclear questions and terms that could be misinterpreted by the applicant. FAA could enhance its online application system by using links to improve navigability of the system and providing information that is more useful to applicants--for example, links to information about the risk that specific medical conditions pose to flight safety and any additional medical information applicants with those conditions would need to provide to FAA.
FAA could also improve the clarity of its medical application form by incorporating guidelines established in FAA's Writing Standards, including shorter sentences and paragraphs, active voice, and clear terms and questions. These clarifications could not only aid an applicant's understanding of the medical standards and requirements, but also may result in more accurate and complete information provided by applicants to better inform FAA's certification decisions. GAO recommends that FAA (1) develop a timeline for implementing high-priority technological improvements to the internal computer systems that support the medical certification process, and (2) enhance the online medical-application system by clarifying instructions and questions on the form and providing useful information. The Department of Transportation agreed to consider the recommendations. |
Six major domestic airlines have proposed alliances in 1998. These alliances are significant in scope but vary in extent, and their details are still emerging. In sum, the three alliances would control about 70 percent of domestic traffic, as measured by the number of passengers that board a plane—enplanements. Table 1 summarizes the size and characteristics of the proposed alliances. A key characteristic of two of the alliances is extensive code-sharing. According to officials at DOJ and DOT, code-sharing agreements are forms of corporate integration that fall between outright mergers, which involve equity ownership, and traditional arm’s length agreements between airlines about such things as how they will handle tickets and baggage. Continental Airlines and Northwest Airlines announced in January 1998 that they were entering into a “strategic global alliance” that would connect the two airlines’ route systems. Under this alliance, the airlines plan to code-share flights and include each of their respective code-share partners, such as America West, Alaska Airlines, and KLM Royal Dutch Airlines. In addition, the airlines will establish reciprocity between their frequent flier programs, which means that travelers who belong to both programs will be able to combine miles from both to claim an award on either airline. The airlines will also undertake other cooperative activities, including coordinating flight schedules and marketing. Certain aspects of the alliance agreement are contingent on the successful conclusion of negotiations with Northwest’s pilots’ union. Northwest plans to buy an equity share in Continental and place it in a voting trust. In April 1998, United Airlines and Delta Air Lines announced a tentative agreement to enter into a global alliance. The United-Delta alliance would be the largest alliance in terms of its market share of passengers, but it would have no exchange of equity. 
Under the terms of the agreement, the two airlines plan to engage in code-sharing arrangements, reciprocal frequent flier programs, and other areas of marketing cooperation. The alliance will be implemented on the airlines’ domestic routes and expanded internationally only after obtaining the concurrence of the airlines’ alliance partners and approval by governments, where applicable. Code-sharing on flights to Europe is not currently part of the plan for this alliance because of complex governmental and alliance issues, particularly linking two current competitors—Lufthansa and SwissAir—under the same alliance. According to airline officials, the code-sharing planned for the U.S. domestic markets will probably not occur before early 1999 and is contingent on the approval of pilots at both airlines. Also in April 1998, American Airlines and US Airways announced that they had agreed on a marketing relationship that would give the customers of each airline access to the other airline’s frequent flier program. In addition, the two airlines agreed to allow reciprocal access to all domestic and international club facilities and are working to make final arrangements to cooperate in other areas. The airlines expect to implement the linkages between the two frequent flier programs by late summer 1998. The alliance will also include code-sharing by the airlines’ regional partners, American Eagle and US Airways Express, and may seek broader code-sharing, pending pilots’ approval, at a later date. The chief executive officers of both airlines have also announced that if the other two alliances are implemented, they would seek a code-sharing arrangement as a competitive response. DOJ and DOT have somewhat different statutory authorities to review the proposed alliances. In 1989, DOT’s long-standing authority to review domestic mergers and alliances transferred to DOJ. 
DOJ’s Antitrust Division uses its authority under the Clayton, Sherman, and Hart-Scott-Rodino acts to examine domestic alliances in which a change in ownership or code-sharing occurs. If DOJ believes an alliance is anticompetitive in whole or part, it may seek to block the agreement in federal court. Alternatively, DOJ may negotiate a consent decree that would restructure the transaction to eliminate the competitive harm. DOJ has been reviewing the Northwest-Continental alliance proposal, which was announced in January 1998. In May 1998, DOJ indicated that it also is looking at the other two alliance proposals. DOT has stated that, later this year, it also intends to study the proposed alliances under its broader authority to maintain airline competition and protect against industry concentration and excessive market domination, as well as its specific authority to prohibit unfair methods of competition in the airline industry. It will coordinate with DOJ on the alliance reviews. DOT does not have prior approval authority over an alliance. On the basis of a recommendation from an administrative law judge, DOT could issue a cease-and-desist order. Alliances could benefit consumers by increasing the number of destinations and the frequency of flights available through each partner. The airlines believe that these increases will in turn attract new passengers, allowing them to offer more frequent flights, and, if demand is substantial, more new destinations. In an alliance that includes code-sharing, such as those proposed by United and Delta and Northwest and Continental, airline route networks are effectively joined, expanding possible routings by linking two different hub-and-spoke systems. 
The service provided through code-sharing replicates the “seamless” travel that would be provided by a single airline, known as “on-line service.” This type of service is generally preferred by airline passengers because it allows the convenience of single ticketing and check-in. Airlines have had interline agreements, which offer many of the same services, for some time. Interline agreements provide for the mutual acceptance by the participating airlines of passenger tickets, baggage checks, and cargo waybills, as well as establish uniform procedures in these areas. However, with on-line service, connecting flights between the two code-sharing airlines are shown in the computer reservation system as occurring on one airline. Officials for the airlines see advantages to on-line service for their customers. For example, with on-line service under the alliance proposed by United and Delta, airline passengers would be able to travel from Sioux Falls, South Dakota, to Bangor, Maine, on one airline’s code, even though neither airline currently serves the entire route between these two cities. In this example, a passenger could purchase a ticket from Delta and fly on a United plane from Sioux Falls to Chicago, then to Boston, and then, on a Delta flight, to Bangor. The passenger would earn Delta frequent flier miles for the entire trip. According to Northwest and Continental executives, their alliance would result in more than 2,000 new destinations that each airline could begin marketing as its own. The American-US Airways alliance plans to initially offer only limited code-sharing on regional airline flights, and not on each partner’s flights. In addition to new destinations, combining airlines’ hub-and-spoke route networks would also result in a substantial increase in the number of flight options that each airline could offer travelers to existing destinations. 
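The way code-sharing joins two hub-and-spoke networks can be illustrated with a toy graph computation. The routes below are a deliberately tiny, hypothetical fragment built around the Sioux Falls-Bangor example; real route networks are far larger:

```python
# Hypothetical route fragments for two alliance partners (edges are symmetric).
airline_a = {("Sioux Falls", "Chicago"), ("Chicago", "Boston")}   # United-style fragment
airline_b = {("Boston", "Bangor"), ("Atlanta", "Boston")}         # Delta-style fragment

def reachable_pairs(routes, max_legs=3):
    """Return all origin-destination pairs connectable in at most max_legs flights."""
    adj = {}
    for a, b in routes:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    pairs = set()
    for origin in adj:
        frontier, seen = {origin}, {origin}
        for _ in range(max_legs):
            frontier = {n for city in frontier for n in adj[city]} - seen
            seen |= frontier
        pairs |= {(origin, dest) for dest in seen if dest != origin}
    return pairs

# City pairs servable only by the joined (code-share) network, e.g.
# Sioux Falls-Bangor via Chicago and Boston.
combined = reachable_pairs(airline_a | airline_b)
new_pairs = combined - reachable_pairs(airline_a) - reachable_pairs(airline_b)
```

Counting `new_pairs` over full networks is what lies behind claims like the "more than 2,000 new destinations" Northwest and Continental cite; the sketch simply makes the mechanism concrete.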
Airlines contend that these expanded service options may also attract new passengers, which would then allow the airlines to offer even more frequent flights and, if demand is substantial, more new destinations. Airline officials also note that additional routing options can create some better on-line connections by substituting one airline’s connection for its partner’s when the partner has closer connection times for the customer. This could reduce travel time for some travelers. However, this benefit may be limited. For example, through the proposed alliance, Northwest and Continental officials predict shorter travel times for about 250,000 passengers, or 0.3 percent of the 81.3 million passengers potentially affected in 1997. Critics of code-sharing point out that the practice is inherently deceptive because consumers may believe they are flying on one airline only to discover that they are on another airline’s flight and because code-sharing does not necessarily expand consumer choice. These critics charge that airlines take advantage of consumers’ preferences for on-line connections by making an interline code-share connection appear in computer reservation systems to be an on-line connection. Code-share flights also have the advantage of being listed more than once on computer reservation systems. For example, in our examination of flight listings for 17 international city-pairs, we found that 19 percent of the time code-share flights were listed at least three times (once under each airline and another as an interline connection) on the first screen of the display, giving the partners a competitive advantage over other airlines operating on those routes. Even the former chairman of American Airlines and the current chairman of US Airways are reported as calling code-sharing deceptive for consumers, but have said that they will also propose a code-sharing alliance as a competitive response if the other alliances are approved. 
In addition to the anticipated benefits of code-sharing, all three of the proposed alliances would offer their passengers reciprocal frequent flier benefits—that is, earning and using frequent flier points on either alliance partner—and the reciprocal use of club facilities. Airline officials believe that these reciprocal benefits would increase the value of frequent flier programs by allowing consumers to pool their points and choose from more destinations and frequencies. One critic counters, however, that unless the airlines substantially increase the number of seats available for use by frequent fliers, the additional demand created by combining the programs will reduce the availability of seats and therefore the value of the frequent flier programs. While the proposed domestic alliances may benefit consumers, they also have the potential to decrease competition in dozens of nonstop markets and hundreds more one- and multiple-stop markets because, even though the alliances are not mergers, they may reduce the incentive for alliance partners to compete with each other. Many longer routes that include one or more stops are currently the most competitive because they offer the greatest number of airlines from which consumers can choose. These same routes are likely to see the largest reduction in choices among totally unaffiliated airlines and, correspondingly, the greatest potential loss in competition. Our prior work on mergers in the 1980s showed that when such competition declines, airfares tend to increase. Unlike international alliances, which largely extend domestic airlines’ route networks into areas that they could not enter by themselves, the networks of the domestic airlines generally overlap to a much greater extent, and therefore the proposed alliances pose a greater threat to competition.
Because travel to and from small and medium-sized cities usually involves a stop at one or more hubs, travelers to and from these cities potentially face reduced competition and higher fares. Existing operating barriers, such as constraints on the number of available takeoff and landing slots, are likely to make any increases in concentration problematic because such barriers reduce the likelihood that other airlines will be able to enter the market and provide a competitive response. The proposed alliances could harm consumers because they may reduce the incentive for alliance partners to compete with each other. If this were to happen, airfares would likely increase and service would likely decrease. We analyzed 1997 data on the 5,000 busiest domestic airport-pair origin and destination markets—markets for air travel between two airports—to determine how these markets could be affected by the proposed alliances. If the airlines do not continue to compete on prices, we found that the number of independent airlines could decline in 1,836 of these 5,000 markets, possibly affecting the fares paid by nearly 101 million passengers out of a total of 396 million passengers. For example, the number of effective competitors between Detroit Metro Wayne County Airport and Newark International Airport would decline from two to one if Northwest and Continental do not compete with each other. In 1997, this reduction in competition would have affected the roughly 429,000 passengers who traveled on that route. While the airlines have said that their alliances have relatively few nonstop routes that overlap, these routes often serve many passengers. For example, even though the proposed alliance between United and Delta has only 34 nonstop routes that overlap, the two airlines carry about 9.7 million passengers per year on these routes. 
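The market screen described above—counting independent competitors per airport pair before and after treating alliance partners as a single carrier—can be sketched in a few lines. The alliance groupings match the proposals, but the market data below are invented stand-ins, not GAO's 1997 dataset:

```python
# Map each carrier code to its proposed alliance; unallied carriers stand alone.
alliances = {
    "NW": "NW-CO", "CO": "NW-CO",   # Northwest-Continental
    "UA": "UA-DL", "DL": "UA-DL",   # United-Delta
    "AA": "AA-US", "US": "AA-US",   # American-US Airways
}

# Hypothetical airport-pair markets and the carriers serving each one.
markets = {
    ("DTW", "EWR"): {"NW", "CO"},             # two competitors collapse into one
    ("ATW", "DCA"): {"DL", "UA", "NW", "TW"}, # four collapse into three
    ("DEN", "MCI"): {"WN", "F9"},             # unaffected by the alliances
}

def competitors(carriers, grouped=False):
    """Count independent competitors, optionally merging alliance partners."""
    if grouped:
        carriers = {alliances.get(c, c) for c in carriers}
    return len(carriers)

# Markets where the number of independent competitors would decline.
affected = [m for m, c in markets.items()
            if competitors(c, grouped=True) < competitors(c)]
```

Run over all 5,000 busiest airport pairs, a count of this kind underlies figures such as the 1,836 potentially affected markets cited above.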
Moreover, we believe that it is important to focus on the alliances’ potential harm to competition in the hundreds of additional one-stop and two-stop markets that have overlapping routes. These routes account for most of the 1,836 markets that could be negatively affected by the proposed alliances. In our prior work on the TWA-Ozark merger, we found that after the merger, the total number of cities with direct service declined and competition decreased in many markets. The number of routes served by two or more airlines fell by 44 percent, and fares increased between 7 and 12 percent in constant dollars within 1 year. To the extent that the proposed alliances tend to behave as a single entity, similar results could occur. In contrast to this potential for harm to consumers, competition could increase in 338 of the 5,000 largest markets, affecting about 30 million passengers per year, according to our analysis of 1997 data. In these markets, two alliance partners that individually have a market share of less than 5 percent would combine to form a potentially more effective competitor against other airlines on these routes. However, the number of markets where this could occur is substantially less, and they serve substantially fewer passengers, than the markets where consumers could be harmed by the proposed alliances. Table 2 summarizes the market and passenger information for the proposed alliances. In our prior work, we stated that some international alliances may bring benefits to passengers because international and domestic airlines are able to extend their networks. However, domestic alliances are more likely than international alliances to cause concerns about competition because they often have many more overlapping routes. In a typical international alliance, a domestic airline with a domestic route network will form an alliance with a foreign airline that has a route network in its home territory. 
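The constant-dollar fare increases cited from the TWA-Ozark work come from deflating nominal fare changes by inflation. A one-function sketch, with a 4 percent inflation rate assumed purely for illustration:

```python
def real_fare_change(nominal_change, inflation):
    """Convert a nominal fare change to constant-dollar (real) terms."""
    return (1 + nominal_change) / (1 + inflation) - 1

# A 12% nominal fare increase during a year with 4% inflation (assumed figure)
# works out to roughly a 7.7% increase in constant dollars.
real = real_fare_change(0.12, 0.04)
```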
These alliances frequently contain only a few routes where the networks overlap on either a nonstop or a one-stop basis. As a result, these alliances can benefit consumers by extending the route structure for both airlines without posing a threat to competition on overlapping routes. For example, prior to the alliance between Northwest Airlines and KLM, those airlines had only two nonstop routes that overlapped, and because neither airline had a route network in the home territory of the other, there was no significant overlap of one-stop routes. In contrast, domestic airlines’ route networks tend to overlap much more. As a result, domestic alliances are potentially more harmful to consumers because competition could decline on many more routes. Service to and from small and medium-sized cities may also be harmed because the number of competing airlines would likely decline in many cases. Most routes to and from these cities involve changing planes at one or more hubs. The number of effective competitors may decline in these markets when such passengers have more than one choice of hub airports. For example, currently, four airlines travel between Appleton, Wisconsin, and Reagan Washington National Airport. Two of those airlines are Delta and United. If these airlines were to compete less because of their alliance, passengers traveling between these two cities could be harmed. Barriers that restrict entry at key airports may increase the potential for harm from the proposed alliances because they remove the threat that high fares or poor service will attract competition from established or new entrant airlines. As we have reported in the past, barriers such as slot controls—limits on the number of takeoffs and landings—at four airports in Chicago, New York, and Washington, D.C., and long-term exclusive-use gate leases at six additional airports have led to higher fares on routes to and from these airports. 
Such barriers make entry at those airports difficult because the incumbent airlines frequently control access to the airport’s gates. Nonincumbent airlines generally would have to sublease gates from the incumbent airline, often at less preferable times and at a higher cost than the incumbent pays. At two of the four slot-controlled airports—New York’s LaGuardia and Washington’s Reagan National—the levels of concentration by the existing dominant airline would increase substantially following the alliance. The increase at Chicago’s O’Hare and New York’s Kennedy, on the other hand, would be much more modest. Similarly, with the six airports that are gate-constrained, because the dominant airlines already control such large percentages of the available gates, the increases in concentration that would occur following the alliances are also relatively small, averaging less than 2 percent. (See table 3.) To the extent that there is an increased concentration of slots and gates, entry may become more difficult, which would further limit competition on routes to and from these airports and likely lead to higher airfares. Our previous work has shown that airlines that dominate traffic at an airport generally charge higher fares than they do at airports that they do not dominate. We have also reported that several airlines’ sales and marketing practices may make competitive entry more difficult for other airlines. Practices such as airlines’ frequent flier plans and special travel agent bonuses for booking traffic on an incumbent airline encourage travelers to choose one airline over another on the basis of factors other than the best fares. Such practices may be most important if an airline is already dominant in a given market or markets. 
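The concentration comparison summarized in table 3 amounts to re-aggregating carrier-level slot or gate shares at the alliance level and comparing the dominant share before and after. A minimal sketch, with invented slot shares:

```python
# Hypothetical shares of takeoff/landing slots at one slot-controlled airport.
slot_shares = {"US": 0.38, "DL": 0.14, "UA": 0.10,
               "AA": 0.20, "NW": 0.03, "other": 0.15}

# Alliance groupings relevant to this airport (illustrative subset).
alliances = {"UA": "UA-DL", "DL": "UA-DL", "AA": "AA-US", "US": "AA-US"}

def grouped_shares(shares):
    """Aggregate carrier-level slot shares into alliance-level shares."""
    out = {}
    for carrier, share in shares.items():
        group = alliances.get(carrier, carrier)
        out[group] = out.get(group, 0.0) + share
    return out

before = max(slot_shares.values())                 # share of the dominant carrier
after = max(grouped_shares(slot_shares).values())  # share of the dominant alliance
```

The gap between `before` and `after` is the increase in concentration that determines how much harder entry becomes at a constrained airport.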
Together, operating and marketing barriers increase the likelihood that increases in concentration will harm consumers by discouraging entry by other established or new entrant airlines, thus allowing airlines to raise their fares or reduce services. Many dimensions of each of the proposed alliances deserve close scrutiny so that decisionmakers can assess whether the potential benefits of each particular alliance outweigh its potential harmful effects. Though not an exhaustive list, we believe analysis of several key issues will help determine the extent to which each of the proposed alliances may be beneficial or detrimental, overall, to consumers. These key issues are how substantial the benefits to consumers may be, whether incentives to compete are retained, what the potential impact of the proposed alliances on certain classes of consumers and certain communities is, how international travel may be affected, and what the overall implications of the proposed alliances for competition may be. First, DOJ and DOT need to scrutinize each alliance’s claims about the benefits each brings to the public, including the underlying assumptions that each alliance is using to estimate consumer benefits. Some of the estimated increases for the growth in traffic may depend on questionable assumptions about how much new traffic can be generated by marginal additions in the frequency of flights and the number of destinations or about how many additional travelers will choose to fly to destinations through a code-sharing arrangement that is currently available through an interline connection. In addition, DOT and DOJ need to assess the competitive response by other airlines or other alliances to determine how much new traffic may be generated rather than how much passengers shift from one airline or alliance to another. Second, it is important for decisionmakers to examine the issue of whether each alliance’s partners will continue to compete with one another on price.
The amount of competition may vary by alliance. Officials with United, Delta, Northwest, and Continental told us that, because the airlines will remain separate companies, they expect to set prices independently and thus compete for each passenger. The three alliances have not specifically explained their financial arrangements or how they will ensure that price competition will be preserved. If the six airlines do compete vigorously on pricing, then this competition may alleviate many of the concerns about whether consumers would be harmed by dominant airlines in particular markets using their monopoly power to raise fares. On the other hand, if the alliances reduce incentives to compete on prices, then DOJ and DOT will need to carefully examine the overlap in the alliance partners’ route structures and assess whether an alliance would create a significant number of routes with less, or no, competition. Determining the incentives will, at a minimum, likely require a review of the exact terms of the alliances’ agreements, which may be contained in proprietary documents that DOJ and DOT have access to. We also believe that a number of other issues will be important for DOT and DOJ to analyze in their reviews of these proposed alliances. These include the following: The potential impact of the proposed alliances on certain classes of consumers and certain communities. Some business travelers have recently complained about fare increases, and consumers from some small and medium-sized communities have not experienced the lower fares and/or improved services that deregulation has delivered to other parts of the country. It will be important for policymakers to determine whether these alliances could exacerbate or ameliorate these fare and/or service problems. The impact each alliance could have on consumers who travel internationally. 
Both of the code-sharing alliances have indicated that eventually they would like to include their international partners, thereby allowing them to offer improved service to international destinations through such benefits as new service, increased flight frequency, and better connections. International code-sharing alliances are a way of opening foreign markets to U.S. airlines that otherwise would not be able to serve these markets because of restrictions in the bilateral agreements that govern service between countries. Northwest, United, and Delta have international strategic alliances that not only feature code-sharing and other types of integration but that also have immunity from U.S. antitrust laws. This immunity has been granted in the framework of Open Skies agreements, whereby all bilateral restrictions are eliminated. We have found that partners in these strategic code-sharing agreements have had increased traffic and revenues, and that passengers benefit through decreased layover times. However, we also have found that insufficient data exist to determine whether consumers are paying higher or lower fares as a result of the alliances and what effect the alliances will have on competition and fares in the long term. Given the increasing size and scope of the alliances’ international reach, the questions we raised in our earlier report about the alliances’ effect on fares and competition could become even more urgent. The potential sources of new competition if any combination, or all, of the alliances move forward. As we mentioned earlier, the three alliances would represent about 70 percent of the domestic aviation industry. Other industries, such as automobiles, have been similarly dominated by a few firms. That industry was widely regarded as not being competitive until new sources of competition emerged from outside the domestic industry. 
As we noted in our previous work, new airlines may be at a disadvantage in competing with the large alliances because of the incumbents’ large route networks and other barriers resulting from their marketing practices and slot and gate constraints at major U.S. airports. Should any combination, or all three, of the alliances go forward, there may be considerable uncertainty about the ability of new airlines to compete in many markets. The same may hold true for existing U.S. airlines that lack alliance partners, whether they are older, established airlines, such as Trans World Airlines, or new entrant airlines, like Frontier. Mr. Chairman, this concludes my prepared statement. Our work was conducted in accordance with generally accepted government auditing standards. To provide data for this testimony, we contracted with Data Base Products, Inc. Data Base Products, Inc., used information submitted by all U.S. airlines to DOT for 1997 and produced various tables to our specifications. Data Base Products, Inc., makes certain adjustments to these data to correct for deficiencies, such as those noted by the DOT’s Office of the Inspector General. We did not review the company’s specific programming but did discuss with company officials the adjustments that they make. We also interviewed officials with DOT, DOJ, and each of the six major airlines contemplating domestic alliances. We would be pleased to respond to any questions that you or any Member of the Subcommittee may have. Domestic Aviation: Service Problems and Limited Competition Continue in Some Markets (GAO/T-RCED-98-176, Apr. 23, 1998). Aviation Competition: International Aviation Alliances and the Influence of Airline Marketing Practices (GAO/T-RCED-98-131, Mar. 19, 1998). Airline Competition: Barriers to Entry Continue in Some Domestic Markets (GAO/T-RCED-98-112, Mar. 5, 1998). Domestic Aviation: Barriers Continue to Limit Competition (GAO/T-RCED-98-32, Oct. 28, 1997).
Airline Deregulation: Addressing the Air Service Problems of Some Communities (GAO/T-RCED-97-187, June 25, 1997). International Aviation: Competition Issues in the U.S.-U.K. Market (GAO/T-RCED-97-103, June 4, 1997). Domestic Aviation: Barriers to Entry Continue to Limit Benefits of Airline Deregulation (GAO/T-RCED-97-120, May 13, 1997). Airline Deregulation: Barriers to Entry Continue to Limit Competition in Several Key Domestic Markets (GAO/RCED-97-4, Oct. 18, 1996). Domestic Aviation: Changes in Airfares, Service, and Safety Since Airline Deregulation (GAO/T-RCED-96-126, Apr. 25, 1996). Airline Deregulation: Changes in Airfares, Service, and Safety at Small, Medium-Sized, and Large Communities (GAO/RCED-96-79, Apr. 19, 1996). International Aviation: Airline Alliances Produce Benefits, but Effect on Competition Is Uncertain (GAO/RCED-95-99, Apr. 6, 1995). Airline Competition: Higher Fares and Less Competition Continue at Concentrated Airports (GAO/RCED-93-171, July 15, 1993). Computer Reservation Systems: Action Needed to Better Monitor the CRS Industry and Eliminate CRS Biases (GAO/RCED-92-130, Mar. 20, 1992). Airline Competition: Effects of Airline Market Concentration and Barriers to Entry on Airfares (GAO/RCED-91-101, Apr. 26, 1991). Airline Competition: Industry Operating and Marketing Practices Limit Market Entry (GAO/RCED-90-147, Aug. 29, 1990). Airline Competition: Higher Fares and Reduced Competition at Concentrated Airports (GAO/RCED-90-102, July 11, 1990). Airline Deregulation: Barriers to Competition in the Airline Industry (GAO/T-RCED-89-65, Sept. 20, 1989). Airline Competition: Fare and Service Changes at St. Louis Since the TWA-Ozark Merger (GAO/RCED-88-217BR, Sept. 21, 1988). Competition in the Airline Computerized Reservation Systems (GAO/T-RCED-88-62, Sept. 14, 1988). Airline Competition: Impact of Computerized Reservation Systems (GAO/RCED-86-74, May 9, 1986). 
Airline Takeoff and Landing Slots: Department of Transportation’s Slot Allocation Rule (GAO/RCED-86-92, Jan. 31, 1986). Deregulation: Increased Competition Is Making Airlines More Efficient and Responsive to Consumers (GAO/RCED-86-26, Nov. 6, 1985). | GAO discussed the potential impact of the alliances proposed by the nation's six largest airlines, focusing on the competitive implications of the proposed alliances, including: (1) their potential benefits to consumers; (2) their potential harm to consumers; and (3) the issues that policymakers need to consider in evaluating the net effects of the proposed alliances.
GAO noted that: (1) the primary potential benefits of the proposed alliances for consumers, according to airline officials, are the additional destinations and frequencies that occur when alliance partners join route networks by code-sharing; (2) with code-sharing, an airline can market its alliance partner's flights as its own and, without adding any planes, increase the number of destinations and the frequency of the flights it can offer; (3) airline officials also predict that increased frequencies and connection opportunities will spur additional demand, allowing for even more frequent flights and additional destinations; (4) the primary source of potential harm to consumers from the proposed alliances is the possibility that they will reduce competition on hundreds of domestic routes if the alliance partners do not compete with each other or compete less vigorously than they did when they were unaffiliated; (5) GAO analyzed 1997 data on the 5,000 busiest domestic airport-pair origin and destination markets--markets for air travel between two airports--to determine how these markets could be affected by the proposed alliances; (6) if all three alliances occur, GAO found that the number of independent airlines could decline on 1,836 of the 5,000 most frequently traveled domestic airline routes and potentially reduce competition for about 100 million of the 396 million domestic passengers per year; (7) in weighing the net effects of the proposed alliances, policymakers in the Department of Justice and the Department of Transportation have a difficult task because each alliance varies in its level of integration and in the scope and breadth of the combined networks; (8) however, GAO believes that if several key issues are addressed, policymakers will be better able to determine whether an alliance benefits consumers overall; (9) the first issue is whether airline partners' assumptions concerning the additional traffic and other benefits generated by the alliance 
are realistic; (10) second, it will be critical to determine if an alliance retains or reduces incentives for alliance partners to compete on price; and (11) if an alliance agreement reduces the incentives for partners to compete with fares in markets they both serve, then policymakers may want to examine the overlap in the alliance partners' route structures to determine whether that alliance would lead to a significant number of routes with fewer independent airlines. |
Mexico’s accession to the General Agreement on Tariffs and Trade (GATT) in 1986 initiated a process of market liberalization that provided significant opportunities for U.S. agricultural exports. By the early 1990s, Mexico had become the fastest growing export market for U.S. agricultural products, and the United States enjoyed a substantial net agricultural trade surplus with Mexico. U.S. agricultural producer groups were generally supportive when the United States and Mexico entered into negotiations aimed at creating a free trade agreement, which eventually resulted in NAFTA. In negotiating NAFTA, the United States sought to gain additional market access for its agricultural exports to Mexico by eliminating Mexican agricultural tariffs. Mexico’s agricultural tariffs averaged 10 percent, compared to average U.S. tariffs of 4.5 percent at the time NAFTA was being negotiated. NAFTA called for Mexico to eliminate tariffs on most commodities immediately upon implementation of the agreement in 1994 and to do away with nontariff trade barriers, most notably its system of import licensing requirements. Some products that Mexico considered to be particularly sensitive commodities were granted transition periods for tariff elimination to allow time for Mexican producers to adjust to increased import competition. NAFTA sets forth the specific schedules for tariff elimination and places commodities in staging categories, or “baskets,” that define when the commodities should enter the market duty-free. In general, tariffs for products that were granted transition periods were reduced in equal increments over a specified time period (see table 1). 
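For commodities granted transition periods, the equal-increment reduction just described is simple arithmetic. The short sketch below is only an illustration of that kind of schedule; the function name and the 10 percent starting rate are hypothetical, not taken from the agreement itself:

```python
def phased_tariff(base_rate, transition_years, year):
    """Tariff rate (in percent) in a given year of a linear,
    equal-increment phase-out; year 0 is the first year of the
    transition, and the rate reaches zero when the period ends."""
    if year >= transition_years:
        return 0.0
    return base_rate - (base_rate / transition_years) * year

# A 10 percent tariff on a 4-year schedule falls by 2.5 points a year.
print([phased_tariff(10.0, 4, y) for y in range(5)])  # → [10.0, 7.5, 5.0, 2.5, 0.0]
```

A back-loaded schedule, as used for corn and poultry, would instead concentrate most of the reductions in the final years rather than spreading them evenly.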
However, for certain sensitive commodities (such as corn and poultry) the greater part of tariff reductions was postponed until the final years of the transition period, a practice referred to as “back-loading.” NAFTA also called for Mexico and the other NAFTA partners to replace quantitative import restrictions with tariff rate quotas (TRQs). Products subject to TRQs enter the importing market duty-free up to the level of the quota. Once the duty-free level (quantitative limit) is reached, a duty is imposed on the over-quota imports. NAFTA partner countries committed to gradually expanding the duty-free quota for the commodities, reducing the over-quota tariff charged during the transition period, and ultimately eliminating the TRQs. As with the phasing out of tariffs, NAFTA TRQs follow the same scheduled transition periods of 4, 9, and 14 years. In addition to providing for the elimination of tariff and nontariff trade barriers, NAFTA also established disciplines for the application of trade measures to counter threats or harm to domestic producers and consumers, such as sanitary and phytosanitary (SPS) requirements, antidumping and countervailing duties, and safeguard actions. For example, NAFTA requires that SPS measures must be science-based, nondiscriminatory, and transparent, and that they are applied only to the extent necessary to achieve a party’s appropriate level of protection. Similarly, under NAFTA the parties are required to follow their domestic legal procedures when applying antidumping or countervailing duties measures in response to unfair foreign trade practices. NAFTA also calls for safeguards to be applied through fair and open administrative procedures and for compensation to be provided for the affected countries. Under NAFTA, a party’s right to apply a safeguard terminates at the end of an agreed-upon transition period. Thereafter, a party may apply the safeguard only with the consent of the exporting party. 
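The TRQ mechanism described above (duty-free entry up to the quota, an over-quota tariff beyond it) reduces to a single calculation. A minimal sketch, using entirely hypothetical quantities, rates, and values:

```python
def trq_duty(import_qty, quota, over_quota_rate, unit_value):
    """Duty owed under a tariff rate quota: quantity within the
    duty-free quota pays nothing; quantity beyond it pays the
    over-quota tariff on its value."""
    over_quota_qty = max(0.0, import_qty - quota)
    return over_quota_qty * unit_value * over_quota_rate

# Hypothetical: 120,000 tons imported against a 100,000-ton quota,
# a 50 percent over-quota tariff, and a value of $200 per ton.
print(trq_duty(120_000, 100_000, 0.50, 200.0))  # → 2000000.0
```

Under the NAFTA transition schedules, the quota would grow and the over-quota rate would shrink each year until the TRQ disappeared entirely.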
Moreover, NAFTA allows the party applying a safeguard to impose duties only up to the level of its Most Favored Nation duties. Many studies projected that Mexico would benefit from improved access to U.S. markets for its agricultural products under NAFTA. However, some observers raised concerns about the difficulties Mexico’s more traditional agricultural producers might encounter as the country opened up to U.S. products. With more than 22 percent of the population dependent on the sector, but with many farmers unable to compete under free market conditions, agriculture is a significant yet vulnerable area of the Mexican economy. Differences in perceived opportunities and challenges resulted from the three distinct types of agricultural producers present in Mexico. Mexico’s agriculture sector consists of a large number of small traditional farmers, some medium size commercially oriented growers, and a lesser number of large modern producers. These groups of farmers differ in many respects, including farm size, access to capital, types of crops produced, and productivity. Small subsistence farmers produce primarily corn (maize), often at subsistence levels for self-consumption, in small parcels of less than 5 hectares of mostly rain-fed land. Corn is also among the major U.S. agricultural exports to Mexico, which is perceived by some to be in competition with the production of small subsistence farmers. Medium size farmers are involved in commercially oriented operations; however, they face relatively high cost structures, which are marked by scarcity of capital and insufficiently developed marketing infrastructure. Some believe that medium size commercial farmers face the greatest impact from import competition and structural change. On the other hand, Mexico’s large commercial farmers usually have larger plots of irrigated land and a higher productivity level. 
They have better access to capital, including direct investment and commercial lending from abroad. Mexican commercial farmers are also typically involved in production of higher-valued commodities, notably fresh fruits and vegetables, which have undergone dynamic export growth since the early 1990s. Agricultural trade expansion since NAFTA’s implementation generally has been consistent with expectations. While U.S. trade data indicates Mexican agricultural exports have done well under the agreement, some observers maintain NAFTA has had negative consequences for small farmers. For example, one study asserts that employment opportunities for Mexican subsistence farmers have declined under NAFTA. According to this study, imports of cheaper corn have contributed to lower corn prices in Mexico, which has led medium size farms to cut back their demand for labor supplied by subsistence farmers. However, a December 2003 World Bank report noted that NAFTA did not bring about many of the anticipated negative effects on poor subsistence farmers and had not had a devastating effect on Mexican agriculture as a whole. This research notes that as consumers, Mexican farmers may have benefited from lower corn prices. In addition, corn production in Mexico has not declined, but rather had increased by about 14 percent since NAFTA was enacted, to a record high in 2003. Other research conducted by several Mexican academic institutions concluded that NAFTA had resulted in benefits for the country’s farm sector, including increased agricultural exports and greater investment in agricultural production. As implementation of NAFTA has progressed over the past decade, Mexico has phased out tariffs on agricultural imports in accordance with the agreement’s scheduled transition periods of 4, 9, and 14 years and has done away with a key nontariff trade barrier, import licensing requirements. U.S. 
agricultural exporters have benefited both from this process of continued trade liberalization under NAFTA and from the additional assurances provided through the NAFTA dispute settlement mechanism. Exports to Mexico have increased significantly since NAFTA, continuing a trend of export growth that started in the mid-1980s. However, despite the progress made, some U.S. agricultural products continue to experience difficulties gaining access to the Mexican market, typically due to antidumping duties, SPS requirements, safeguards, and other trade measures Mexico has put in place. These difficulties are not unlike challenges U.S. agricultural exports face in other major markets, such as Canada or Japan. Although Mexico had taken several steps to allow greater access to its markets prior to 1994, NAFTA provided a legal agreement and framework through which further market liberalization could take place. Further, NAFTA’s dispute settlement mechanism provided U.S. exporters with additional rules and processes for resolving disputes that did not exist prior to NAFTA. Mexico has thus far implemented its NAFTA commitments by reducing or eliminating tariffs according to schedule and removing nontariff barriers, resulting in greater access for U.S. agricultural goods. In the latest round of tariff eliminations (on Jan. 1, 2003), Mexico eliminated tariffs on more than a dozen commodity imports from its NAFTA partners, including products important to U.S. producers such as rice, soy oil, and pork. With that round complete, Mexico had eliminated tariffs or TRQs, in accordance with its commitments under NAFTA, on all but three commodities: corn, dry beans, and milk powder. Two of these commodities, corn and beans, are considered particularly sensitive commodities for Mexican agriculture because they are among the principal crops of small Mexican farmers and are also staples of the Mexican diet. 
TRQs on these commodities are scheduled for full elimination by the end of the 14-year transition period in 2008. In addition, Mexico has done away with import licensing requirements, a key nontariff barrier. These import licensing requirements functioned, in effect, as a type of quota, since only the volume of goods authorized under the import license could be imported, and they were intended to protect Mexican producers of agricultural commodities that were sensitive to foreign competition. Prior to NAFTA, many major U.S. agricultural exports to Mexico, such as poultry, dairy, wheat, corn, and dry beans, were subject to import licensing requirements. NAFTA permitted Mexico to use phased-in tariff elimination as a mechanism to transition away from the use of import licensing requirements. Under the agreement, Mexico immediately did away with import licensing requirements and converted them to either regular tariffs or TRQs. Additionally, NAFTA set a schedule to gradually eliminate both the tariffs and TRQs. NAFTA also benefits U.S. exporters by providing them with a formal mechanism for resolving disputes. Under the agreement, disputes that cannot be resolved through consultations between member countries may be brought before impartial, independent panels. Since both the United States and Mexico are members of the WTO as well as NAFTA, the United States can file trade grievances under the dispute settlement mechanism provided by either agreement. According to United States Trade Representative (USTR) officials, the United States generally would utilize the NAFTA dispute settlement mechanism if it determined that Mexico is in violation of a provision that is specific to NAFTA and is not covered under the WTO. These officials explained that the United States would rely on the WTO’s dispute settlement process if the matter also affected WTO members that are not members of NAFTA. 
According to information provided by USTR, to date, the United States has brought only one agricultural dispute settlement case against Mexico under NAFTA, compared to four under the WTO process. According to a U.S. Department of Agriculture (USDA) report, most trade disputes are resolved through informal discussions or consultations involving government and private sector representatives, rather than formal dispute settlement procedures. For example, through government-to-industry negotiations, a minimum price agreement was established for U.S. apples, and through government-to-government negotiations, an agreement was reached to modify Mexico’s dry bean quota auctions. In addition, through industry negotiations, a dispute involving U.S. and Mexican grape industry labeling regulation was resolved. The use of industry negotiations also deterred the Mexican cattle industry from filing an antidumping petition against imports of U.S. cattle. Another alternative dispute settlement mechanism is the NAFTA Advisory Committee on Private Commercial Disputes Regarding Agricultural Goods, which recommends less adversarial resolutions to agricultural contract or commercial disputes. Since NAFTA’s implementation, total U.S. agricultural exports to Mexico have nearly doubled, rising from $4.1 billion in 1993—the last year prior to NAFTA’s implementation—to $7.9 billion in 2003 (adjusted for inflation). Between 1993 and 2003, the value of U.S. exports to Mexico grew on average by 17.4 percent annually. By comparison, U.S. agricultural exports to the world grew at an average annual rate of 2.3 percent over the same time period. U.S. exports to Mexico have comprised an increasingly larger share of the United States’ total agricultural exports; Mexico’s share grew from about 8 percent in 1993 to about 13 percent in 2003. 
Moreover, according to USDA’s export strategy for Mexico, the full implementation of NAFTA, a growing urban population, increasing per capita income, and lack of arable land make Mexico an excellent long-term prospect for U.S. agricultural products. U.S. agricultural exports to Mexico already underwent significant growth after Mexico joined GATT in 1986 and began opening its market to foreign trade. By the early 1990s, Mexico attained its position as the third largest importer of U.S. agricultural products, after Canada and Japan. The overall increases in agricultural exports to Mexico since NAFTA began came about despite the collapse of the Mexican peso in late 1994, which harmed Mexican purchasing power for foreign goods and triggered an economic downturn. Beginning in about 1996, Mexico’s economy began a recovery, and U.S. exports to Mexico expanded accordingly. Not all increases in exports to Mexico can be attributed to NAFTA because factors such as economic growth, weather, exchange rates, domestic supply, and population growth also affect Mexico’s demand for U.S. products. U.S. imports of agricultural products from Mexico have also increased since NAFTA, rising from about $2.9 billion in 1993 to $6.3 billion in 2003 (adjusted for inflation). Agricultural imports from Mexico increased at an average annual rate of 8.5 percent over the same time period. In 2003, agricultural imports from Mexico accounted for about 13 percent of the total value of U.S. agricultural imports from the rest of the world. Figure 1 shows the total value of U.S.–Mexico agricultural trade. Notwithstanding the potential effects of external factors on trade, NAFTA’s impact on U.S. exports, particularly for certain key commodities, generally appears to have been positive. Earlier studies generally concluded that the agreement would increase U.S. export opportunities for grains, oilseeds, dairy products, tree nuts, and meats. Trends in the trade of the largest groupings of U.S. 
agricultural products have been generally consistent with these predictions. For example, the United States increased exports of animal products, grains and feeds, fruits and vegetables, and oilseeds to Mexico since NAFTA. From NAFTA’s implementation in 1994 until 2003, the value of exports of these key groups of products underwent average annual increases of between 3.2 percent (oilseeds) and 16 percent (grains and feeds) (see fig. 2). Some U.S. agricultural products continue to experience difficulties gaining access to the Mexican market due to the application of nontariff trade measures. Although Mexico removed import licensing requirements, a key nontariff trade barrier prior to NAFTA, it still applies several nontariff measures that affect imports from the United States. According to USDA, the nontariff measures that present the most significant barriers to market access for U.S. agricultural exports have been Mexico’s application of antidumping duties, SPS requirements, and safeguards. In addition to these trade measures, Mexico has put in place a product tax on all beverages containing sweeteners other than sugar, which has basically eliminated the Mexican market for high-fructose corn syrup (HFCS). However, these impediments are not unlike market access challenges experienced by U.S. agricultural exports to other major trade partners, including Canada, Japan, and the European Union. The following section presents information on the key nontariff barriers and examples of U.S. agricultural commodities that have encountered market access challenges in Mexico. The information is based, in part, on our analysis of market access issues related to seven selected agricultural commodities: apples, beef, corn, HFCS, pork, poultry, and rice. Our analysis of each of these commodities is presented in greater detail in appendix II. The use of antidumping duties continues to pose a barrier to U.S. agricultural exports to Mexico. 
The United States has raised complaints in the WTO regarding Mexico’s application of its antidumping laws on commodities such as hogs, rice, and beef. The United States requested a WTO panel with respect to rice and has argued that Mexico’s imposition of antidumping duties is inconsistent with the WTO Antidumping Agreement. Mexican officials at the Ministry of the Economy (Secretaría de Economía) stated that Mexico’s application of antidumping measures to U.S. agricultural imports was based on an objective and intensive investigation that determined harm. According to representatives from some U.S. producer groups and a former senior Mexican government official, however, there may also be other considerations that affect Mexico’s antidumping decisions. For example, U.S. apple producers question the timing of Mexico’s imposition of antidumping duties on apples in August 2002, only a few months before NAFTA’s tariff rate quota on apples was scheduled to be lifted on January 1, 2003. Additionally, these observers told us that Mexico’s antidumping actions against certain U.S. agricultural imports are, to some extent, a response to U.S. restrictions on Mexican exports to the United States. NAFTA establishes a number of general requirements to ensure that SPS measures are only used to the extent necessary to protect plant, animal, and human health and not as a means to protect domestic producers from competition. As mentioned earlier, NAFTA calls for these measures to be science-based, nondiscriminatory, and transparent and requires that the measures be applied only to the extent necessary to achieve an appropriate level of protection. Mexican officials responsible for plant and animal health protection maintain that Mexico’s SPS measures are based on sound science. However, USDA officials and industry group representatives have raised concerns about the legitimacy of some SPS measures imposed by Mexico on U.S. 
agricultural imports as it eliminates tariffs and tariff rate quotas. U.S. producer groups told us that they believe Mexico sometimes uses SPS measures as a means to retaliate for U.S. policies against its agricultural exports to the United States. For example, some U.S. producer groups contend that in order to protest U.S. phytosanitary controls on imports of avocados from Mexico, Mexico’s agricultural authorities initiated a new policy against U.S. cherries requiring cherry exports to Mexico to undergo a much more rigorous inspection process at the border than is warranted. As a result, U.S. exports of cherries to Mexico dropped significantly because U.S. exporters wanted to avoid delays at the border that would pose risks with such a perishable commodity. Moreover, the proposed 2004 work plan for phytosanitary measures was not signed. Table 2 illustrates examples of SPS controversies between the United States and Mexico. U.S. officials explained that SPS measures are the most commonly used nontariff measure affecting U.S. market access and may indeed, at times, be applied to protect domestic producers. According to U.S. and Mexican officials, determining when SPS measures are justified can be difficult for several reasons, including different country standards and different conclusions based on scientific data. Officials from USDA’s Animal and Plant Health Inspection Service (APHIS) and its Mexican counterpart SENASICA (Servicio Nacional de Sanidad, Inocuidad y Calidad Agroalimentaria) informed us that they are working to harmonize U.S. and Mexican SPS standards to minimize disagreements. In addition, they are collaborating to lift Mexico’s ban on imports of citrus from Arizona and areas in Texas due to concerns over fruit fly infestation, as well as to design and implement a more satisfactory inspection process for U.S. apple exports to Mexico. 
SPS disputes stemming from differing interpretations of scientific data or differences in regulatory standards illustrate the technical complexity of plant and animal health protection regulations and their impact on trade. U.S. officials told us that working through SPS issues with Mexican authorities under NAFTA provided lessons for later negotiations. They explained that as developing countries liberalize their markets and begin to develop mechanisms to address health risks associated with increased agricultural trade, they often need technical assistance. Thus, the United States provided trade capacity building assistance to address SPS issues for some Central American countries and the Dominican Republic in connection with free trade agreement negotiations with those countries. The USDA Unified Export Strategy for Mexico notes that beyond addressing individual SPS issues there must be broader cooperation with Mexico on technical issues, such as the harmonization of standards, equivalency of regulatory processes, and transparency in light of the increasing market integration of the two countries. U.S. government officials and U.S. agricultural producer groups told us that Mexico’s application of certain safeguards to U.S. agricultural products has been a trade nuisance. In the years following NAFTA, Mexico has applied special agricultural safeguard provisions on imports of U.S. live swine, pork, potato products, and fresh apples in the form of TRQs as provided for in NAFTA. Mexico also applied a safeguard under Chapter 8 of NAFTA on certain U.S. poultry products. Specifically, under NAFTA, Mexico’s TRQ on poultry products was to be eliminated on January 1, 2003. However, in late 2002, Mexico’s poultry industry petitioned the Mexican government to impose a safeguard on U.S. chicken leg quarters. The Mexican industry argued that the elimination of Mexico’s TRQ would result in a surge in imports from the United States, which would injure Mexican producers. 
USTR officials said the safeguard on poultry was a unique situation and questioned whether a similar arrangement could be achieved in other industries. For more information on U.S. poultry exports to Mexico, see appendix II. The poultry case also highlights difficulties encountered in the implementation of a safeguard due to trade data discrepancies. The United States and Mexico did not agree on the quantity of U.S. chicken leg quarters that were exported to Mexico in the first half of 2003. Mexican data showed a much larger surge than U.S. data. One U.S. official told us that the main reason for the large discrepancy was the way Mexico records its initial import statistics, which are based on notifications of intended imports filed by Mexican importers, rather than actual imports. After the TRQ on poultry expired on January 1, 2003, Mexican importers filed a large number of entries, but some never crossed the border. In response to these difficulties, Mexican officials informed us they have taken steps to clear notices of intended imports from their database when imports do not actually occur within a specified time frame. In addition to the trade measures discussed above, Mexico has imposed a tax on beverages made with sweeteners other than sugar, which has led to a strongly contested dispute between the United States and Mexico regarding market access for U.S. HFCS exports. Specifically, in January 2002, the Mexican Congress imposed a 20 percent product tax on soft drinks and other beverages that use any sweetener other than cane sugar. This action meant that Mexico taxes any beverage containing HFCS, no matter the amount of HFCS present, at a rate of 20 percent, in addition to any other taxes already imposed. U.S. importers and producers of HFCS were affected immediately as Mexican beverage manufacturers switched to the use of domestically produced sugar instead of HFCS imported primarily from the United States. 
Although the tax was temporarily suspended by presidential decision for a 4-month period, Mexico’s Supreme Court of Justice unanimously voted to nullify this decision in July 2002. As a result, the tax was imposed once again. In December 2002, the Mexican Congress voted to extend the tax. In 2004, the United States filed a dispute case in the WTO against Mexico’s product tax on HFCS. The case is still pending resolution. See appendix II for more information on the HFCS case. Since the early 1990s, the Mexican government has enacted several agricultural assistance programs to help farmers adjust to the changes brought by trade liberalization, including NAFTA. Rapid urbanization has also created political urgency to provide low-cost food by promoting greater efficiency in domestic food production. The three main programs had a total budget of over $2 billion in 2003, and their objectives range from income support to improving agricultural productivity. However, deep-seated structural problems, notably tenuous land ownership and lack of rural credit, continue to hinder growth and rural development. Opponents of NAFTA have sought to link lagging rural development and rural poverty in Mexico to growing imports of U.S. agricultural products. They oppose further tariff eliminations as called for under NAFTA and demand a renegotiation of the agricultural provisions of the agreement. This opposition presents challenges to Mexico’s successful transition to liberalized agricultural trade under NAFTA. In response to the changes that market reforms and free trade would bring to its agricultural sector, Mexico has enacted various agricultural programs and policies since the early 1990s to help farmers adjust to changing economic conditions. 
Three of the most significant agricultural assistance programs have been (1) a major cash transfer program, PROCAMPO (Programa de Apoyos Directos al Campo); (2) an investment program, Alianza (Alianza para el Campo); and (3) a marketing support program (Programa de Apoyos Directos al Productor por Excedentes de Comercialización para Reconversión Productiva, Integración de Cadenas Agroalimentarias y Atención a Factores Críticos, formerly Programa de Apoyos a la Comercialización y Desarrollo de Mercados Regionales). Besides these three programs, there are other support programs in rural Mexico, such as Progresa, which was introduced in 1997 to alleviate poverty through monetary and in-kind benefits, as well as to invest in education, health, and nutrition. The three major agricultural assistance programs have different budget levels and distinct objectives. Appendix III provides a detailed description of each program. PROCAMPO is the largest program in terms of annual budget, amounting to over $1.2 billion in 2003. It provides direct payments to producers of oilseeds and grains (including corn) on a per-hectare basis. In 2001, it supported 2.7 million producers on 13.4 million hectares. Its objectives are to compensate farmers for expected losses under trade liberalization and the elimination of price subsidies, to make the free trade agreement acceptable to farmers, to alleviate poverty, and to reduce migration from rural areas. Alianza has an annual budget of around $570 million and supports about 2 million farmers. The program provides matching grants to finance productive investments and support services. The overall objective of the program is to improve agricultural productivity by promoting a transition to higher value crops, improving livestock health, facilitating technology transfers, and attracting investment in infrastructure. The marketing support program had an annual budget of about $580 million in 2003 and benefits 240,000 producers. 
It provides payments to producers of grains and oilseeds in certain areas, usually on a per-ton basis. The Mexican government’s evaluation suggests that the program provides certainty to farmers’ income and is an important factor in mitigating migration from the countryside. Notwithstanding various farm support programs including the ones discussed above, some researchers and Mexican and U.S. government officials noted that Mexico still needs to address structural impediments that hinder rural development. Some of these problems are related to Mexico’s tenuous land ownership, known as the ejido system. Some economists argue that the small size of farm plots under the ejido system does not make for economically viable production units. In addition, the ejido system limits farmers’ ability to obtain credit using land as collateral because the farmers do not have clear ownership of the land. Without access to credit, farmers cannot shift to new technologies and increase productivity. According to experts, the lack of rural credit has been a key impediment to Mexican agricultural development. Mexico’s financial crisis of 1995 exacerbated the problem of rural development by severely limiting the Mexican government’s budget available to carry out programs to invest in rural areas. In addition, according to USDA, other challenges identified by experts that contribute to the lack of rural development include: low education level, poor rural infrastructure, environmental problems related to land use, and low levels of technology. While U.S. officials note that NAFTA has greatly benefited Mexican agriculture overall, they express concern about the challenges posed by lagging rural development to the long-term successful implementation of the agreement. U.S. 
officials caution that lagging rural development fuels the arguments made by opponents of NAFTA that cheap imports from the United States have depressed Mexican agricultural product prices, hurting small farmers and deepening rural poverty. In its fiscal year 2005 Unified Export Strategy for Mexico, USDA acknowledged the need for efforts to highlight the benefits of NAFTA for Mexico’s economy while seeking ways to help Mexico address its rural development issues. The implementation of NAFTA became a major political issue as Mexico prepared to eliminate tariffs and tariff rate quotas in January 2003. Elimination of these tariffs provided U.S. agricultural exports even greater access to the Mexican market. In order to respond to intense criticism by the opponents of NAFTA at that time, USDA officials had to engage in extensive dialogue with Mexican legislative and executive officials, and they mounted a public information drive to explain the benefits of NAFTA for Mexican agriculture. Ultimately Mexico eliminated the tariffs, but the administration of Mexican President Vicente Fox found it necessary to negotiate a national agreement on agriculture with various domestic constituencies. He intended the agreement—referred to as Acuerdo Nacional para el Campo—to address concerns about perceived negative effects of trade liberalization on Mexico’s rural poor. As part of this agreement, the Mexican government commissioned several Mexican academic institutions to study the impacts of NAFTA on Mexican agriculture. This research generally confirmed that structural problems confronting Mexican agriculture preceded the implementation of NAFTA. However, certain Mexican producer groups continue to pressure the government, and a number of members of Mexico’s Congress have strong ties to groups that oppose NAFTA. U.S. and Mexican government officials and agricultural experts warned that there may be considerable opposition to the next round of tariff elimination in 2008. 
These officials cited the experience in the months leading up to the latest round of agricultural tariff elimination in 2003. In addition, they note that corn, one of the three remaining commodities scheduled to have tariffs lifted in 2008, is of particular concern in Mexico. Corn cultivation has ancient roots in Mexican rural culture; is central to the Mexican diet, accounting for about one-third of total calories; and remains the principal crop of subsistence farmers. For these reasons, eliminating tariffs on corn will be a sensitive cultural issue, as well as a matter of economic concern. Certain farm groups in Mexico have argued that allowing cheap imports of U.S. corn will drive Mexican agriculture into ruin. Mexican politicians who oppose NAFTA note the continuing economic distress in rural areas of Mexico and insist on renegotiation of the agricultural provisions of the agreement to improve the conditions of Mexican farmers. Although the total elimination of already low Mexican tariffs on corn may not have much economic significance for U.S. producers, failure to comply with the final phase of tariff elimination may undercut support for NAFTA among U.S. producers who favored the agreement with the expectation that it would lead to genuinely free trade. Additionally, U.S. trade officials have expressed serious reservations about any attempt to renegotiate the agricultural provisions of NAFTA, because it could lead to demands to renegotiate other aspects of the agreement and undermine the agreement as a model for trade liberalization throughout the Western Hemisphere. Over the last 10 years, U.S. agencies, primarily led by USDA, have carried out numerous activities that benefit both U.S. and Mexican agricultural interests. However, these activities have not been intended to address the challenges presented by lagging rural development to Mexico’s transition to liberalized trade under NAFTA.
While the United States provides technical assistance to more recent free trade partners to facilitate their adjustment to trade liberalization, no such assistance was arranged for Mexico under NAFTA. Nevertheless, since 2001 the United States has supported collaborative efforts to promote economic development in the parts of Mexico where growth has lagged under the Partnership for Prosperity (P4P) initiative. Officials from both countries are working on a broader approach to Mexican rural development under the initiative, but they recognize that much still needs to be done in this area. In an effort to support rural development through P4P, the United States has provided some limited technical assistance to the Mexican government’s new rural lending institution. Recognizing the importance of rural development to the successful implementation of NAFTA, State Department and USDA strategies for Mexico call for building on collaborative activities under P4P to pursue the related goals of rural development and trade liberalization under NAFTA; however, the P4P action plans do not set forth specific strategies and activities that could be used to achieve these goals. Historically, U.S. agencies have undertaken numerous collaborative agricultural efforts of mutual interest with their Mexican counterparts; however, the agencies have not intended those efforts to address the challenges presented by lagging rural development. USDA, in conjunction with its Mexican counterparts, has led most of these efforts as part of its traditional mission of supporting U.S. agricultural production and exports. With the exception of pest eradication efforts sponsored by the Animal and Plant Health Inspection Service (APHIS)—approximately $280 million over the past 10 years—all USDA activities have involved modest funding of less than $8 million combined since NAFTA was implemented. Some U.S. 
agencies have been involved in collaborative efforts with Mexico in pursuit of plant, animal, and human health objectives. USDA’s APHIS and Food Safety and Inspection Service and the Food and Drug Administration have implemented several programs in Mexico to protect U.S. agriculture and consumers while also facilitating the export of Mexican agricultural products. For example, APHIS programs are working with the Mexican government and growers to eradicate the Mediterranean fruit fly. Eradicating the fruit fly is of great interest to U.S. fruit farmers. However, eliminating the fly would also allow Mexican farmers to eventually export fruit crops from formerly infested areas. Over the past 10 years, APHIS has used almost all of its funds in Mexico for collaborative projects to finance various pest eradication efforts. USDA’s research, data collection, and marketing agencies, such as the Economic Research Service (ERS), National Agricultural Statistics Service, and Agricultural Marketing Service, have worked with their Mexican counterparts to enhance Mexico’s capacity to collect, analyze, and disseminate agricultural information. According to ERS officials, these efforts have improved and facilitated agricultural trade transactions through the Emerging Markets Program. Economic Research Service officials said that while the focus of the Emerging Markets Program is to improve Mexico’s data gathering and reporting systems, USDA has also benefited from Mexico’s improved capabilities because having reliable information facilitates public and private decision making for both the United States and Mexico. The Agricultural Research Service and the International Cooperation and Development area of USDA’s Foreign Agricultural Service have participated in extensive scientific and academic research to improve Mexico’s agricultural production.
According to the Agricultural Research Service, there are several concerns over agricultural trade, including food safety, use and consumption of transgenic products, and control of plant and animal pests and diseases. For a list and description of collaborative activities with Mexico implemented by USDA agencies, see appendix IV. While the United States has provided technical assistance and support to more recent free trade partners through trade capacity building (TCB), no such assistance was arranged for Mexico when NAFTA was concluded in 1994. TCB became an element of U.S. trade policy after it was introduced under the WTO Doha Development Agenda in 2001. While it was recognized when NAFTA was being negotiated that some agricultural sectors in Mexico would find it challenging to adjust to free market conditions, the agreement did not provide for any assistance to facilitate the transition of Mexican farmers to a more open market. One senior Mexican government official noted that in hindsight TCB or some type of assistance like it would have been beneficial as Mexico entered into a free trade environment with two very strong economies (the United States and Canada). However, this official stressed that Mexico has done very well under NAFTA overall, although small farmers have not typically benefited from economic opportunities provided by the agreement. Even though the United States does not have a comprehensive effort to provide TCB assistance to Mexico, some U.S. agencies have undertaken limited activities in Mexico, which they have characterized as TCB. In 2001, U.S. President George W. Bush and Mexican President Vicente Fox launched the P4P initiative, a new model for bilateral cooperation involving a public–private approach to collaborative development efforts. This new initiative is aimed at assisting those economically depressed regions of Mexico that are the primary sources of migration. These areas tend to be rural regions in Mexico.
While P4P seeks to create a new model for collaborating on economic development in Mexico, officials from both countries recognize that few activities have been implemented under P4P that directly affect poor rural areas and that much still needs to be done in the area of rural development. P4P seeks to create a public–private alliance and develop a new model for U.S.–Mexican bilateral collaboration to promote development, particularly in regions of Mexico where economic growth has lagged, fueling migration. No new funds were specifically allocated to P4P by either government; instead, the U.S. government sought to refocus resources already devoted to Mexico to create a more efficient collaborative network. According to State Department and USDA officials, since its establishment, P4P has become the umbrella for bilateral development collaboration, providing a broader approach to Mexico’s rural development needs that includes occupational and economic alternatives for people in the countryside. While this broader approach to rural development has been embraced by both the United States and Mexico, few activities have been implemented under P4P that directly affect poor rural areas. At the most recent P4P conference in Guadalajara, Mexico, a high-level State Department official responsible for P4P noted that many rural areas throughout central and southern Mexico have not yet been touched by P4P. Similarly, Mexican government officials commented that even though the P4P concept holds much promise, only a few new activities have been undertaken in rural development. For example, Mexican government officials told us and U.S. government documents confirm that approximately $10 million allocated for USAID rural development activities in Mexico under P4P has not yet been used to fund any new projects. Nevertheless, since the initiation of P4P, there have been several first-time achievements that benefit Mexico’s overall economic development.
For example, under an arrangement worked out by the U.S. and Mexican governments in cooperation with private sector financial institutions, the cost of remittances from the United States to Mexico has dropped by more than 50 percent over the last 3 years. Remittances from Mexican laborers living in the United States reached a record $16.6 billion in 2004. In addition, in 2003 a bilateral agreement was reached through P4P to allow the U.S. Overseas Private Investment Corporation (OPIC) to operate in Mexico for the first time. The agency’s mission is to help U.S. businesses invest overseas to foster economic development in new and emerging markets. According to OPIC officials, for over 30 years there had been resistance by the Mexican government to allow the agency to operate in Mexico because of concerns over sovereignty. Since the bilateral agreement was signed, OPIC has provided financing to five projects in Mexico, including one related to agriculture. For a description of this and other activities related to rural development by U.S. agencies under P4P, see appendix V. One of the few P4P activities to target rural communities is the U.S. technical assistance provided to the Mexican government’s new rural lending institution, Financiera Rural. Financiera Rural supports agricultural and other economic activities in Mexico’s rural sector with the goal of raising productivity as well as improving the standard of living of rural populations by facilitating access to credit. Through the USDA Cochran Fellowship Program, several Financiera Rural officials were trained in the United States on how to operate a rural credit program. These officials will serve as trainers for Financiera Rural’s credit managers. In addition, through a USAID fellowship, USDA arranged for a U.S. expert to assist Financiera Rural in developing a strategic plan.
This strategic plan calls for the development of rural financial lending intermediaries in Mexico, which will be fostered using a model that complies with Mexico’s legal framework, determined by a study to be conducted jointly by the Financiera Rural and the Inter-American Development Bank. The new strategic plan also proposes that Financiera Rural fund any productive endeavor in the countryside, not only agricultural production. Activities could include eco-tourism, rural gas stations, transportation services, and so on. According to senior Financiera Rural officials, U.S. technical assistance under P4P has been instrumental in helping them roll out their rural credit program. Financiera Rural officials told us that while the assistance they have received under P4P has had a positive impact, it has been limited. They said that Financiera Rural faces a great challenge in efforts to address limited credit availability in the countryside, which, as noted earlier in this report, is a key factor in Mexico’s lagging rural development. In order to be able to establish an effective rural lending system for small and medium size farmers in Mexico, these officials explained that they need to shift from primarily short-term to long-term credit, develop a network of regional and local intermediary lending institutions, and provide financing for alternative rural economic activities beyond direct agricultural production. Mexican and U.S. officials told us that in order to accomplish these goals Financiera Rural needs to develop expertise in a number of areas, such as risk assessment, project management, and loan evaluation. These officials stated that the expertise in the field of rural credit that exists in the United States would be helpful in ensuring that Financiera Rural is successful in providing credit to small farmers and other entrepreneurs in the Mexican countryside. 
P4P offers an avenue for the United States to provide technical assistance and support to Mexico similar to what it has provided to more recent free trade partners through TCB, according to a senior USDA official. Similarly, Mexican officials said P4P provides the opportunity to make technical assistance available in areas such as rural development, which have not yet benefited from NAFTA. Recognizing the importance of rural development to the full and successful implementation of NAFTA, the State Department’s Mission Performance Plan and USDA’s Unified Export Strategy for Mexico call for building on collaborative activities under P4P to pursue rural development and support trade liberalization. However, P4P documents generally have little to say about furthering Mexico’s successful transition to liberalized agricultural trade under NAFTA, and P4P action plans do not set forth specific strategies and activities that could be used to advance rural development in support of free trade. The lack of specific plans under P4P to pursue rural development in support of NAFTA is particularly noteworthy because USDA officials expressed concerns that Mexico’s lagging rural development presents a challenge to the successful transition to liberalized trade under NAFTA, including the elimination of remaining tariffs in 2008. USDA officials noted that the underlying factors in Mexico’s lagging rural development are structural and need to be addressed internally by Mexico. Nevertheless, USDA’s Unified Export Strategy for Mexico calls for coordination with the U.S. Agency for International Development to pursue a rural development strategy under the rubric of the P4P initiative. This document also acknowledges the need to continue to underscore the benefits of free trade for Mexico under NAFTA while seeking ways to help Mexico address its rural development issues. USDA officials stressed that it is critical to change the debate from the need for protection from U.S.
imports to promoting rural development in Mexico so that small and medium farmers can take advantage of the opportunities provided by free trade. As tariffs and tariff-rate quotas have been reduced or eliminated under the provisions of NAFTA, Mexican authorities have come under pressure to put in place technical barriers to protect producers from perceived harm from growing U.S. imports. Moreover, while Mexico has taken the steps called for under NAFTA to liberalize trade, lagging rural development fuels opposition to further implementation of the agreement. Yet the full and successful implementation of NAFTA is an important factor in assuring market access for U.S. agricultural exports to Mexico, and it is critical to broader U.S. trade interests because NAFTA is a model for trade liberalization in the Western Hemisphere. While the strategies of U.S. agencies in Mexico see an opportunity to build on the P4P initiative to pursue the related goals of rural development and trade liberalization under NAFTA, P4P documents generally have little to say about NAFTA. More specifically, the P4P action plans do not set forth specific strategies and activities that could be used to advance rural development in support of free trade. P4P offers an opportunity for the United States to design a comprehensive multi-agency strategy to address the challenges presented by lagging rural development to Mexico’s transition to liberalized agricultural trade under NAFTA, rather than providing assistance through individual measures. Mexico’s experience adjusting to the challenges of trade liberalization, ranging from difficulties associated with the application of SPS measures and problems raised by trade data discrepancies with the United States to lagging rural development, illustrates the importance of technical assistance. While Mexico did not seek assistance under NAFTA to adjust to trade liberalization, the U.S.
government has acknowledged the usefulness of technical assistance in addressing such challenges by providing TCB assistance in later trade agreements with developing countries. In Mexico, P4P offers an avenue for the United States to provide such technical assistance. A key impediment to Mexican rural development is the lack of credit in the countryside, and the United States, with its significant experience in rural lending, has the technical expertise Mexico seeks. Moreover, most of Mexico’s structural impediments must be dealt with internally, but facilitating rural credit is one area in which the United States, through P4P, is in a position to collaborate with Mexico. Improving the rural economy through credit facilitation increases the opportunities for Mexican importers of U.S. agricultural commodities and begins to counter negative perceptions of NAFTA’s impact. To aid the full and successful implementation of NAFTA, we recommend that the Secretary of State, as the head of one of the lead agencies for the P4P initiative, work with USDA and other relevant agencies to develop an action plan under P4P laying out specific collaborative efforts on rural development that would support the successful implementation of NAFTA. Such a plan could include a comprehensive strategy that outlines specific activities that are intended to address the challenges presented by lagging rural development to Mexico’s successful transition to liberalized agricultural trade under NAFTA, and sets time frames and performance measures for these activities.
To promote rural development in Mexico and enhance Mexican small farmers’ ability to benefit from trade opportunities under NAFTA, which would also help shape a more positive perception of the agreement, we recommend that the Secretary of State, as the head of one of the lead agencies for the P4P initiative, work with USDA and other relevant agencies to expand collaborative efforts with the Mexican government to facilitate credit availability in the countryside. This would include providing Mexico with expertise in the area of rural financing, such as risk assessment, project management, and loan evaluation. We provided a draft of this report to the Department of State, USDA, USTR, USAID, FDA, and OPIC for their review. We received formal written comments from the Department of State and from USDA, which are reprinted in appendixes VI and VII, respectively, along with our responses to specific points. In its written comments, the Department of State agreed with the need to develop a P4P action plan on rural development, and noted that on February 17, 2005, the U.S. and Mexican governments agreed to create a new structure under P4P establishing seven permanent working groups, including one on rural development. Each of these working groups has been asked to develop an action plan for 2005 activities. The Department of State also emphasized that the broader goal of P4P is to spur economic growth and development in parts of Mexico that have benefited less from NAFTA (i.e., not limited to rural development) and noted that the P4P initiative must work within existing resources. The Department of State raised concerns that the report generally overstates the strength of opposition to NAFTA in Mexico. However, we do not believe we have overstated the opposition to NAFTA in Mexico. As noted in the report, U.S. and Mexican officials expressed concerns about how negative perceptions of NAFTA may impact successful implementation of the agreement.
In addition, the report recalls the difficulties experienced in Mexico in anticipation of tariff elimination under NAFTA in 2003. In its letter, USDA expressed readiness to work with the Department of State and with other agencies, under P4P, to develop collaborative efforts to support Mexican rural development and facilitate the continued and successful implementation of NAFTA. The Department of State, USDA, USTR, OPIC, and FDA also suggested clarifications, technical corrections, and elaboration of certain points, which we have incorporated into this report, as appropriate. USAID comments were incorporated in the formal letter from the Department of State. We also obtained comments on key sections of the report from the Mexican Ministry of the Economy (SE), the Ministry of Agriculture (SAGARPA), and Mexico’s rural lending institution for small and medium farmers (Financiera Rural). SE and SAGARPA submitted joint comments. While commending the overall positive portrayal of the U.S.–Mexican agricultural trade relationship, SE and SAGARPA expressed concern that the report did not sufficiently underscore the importance of the Mexican market for U.S. exports under NAFTA. They cited U.S. trade data to illustrate the dramatic growth in certain U.S. commodity exports to Mexico since NAFTA has been in effect. They noted that Mexico is the largest foreign market for U.S. beef and rice and the second largest foreign market for U.S. corn, pork, poultry, and apples, some of the commodities our report highlights to illustrate the effects of Mexican trade measures. Additionally, SE and SAGARPA commented that our report did not provide a sufficiently detailed objective analysis regarding the nature and validity of various Mexican trade measures.
These agencies expressed concern that the report unfairly portrays various Mexican trade measures without an adequate evaluation of the facts behind Mexico’s implementation of these measures, such as the scientific support for certain SPS requirements, and the legitimate findings of antidumping investigations. SE and SAGARPA also objected to the report’s reliance on the testimony of parties directly impacted by these measures. Similarly, SE and SAGARPA expressed disappointment that the report does not examine U.S. trade measures that impact Mexican agricultural exports to the United States, which parallel many of the difficulties faced by U.S. agricultural exports to Mexico. Finally, SE and SAGARPA also stressed that the debate over the impact of NAFTA on the Mexican rural economy does not have any substantive implications for the implementation of Mexico’s obligations under the agreement. GAO fully recognizes, and our report documents, the vital importance of the Mexican market for U.S. agricultural exports. We note the rapid growth in the value of U.S. agricultural exports to Mexico, which grew on average 17.4 percent annually and almost doubled from 1993 to 2003. We also point out that Mexico is the third largest market for U.S. agricultural exports and that its share of the U.S. agricultural export market has risen from 8 percent in 1993 to about 13 percent in 2003. Regarding the concerns raised by SE and SAGARPA about the nature of GAO’s analysis, we believe the report presents a balanced and objective description of key Mexican trade measures that affect U.S. agricultural exports to Mexico. Consistent with GAO’s overarching mission to help improve the performance and accountability of U.S. government programs and activities, our report provides recommendations to the Department of State and USDA to help ensure the successful implementation of NAFTA. 
Since it is outside GAO’s jurisdiction to audit foreign government programs and procedures, our treatment of Mexican trade measures is descriptive not evaluative. We include testimonial, as well as other evidence, in our report in order to illustrate the positions of various parties. Throughout the report we have included the views of responsible Mexican officials and have added clarifications to the report in response to specific comments made by these Mexican agencies. For example, we added language to the report to clarify that the existence of a case under dispute settlement proceedings does not necessarily mean a trade partner’s actions violate the provisions of NAFTA or other trade agreements. Similarly, we eliminated references to difficulties related to labeling requirements and import permits, which, as USDA officials have acknowledged, have not been used frequently by Mexico. Instead we focused only on Mexico’s tax on beverages containing nonsugar sweeteners. In addition, our report covered a number of areas including collaborative activities of U.S. agencies in Mexico and concerns about the long-term success of NAFTA, as well as Mexican trade measures that impact U.S. agricultural exports to Mexico. While we are aware that Mexican agricultural exports to the United States also encounter challenges meeting U.S. import requirements, these issues were outside the scope of this project. We have included language clarifying the scope of our work in this report. Regarding the point raised by SE and SAGARPA on Mexico’s determination to proceed with the implementation of NAFTA, our report does not question the commitment of Mexican authorities to fulfill their obligations under the agreement. However, both U.S. and Mexican officials have expressed concerns about how negative perceptions of NAFTA may impact successful implementation of the agreement. 
Some of these officials recalled the difficulties experienced at the time of the 2003 tariff eliminations, including mass demonstrations against NAFTA, calls for a moratorium on implementation of the agreement, and pressure to renegotiate the agricultural provisions of NAFTA. We believe that in accordance with U.S. government pronouncements regarding the importance of NAFTA for U.S. farm interests, it is appropriate for U.S. agencies to actively plan to support the successful implementation of the agreement. In addition to these broader comments on the report’s presentation and approach, SE and SAGARPA provided technical comments and clarifications on Mexican agricultural programs, such as clarification on PROCAMPO payments, and on the crops included under the Direct Payments for Target Income subprograms. We have made a number of changes in the report to reflect their comments. Financiera Rural had only one technical comment on our representation of that agency’s strategic plan, which we have incorporated into our report. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to appropriate congressional committees and to the U.S. Trade Representative and the Secretaries of the Departments of Agriculture and State. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4347 or at [email protected]. Other GAO contacts and staff acknowledgments are listed in appendix VIII. To obtain information about the progress made, as well as difficulties encountered, in gaining market access for U.S. 
agricultural exports to Mexico, we reviewed the commitments in the NAFTA, including the tariff elimination schedules for agricultural products. We reviewed official documents related to various phases in the implementation of NAFTA and met with USDA and USTR officials to document progress made on each phase of tariff elimination. We studied trade flows to track changes in U.S. agricultural exports to Mexico, both at the aggregate level and at the product level using USDA’s Foreign Agricultural Trade of the United States database. We discussed the limitations and reliability of the trade data with USDA officials and determined the trade data reported by USDA are sufficiently reliable for the purpose of this report. We used various price indexes to adjust the trade value for inflation to convert trade values to constant 2003 dollars. We reviewed USDA publications on the Mexican market for U.S. agricultural products, and we reviewed studies by U.S. government and academic sources on the impact of NAFTA on U.S. exports to Mexico. We met with officials from USTR, USDA, and various producer groups to ascertain the progress and the difficulties in market access for U.S. agricultural exports to Mexico. We obtained from USTR a list of trade disputes with Mexico since NAFTA and reviewed WTO and NAFTA documentation on these agricultural trade dispute settlement cases. While we describe Mexico’s use of trade measures, we did not evaluate the validity of their application. To illustrate the scope and type of market access issues faced by U.S. agricultural exports to Mexico, we selected seven commodities to analyze and present as case studies. Our analysis and criteria for selecting the commodities is presented in appendix II. 
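The inflation adjustment described above can be sketched as follows. This is an illustrative example only, not GAO's actual computation: the function name, the price index levels, and the trade figures are all placeholder assumptions, and the method shown (rebasing a price index to 2003 = 100 and dividing nominal values by it) is one standard way to express values in constant dollars.

```python
# Illustrative sketch of deflating nominal trade values to constant
# 2003 dollars with a price index rebased so that 2003 = 100.
# All figures below are placeholders, not data from this report.

def to_constant_dollars(nominal_by_year, index_by_year, base_year=2003):
    """Deflate nominal values: real = nominal * index(base) / index(year)."""
    base = index_by_year[base_year]
    return {
        year: value * base / index_by_year[year]
        for year, value in nominal_by_year.items()
    }

# Hypothetical nominal export values (millions of dollars) and index levels.
nominal = {1993: 1000.0, 1998: 1500.0, 2003: 2000.0}
index = {1993: 80.0, 1998: 90.0, 2003: 100.0}

real = to_constant_dollars(nominal, index)
# A 1993 nominal value of $1,000M becomes $1,250M in constant 2003
# dollars (1000 * 100 / 80); the 2003 value is unchanged by construction.
```

Expressing all years in the same base-year dollars, as here, is what makes growth comparisons across a decade meaningful rather than partly an artifact of inflation.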
In order to review how Mexico has responded to the challenges and opportunities presented by free trade in agriculture and explore remaining challenges to the successful implementation of NAFTA, we reviewed relevant studies and research prepared by the Mexican Ministry of Agriculture (Secretaría de Agricultura, Ganadería, Desarrollo Rural, Pesca y Alimentación–SAGARPA), the World Bank, the United Nations Food and Agriculture Organization, and USDA. We conducted an extensive literature search, screening the results to identify the most appropriate research and studies. We considered various screening criteria, including source, timing, and venue of publication. We cross-checked key conclusions in various studies to assess their credibility. We reviewed the methodologies described for the studies we report on to determine their limitations. We also interviewed several authors of key studies we used in our report to clarify our understanding of their methodology and their conclusions. Finally, we discussed the conclusions of these studies with other experts, including agricultural researchers and U.S. and Mexican government officials with expertise in the area of Mexican agriculture. We obtained data from SAGARPA and the Mexican National Institute of Statistics, Geography, and Information Technology (Instituto Nacional de Estadística, Geografía e Informática) on agricultural production. We did not assess the reliability of the production data; however, the general trend of production is consistent with what is widely reported in other studies. We reviewed official Mexican government documents and other studies, which describe the major agricultural policies in Mexico since the early 1990s. We interviewed current and past SAGARPA officials and officials from the Ministry of the Economy (Secretaría de Economía–SE) who are familiar with current agricultural programs and the evolution of these programs under NAFTA.
We obtained information from USDA agencies (FAS, APHIS, ERS, NASS, ARS, FSIS, and AMS) and from FDA on agriculture-related collaborative activities they have undertaken in Mexico for the 10 years that NAFTA has been in effect (1994 through 2004). This information included activity descriptions and funding by agency. To assess the quality and reliability of the data submitted by each agency, we interviewed the agency officials responsible for the data and reviewed the data provided. When we noted discrepancies or gaps in the data, we discussed these with the agency officials and obtained corrections and/or clarifications. Based on our work, we determined that the data were sufficiently reliable to portray overall levels of expenditures and the nature of these activities. For USDA agencies, we compiled this data in a set of tables presented in appendix IV. These tables reflect funding for activities implemented by these agencies from 1994 through 2004; however, some of the agency activities started before 1994, while others were concluded before 2004. For FDA we present a summary description of agency activities in the same appendix. We met with State Department officials in Washington, D.C., and U.S. embassy officials in Mexico to discuss U.S. efforts under the Partnership for Prosperity (P4P). We reviewed documents from the Department of State on P4P including the 2002 and 2003 P4P reports to Presidents Bush and Fox, the P4P Action Plan, testimonies by State officials, and press releases on P4P activities. In order to report on P4P activities related to agriculture or rural development, we discussed agency plans and ongoing activities with USDA, U.S. Agency for International Development, and Overseas Private Investment Corporation officials. 
We also discussed the impact of P4P with Mexican government officials from SAGARPA, the Mexican Ministry of the Economy (SE), the Mexican Ministry of Foreign Affairs (Secretaría de Relaciones Exteriores), and Mexico’s rural lending institution for small and medium size farmers (Financiera Rural). We conducted our review from February 2004 through February 2005 in accordance with generally accepted government auditing standards. To illustrate the range of market access barriers faced by certain U.S. agricultural exports to Mexico, we selected seven products to analyze and present as case studies: apples, beef, corn, high-fructose corn syrup (HFCS), pork, poultry, and rice. Each of the case studies includes a brief background and history of the exported product’s experience accessing the Mexican market, a description of the types of market access barriers each product faces, and a summary of the current status of market access issues. We selected commodities as representative of (1) products at various stages of the tariff elimination schedule; (2) different agricultural sectors— for example, grains (rice), horticultural products (apples), and meat (pork); (3) products that face varying types of tariff and nontariff barriers; (4) the range of mechanisms used in attempting to settle market access disputes; and (5) varying levels of export volume and value. Information presented in the case studies is based on our analysis of trade data, review of U.S., Mexican, WTO, and NAFTA official documents, and interviews with U.S. and Mexican government officials and various private sector representatives. Prior to NAFTA, Mexico restricted access to its fresh apple market through import licensing requirements and the application of a 20 percent tariff. In 1991, Mexico eliminated the licensing requirements. As part of its NAFTA commitments, Mexico established TRQs on apples, which were to be phased out over a 9-year period and result in duty-free access for U.S. 
apple imports by 2003. USDA reports that U.S. apple exports to Mexico have exceeded these specified TRQ amounts in each of the years following NAFTA’s implementation. The United States is among the world’s leading apple producers, and apples comprised the largest portion of U.S. fruit exports to Mexico in 2003. U.S. apple exports to Mexico accounted for nearly 23 percent of U.S. worldwide apple exports. Between 1994 and 2003, the total quantity of fresh apple exports to Mexico increased by an average of 4.7 percent annually, and the value of exports totaled nearly $71 million in 2003 (see fig. 3). A key market access issue for U.S. apple exporters is the way Mexico has sought to exercise oversight for the application of its phytosanitary requirements. Mexico requires phytosanitary certificates for U.S. apples due to concerns about apple maggots in shipments. According to USDA’s Economic Research Service, most countries accept U.S. systems approaches for pest management as adequate protection against the threat of apple maggot. Mexico, however, requires that apples undergo a process called “cold treatment” before U.S. apple shipments can be imported into Mexico. Additionally, Mexico required that Mexican government officials inspect and certify U.S. storage and treatment facilities. The treatment and inspection process increased U.S. producers’ cost of exporting apples to Mexico. In 1998, Mexico turned over supervision of the inspection program to USDA. Nevertheless, according to the U.S. Apple Association, some apple-producing states have been effectively shut out of the Mexican apple market because of the prohibitive treatment and certification costs. For example, the association representative noted that producing states like Pennsylvania, the fourth largest apple-producing state in the country, cannot recoup the “hundreds of thousands of dollars” of costs incurred through these inspections.
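Growth figures like the 4.7 percent average annual increase cited above can be illustrated with a compound average growth rate calculation. This is an assumption about the method, since the report does not state exactly how the averages were computed, and the volumes used here are invented for illustration.

```python
# Sketch of a compound average annual growth rate (CAGR) calculation,
# one common way to derive "average X percent annually" figures.
# The start and end volumes below are hypothetical.

def avg_annual_growth(start_value, end_value, years):
    """Compound average annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Hypothetical: export volume grows from 100 to 151 units over 9 years,
# which works out to roughly 4.7 percent per year.
rate = avg_annual_growth(100.0, 151.0, 9)
print(f"{rate:.1%}")  # 4.7%
```

An alternative is the simple mean of year-over-year growth rates, which can differ noticeably from the compound rate when growth is volatile.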
In addition to Mexico’s phytosanitary treatment and certification requirements, Mexico initiated an antidumping investigation against U.S. apples in 1997 and imposed a preliminary import duty of more than 100 percent on Red and Golden Delicious apples. In 1998, the U.S. apple industry and the Mexican government signed an agreement suspending this duty, and the U.S. industry agreed to comply with a minimum-price scheme. U.S. apple exports to Mexico declined in 1998 (when the antidumping duty was in place) but experienced large, successive increases in 1999, 2000, and 2001 under the price agreement. However, in August 2002, the minimum price scheme was dropped at the request of Mexican growers, and Mexico resumed the dumping case and imposed antidumping duties of more than 45 percent on U.S. apples. As a result, U.S. exports decreased in 2002 and 2003. According to the U.S. Apple Association, the timing of the Mexican imposition of the dumping duty was notable, since NAFTA’s tariff rate quota and duty on apples were to be lifted on January 1, 2003. For this reason, the association noted that many U.S. apple exporters question the merits of the dumping allegations and maintain that Mexico is inappropriately restricting market access in order to protect its domestic industry. U.S. apple industry representatives note that Mexico’s policies restrict U.S. producers’ access to Mexico’s market. The U.S. apple industry notes that the treatment certification process takes several years and can be prohibitively costly in U.S. states where there are fewer producers to share costs. Furthermore, the U.S. apple industry is very fragmented, which is a significant challenge in dealing with market access problems in Mexico. For example, even though producers find the certification process burdensome, the industry does not have a joint strategy on how to address this problem. 
In 1992, 2 years prior to NAFTA’s implementation, Mexico raised tariffs on imported beef from zero to 20 percent. Per NAFTA, Mexico immediately eliminated these tariffs on imports of most U.S. beef products, and U.S. beef exports to Mexico increased. The recession that followed the 1994 peso crisis caused U.S. beef exports to Mexico to drop sharply by 1995, and exports did not recover fully until 1997. U.S. beef exports have grown steadily since 1995, and USDA notes that this increase is linked partially to the continuing improvements in the Mexican economy. Between 1994 and 2003, the volume of U.S. beef exports to Mexico increased by an average of 21 percent annually, and beef exports to Mexico accounted for 22.4 percent of the volume of U.S. beef exports worldwide (see fig. 4). The value of exports to Mexico in 2003 totaled $604 million. Although the volume of U.S. exports to Mexico has been increasing steadily over the past 10 years, market access for U.S. producers has been affected by antidumping actions and a ban on U.S. beef following the discovery in the United States of one cow (originally imported from Canada) with bovine spongiform encephalopathy (BSE) or “mad cow disease.” First, in 1994, the Mexican National Livestock Association initiated an antidumping case against certain types of beef imports by claiming discriminatory pricing on the part of U.S. exporters. Following industry-to-industry negotiations, the U.S. National Cattlemen’s Beef Association and the Mexican National Livestock Association signed a memorandum of understanding that formalized an agreement to (1) share U.S. technologies with Mexican producers and (2) coordinate both groups’ efforts to promote beef consumption in Mexico. As a result, the Mexican National Livestock Association dropped the dumping petition. However, in 1998 charges were made once again that the United States was dumping beef in Mexico. On August 1, 1999, Mexico announced antidumping tariffs that varied by company. 
Individual U.S. beef exporters appealed these tariffs, and on October 10, 2000, Mexico published a set of revised antidumping tariffs for certain beef exporters. These duties range from zero to 80 cents per kilogram, depending on the company and the type of beef. On June 16, 2003, the United States requested WTO consultations on Mexico’s antidumping measures on rice and beef, as well as certain provisions of Mexico’s Foreign Trade Act and its Federal Code of Civil Procedure. In addition, a NAFTA Chapter 19 panel is expected to rule shortly on whether these duties were applied in accordance with Mexican law. According to the National Cattlemen’s Beef Association, the root of the beef trade dispute in Mexico lies in the lack of differentiation between the values for various cuts of meat. In Mexico, the different cuts of beef generally all have the same value, whereas in the United States different cuts of beef have different values. These different values have led to antidumping cases against the United States because any commodity sold abroad for less than its value in the exporter’s home market can be considered dumped. According to the National Cattlemen’s Beef Association representative, demand for variety meats (such as tripe and liver) is significantly higher in Mexico than it is in the United States. Because of these demand conditions, U.S. exporters can sell variety meats at a lower price, which leads Mexico’s industry to believe the United States is dumping these products on the Mexican market. In addition to dumping duties, U.S. beef exports were affected when the detection of one case of BSE in the United States in December 2003 led Mexico to impose a ban on all U.S. beef products. In March 2004, Mexico was the first country to reopen its market to certain types of U.S. beef products (U.S. boxed beef under 30 months of age), expanding the list of allowable beef products in April 2004, and USTR reports that the U.S.
government is working to re-open the remainder of the market as soon as possible. According to producer group officials, market access for U.S. beef exports to Mexico has generally been very good, as evidenced by overall increases in trade. Both U.S. and Mexican industries plan to continue working together to resolve any potential trade disputes through industry negotiations. USTR notes that U.S. and Mexican beef and cattle industries are increasingly integrated, with benefits to producers, processors, and consumers in both countries. Corn is an important commodity in Mexico; in addition to being a dietary staple, white corn is the principal crop for many Mexican small farmers, and historically corn production is a fundamental feature of Mexican rural culture. Consequently, NAFTA negotiations regarding the phase-out of import barriers for corn were particularly sensitive. Prior to NAFTA, Mexico restricted access to its corn market through import licensing requirements, and there was no guaranteed level of access for U.S. imports. During NAFTA negotiations, it was widely believed in Mexico that immediate increases in imports of U.S. corn would displace Mexican corn producers. As a result, NAFTA negotiators agreed to allow Mexico to replace its import licensing requirements with transitional TRQs that will be phased out over a 14-year period—the longest transition period set forth in the agreement. The United States has been one of the major foreign suppliers of yellow (feed) corn to Mexico, and U.S. exports to Mexico comprised 13 percent of all U.S. corn exports worldwide in 2003. Between 1994 and 2003, the volume of U.S. corn exports to Mexico increased by an average of 18.5 percent annually (see fig. 5). The value of exports to Mexico in 2003 totaled $651 million. Although Mexico’s removal of restrictive import licensing requirements did away with a significant barrier to U.S. access to Mexico’s corn market, a number of other factors have affected U.S. 
exports before and after NAFTA’s implementation. For example, in the early 1990s, Mexico lifted a ban on using corn to feed livestock, immediately increasing demand for imports of U.S. yellow corn, which had been declining for several years. In 2003, yellow feed corn comprised more than 80 percent of U.S. corn exports to Mexico. Additionally, in the years following NAFTA, Mexico has usually allowed higher levels of imports than are required under the NAFTA TRQs in order to ensure that domestic demand for corn is fully met. Thus, Mexico has generally applied much lower tariffs on these additional quantities than those set forth under the agreement. These more liberal market access policies for yellow (feed) corn imports are driven in part by a need to provide feed for Mexico’s expanding livestock industries. Notwithstanding these policies toward feed corn imports, a USDA analysis of Mexico’s corn market notes that imports of white corn (i.e., corn generally used directly for human consumption) from the United States have declined since 2000, partly because the Mexican government has provided marketing funds to domestic producers of white corn. Additionally, USDA reports that in a significant departure from past practice, Mexico levied the NAFTA-specified above-quota tariff rate of 72.6 percent on white corn in 2004. Mexico’s tax on beverages sweetened with HFCS has also contributed to the decline in U.S. corn exports to Mexico. The tax has depressed Mexican production of HFCS, which is made from imported corn. U.S. exports of corn to Mexico are expected to increase significantly as Mexico eliminates the transitional TRQs in 2008. However, some industry groups noted concern about Mexico taking other steps to protect its sensitive domestic corn market. For example, one U.S. industry representative noted that it will be important for the U.S. government to ensure that Mexico does not use SPS requirements as a barrier to U.S. imports.
On the other hand, other observers note that an expanding economy in Mexico will increase consumer demand for meat and, in turn, continue to increase demand for U.S. corn imports as feed for Mexican livestock production. Additionally, certain farm groups in Mexico have argued that allowing duty- free imports of U.S. corn will lead to a total collapse of Mexican agriculture, and they have vowed to mount an unprecedented campaign to stop the last round of tariff eliminations. Mexican politicians who oppose NAFTA note the continuing economic distress in rural areas of Mexico and insist on renegotiating the agricultural provisions of the agreement to improve the conditions of Mexican farmers. Although the total elimination of already low Mexican tariffs on corn may not have much economic significance for U.S. producers, failure to comply with the final phase of tariff elimination may undercut support for NAFTA among U.S. producers who were in favor of the agreement with the expectation that it would lead to genuinely free trade. Furthermore, U.S. trade officials have expressed serious reservations about any attempt to renegotiate the agricultural provisions of NAFTA because it could lead to demands to renegotiate other aspects of the agreement and undermine the agreement as a model for trade liberalization throughout the Western Hemisphere. Impediments confronted by U.S. HFCS exports to Mexico are related to difficulties encountered by Mexican cane sugar exports to the United States. Trade friction between the United States and Mexico over HFCS came to a head in 1997, when Mexico initiated an antidumping investigation of U.S. exports of this product. Based on the results of this investigation, Mexico imposed antidumping duties beginning in 1998. This triggered a lengthy WTO dispute settlement proceeding, in which the United States eventually prevailed in 2001. 
Thereafter, Mexico eliminated its antidumping duties but imposed a tax on beverages made with any sweetener other than cane sugar, including HFCS. The United States has challenged Mexico’s beverage tax in the WTO, and that dispute is still pending. Mexico defends its beverage tax, noting that the United States has not complied with its market access commitments with respect to Mexican cane sugar. However, the U.S. government has rejected Mexico’s arguments linking these two issues. As shown in figure 6, U.S. exports of HFCS began to decline in 1999 after Mexico imposed the antidumping duties, and dropped to nearly zero after Mexico imposed the beverage tax in 2002. Market access issues began in 1997 when Mexico imposed preliminary antidumping duties on U.S. exports of HFCS. In 1997, Mexico’s National Chamber of Sugar and Alcohol Industries, the association of Mexico’s sugar producers, filed a petition in which it claimed that U.S. HFCS was being sold in Mexico at less than fair value and that these imports constituted a threat of material injury to Mexico’s sugar industry. As a result of these claims, the Mexican Ministry of the Economy responded by imposing antidumping duties on U.S. HFCS. In 1998, USTR invoked a WTO dispute proceeding to challenge Mexico’s action, and in 2000, a WTO panel ruled that Mexico’s imposition of antidumping duties on U.S. imports of HFCS was inconsistent with the requirements of the WTO Antidumping Agreement. At that time, Mexico agreed to implement the panel recommendation by September 22, 2000. However, on September 20, 2000, Mexico issued a new determination and concluded that there was a threat of material injury to the Mexican sugar industry and that it would maintain the antidumping duties. The United States maintained that Mexico’s new determination did not conform to the WTO panel’s recommendations and challenged this new determination before a WTO compliance panel. The WTO compliance panel agreed with the U.S. position. 
Mexico appealed this ruling. The WTO Appellate Body agreed with the compliance panel’s conclusions and recommended that Mexico comply with its obligations under the WTO Antidumping Agreement. While Mexico revoked its antidumping duties on HFCS in April 2002, in January of that year the Mexican Congress imposed a 20 percent tax on soft drinks and other beverages that use any sweetener other than cane sugar, which effectively shut out U.S. HFCS from the Mexican market. The Fox administration acted to suspend the beverage tax from March 6 through September 2002. Mexico’s Supreme Court, however, ruled the suspension to be unconstitutional and reinstated the tax effective July 16, 2002. The United States argues the HFCS beverage tax is inconsistent with Mexico’s obligations under the WTO, which calls for treating imported products no less favorably than comparable domestic products. The United States considers that the beverage tax is inconsistent because it applies to beverages sweetened with imported HFCS, but not to products sweetened with Mexican cane sugar. In June 2004, the United States challenged Mexico’s beverage tax in the WTO. The dispute over Mexico’s beverage tax is pending before a WTO panel. The sugar industry would like to negotiate a resolution to the sweetener dispute. At this time, private meetings have taken place between sugar producer groups in the United States and Mexico, and the industries are working to reach a resolution before 2008. Prior to 1994, Mexico levied a duty of 20 percent on U.S. pork, but under NAFTA, Mexico agreed to establish TRQs to be phased out over a 9-year period that ended on January 1, 2003. For several categories of pork products, U.S. pork exports to Mexico greatly exceed the quantitative limits of the TRQs, and Mexico generally allowed the additional product to enter without applying the over-quota tariff. 
Additionally, NAFTA permitted Mexico to establish a special agricultural safeguard tariff rate quota for certain cuts of pork, under which Mexico can apply higher tariffs if imports of that product exceed specified levels. If imports rise above that level, the duty reverts to the lower of the current Most Favored Nation rate or the pre-NAFTA rate. The safeguard levels expanded 3 percent each year until the provision expired on January 1, 2003. U.S. pork exports to Mexico have increased significantly since NAFTA, with the total volume of U.S. exports rising by an average of 18.5 percent annually between 1994 and 2003 (see fig. 7). Exports to Mexico accounted for 22.3 percent of U.S. pork exports worldwide, and U.S. exports to Mexico totaled about $217 million in 2003. In November 2002, Mexican producers submitted a dumping complaint to the Mexican government, alleging that U.S. exporters were engaging in price discrimination by selling pork to Mexican buyers at lower prices than they charged buyers in other countries. On January 7, 2003, Mexico initiated the antidumping investigation against U.S. pork. According to U.S. pork producers, the Mexican association that requested the investigation does not represent the Mexican pork industry, and, therefore, did not have a legal right to make the request. The producers of pork in Mexico—the slaughterhouses and the packers—stated that they do not want the investigation to proceed and asked that it be terminated. On May 28, 2004, the Mexican government terminated the January 2003 investigation and initiated a more limited antidumping investigation on hams only. Even after the antidumping case was filed against U.S. pork, Mexico continued to be the second-largest market for U.S. pork exports. Furthermore, USDA officials stated that any decreases in pork exports due to the case were more than offset by the increase in demand for pork following Mexico’s ban on U.S. beef products after a case of BSE was discovered in the United States.
In addition, USDA noted that demand for U.S. pork exports to Mexico correlates closely with income growth in that country (i.e., the rise of the middle class). Thus, while Mexico’s tariff reductions have been an important contributing factor to the growth of U.S. pork exports to Mexico, the far more significant drivers of export growth have been the rapid recovery of the Mexican economy following its recession in 1995 and continuing income and economic growth since then. The U.S. government has questioned the basis of the May 2004 ham antidumping investigation. Furthermore, USTR asserts that the United States is actively working to prevent potential actions that Mexico may take on exports of U.S. pork. USTR officials believe that Mexico’s January 2003 initiation of a pork dumping investigation and a May 2004 initiation of a ham dumping investigation may violate WTO rules and question the statistics being used by the Mexican government to determine the level of imports. USTR has engaged the Mexican government to terminate the ham dumping investigation, to resolve differences on trade statistics, and to seek alternatives to trade-restrictive measures. Despite the antidumping dispute, Mexico and the United States have pledged to build on their long history of cooperation regarding swine and pork bilateral trade on the basis of equal and mutual benefit. Prior to NAFTA, Mexico restricted access to its poultry market through import licensing requirements and 10 percent tariffs on imports. As with other products subject to import licensing, Mexico replaced these barriers with TRQs as part of its NAFTA commitments. NAFTA called for the TRQs to be phased out over a 9-year period, with duty-free access for U.S. poultry by 2003.
Per NAFTA, the larger portion of the tariff cuts was to be implemented in the latter half of the phase-out period—a process referred to as “backloading.” Mechanically deboned meat, which is used by Mexican sausage manufacturers, comprises the most significant portion of U.S. poultry exports to Mexico. Since NAFTA, the Mexican government has chosen not to impose the above-quota tariff on this commodity due to the Mexican sausage industry’s high demand for the product, and, as a result, U.S. exports have routinely exceeded the TRQ levels set forth in the agreement. Between 1994 and 2003, imports of U.S. dark meat chicken parts also generally exceeded the transitional TRQ levels. The United States is the major foreign poultry supplier to Mexico’s market, and Mexico is typically among the top three markets worldwide for U.S. poultry exports. From 1994 to 2003, the volume of U.S. poultry meat exports to Mexico increased by an average of 5.7 percent annually (see fig. 8). U.S. exports to Mexico accounted for 11.4 percent of U.S. poultry meat exports worldwide, and the value of U.S. poultry exports to Mexico totaled about $260 million in 2003. Demand for certain U.S. poultry products in Mexico was driven, in part, by insufficient domestic poultry production in Mexico. Additionally, because U.S. domestic demand for dark meat is low relative to Mexico’s consumer demand, U.S. producers have been able to keep dark poultry meat prices relatively low and thus attractive to Mexican buyers. Over the years since NAFTA’s implementation, Mexico’s domestic poultry industry has expanded, and concern about U.S. competition among Mexican producers has increased commensurately. As the end of Mexico’s transitional TRQ on poultry products drew near in 2002, the Mexican poultry industry petitioned the Mexican government to apply a safeguard on imports of U.S. chicken leg quarters.
The petitioners argued that the end of the TRQ would result in an import surge from the United States and injury to Mexico’s domestic industry. Article 703 of NAFTA would have permitted Mexico to impose duties of up to 240 percent on U.S. poultry imports, if NAFTA’s conditions for a safeguard were met. Rather than face such potentially high tariffs and a disruption to U.S. exports, U.S. producers, in industry-to-industry negotiations with the Mexican petitioners, agreed to a more favorable regime. In July 2003, Mexico issued a final safeguard determination that imposed a TRQ which allows the quota to expand each calendar year through 2007, at which point the duties will be eliminated. The within-quota duty is zero, and the initial over-quota duty was 98.8 percent, which declines each year until reaching zero on January 1, 2008. The U.S. and Mexican governments agreed on a package of compensation measures in response to the safeguard. In particular, Mexico agreed not to impose any other restrictions on U.S. poultry products and to eliminate certain SPS restrictions. The U.S. government also agreed, following consultations with U.S. industry, to consent to Mexico’s application of the safeguard past the expiration of the transition period. Some poultry industry representatives noted that settlement of the poultry safeguard issue brought some initial criticism from other U.S. producer groups, who maintained that the settlement set a precedent for Mexico to force renegotiation of its NAFTA commitments. However, USTR officials stated that the United States will not consider any renegotiation or rescission of Mexico’s NAFTA commitments and views the poultry settlement as a unique workable solution that forestalled possible significant disruption to U.S. exports. They doubted a similar outcome could be achieved in other industries. USDA reports that domestic poultry production in Mexico continues to expand. 
USDA and industry representatives said that the additional protection established under the safeguard settlement will give Mexican producers additional time to prepare for free trade. USDA also notes that demand for poultry, combined with an expanding Mexican economy and a removal of the ban on some U.S. poultry exports, will continue to increase demand for U.S. poultry products. Nevertheless, some U.S. industry representatives remain concerned and noted that once the TRQ expires, Mexican authorities may employ other measures, such as sanitary restrictions, as a means to constrain U.S. access to Mexico’s market. The United States is the primary supplier of rice to Mexico, largely because, since the early 1990s, Mexico has banned imports of rice from Asian countries or subjected them to strict phytosanitary standards. The United States exports both rough (i.e., unprocessed) rice and milled (i.e., processed) rice to Mexico, although demand for rough rice is much higher. As a result of the lack of supply from Asian producers and the high demand for rough rice, rough rice accounted for about 90 percent of the total volume of U.S. rice exports to Mexico in 2003. Prior to NAFTA’s implementation, Mexico levied duties of 20 percent on brown and milled rice and 10 percent on rough rice. Under NAFTA, Mexico agreed to phase out rice tariffs over a 9-year period, with all tariffs to be eliminated by 2003. With the phasing out of tariffs on rice, the volume of U.S. exports increased by an average of 14.4 percent annually from 1994 to 2003 (see fig. 9). U.S. rice exports to Mexico accounted for 17.7 percent of U.S. rice exports worldwide, and exports to Mexico totaled about $140 million in 2003. In December 2000, Mexico initiated an antidumping investigation on imports of long-grain milled rice from the United States. Mexican rice millers (who process rice that competes with U.S.
milled rice imports) alleged that U.S. milled rice was being sold in Mexico at prices less than its fair market value. The Mexican government subsequently levied antidumping duties in April 2000 and June 2002 on specific U.S. rice imports. A U.S. rice industry representative told us that the U.S. rice industry attempted to resolve the issue through industry-to-industry negotiations but that the negotiations were unsuccessful. Following the industry negotiations, the United States formally requested WTO consultations with Mexico in June 2003. These consultations were held from July 31 through August 1, 2003, on the basis of concerns regarding Mexico’s methodology for determining injury to the domestic market and for calculating dumping margins. WTO consultations failed to resolve the issue, and in February 2004 a WTO dispute panel was formed to resolve the case. The U.S. rice industry representative said that several other U.S. commodity groups were supporting this case in the WTO because the case deals with broad issues related to Mexico’s application of the antidumping law that could affect their exports as well. A ruling on the WTO dispute is expected in April 2005. Notwithstanding the outcome of the case, U.S. rice exporters generally benefit from preferential access under NAFTA and Asian exporters’ restricted access to the Mexican market. USDA reports indicate that U.S. exporters could face increased competition in the milled rice market in Mexico should Asian exporters satisfactorily address Mexico’s phytosanitary concerns. Recognizing the challenges and anticipating the opportunities that market reforms and free trade posed for its farm sector, the Mexican government has implemented several programs to help its farmers adjust to changing economic conditions. The three main support programs implemented since the early 1990s are PROCAMPO, marketing support, and Alianza. 
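As a rough consistency check on the rice export figures cited earlier (average annual growth of 14.4 percent from 1994 to 2003, and about $140 million in exports in 2003), a short calculation gives the implied 1994 starting value. This is only an illustrative sketch; it assumes nine annual compounding periods, which the report does not state explicitly.

```python
# Back-of-envelope check of the rice export figures cited in the text
# (illustrative; assumes nine annual compounding periods, 1994-2003).
avg_annual_growth = 0.144      # 14.4 percent per year, per the report
exports_2003_musd = 140.0      # about $140 million in 2003, per the report
periods = 2003 - 1994          # nine compounding periods

implied_1994_musd = exports_2003_musd / (1 + avg_annual_growth) ** periods
print(f"Implied 1994 export value: ${implied_1994_musd:.1f} million")  # ~ $41.7 million
```

The implied 1994 baseline of roughly $42 million is broadly consistent with the reported growth path, though actual year-to-year export volumes varied.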
PROCAMPO (Programa de Apoyos Directos al Campo) Budget: PROCAMPO is the largest agricultural support program, accounting for 35 percent of Mexico’s Agriculture Ministry’s (SAGARPA) budget in 2003, around $1.27 billion. Goal: PROCAMPO is a 15-year program that provides transitional income support to Mexican agriculture as it undergoes structural changes in response to market conditions and the phasing out of trade barriers under NAFTA. The political objective is to manage the acceptability of the free trade agreement among farmers and to prevent extensive levels of poverty and out-migration. How it operates: The program makes payments on a per-hectare basis to any producer who cultivates a licit crop on eligible land or utilizes that land for livestock or forestry production or some ecological project. Eligible land is defined as that which has been cultivated with corn, sorghum, beans, wheat, barley, cotton, safflower, soybeans, or rice in any of the three agricultural cycles before August 1993. There are three types of PROCAMPO payments: preferential, traditional, and capitalized. Preferential payment is for producers with fewer than 5 hectares in nonirrigated lands who only produce in the spring-summer cycle. For the spring-summer 2003 agricultural cycle, the payment levels equaled 1,050 Mexican pesos ($100) per hectare. The traditional payment is for the rest of the producers. It was 905 pesos ($86) per hectare in 2003. The capitalized payment is made under certain conditions to producers who request the sum of their future PROCAMPO payments. Beneficiaries: During 2001, 2.7 million producers with a total of 13.4 million hectares received PROCAMPO payments. Around 75 percent of farmers in the PROCAMPO database have less than 5 hectares of land. Changes in the program: There was a proposal in November 2002, as part of a broader Mexican government initiative for rural support, to update the payments according to yields. 
However, this action was never put into practice. Another program will be created for producers who are not currently registered in PROCAMPO, who also may be considered for assistance to smooth out income fluctuations. Also, the National Agreement’s emergency spending proposal contains 650 million pesos ($62 million) for the inclusion of additional land on the PROCAMPO roster. According to Mexican officials, even though new producers are enrolling, the total benefiting area has not changed because those new producers are filling the place left by former producers whose lands are no longer eligible to receive support. Impact: PROCAMPO has become an important source of some rural households’ income, and it may have income multiplier effects when recipients put the money they receive to work to generate further income. The Mexican government reported that between 1989 and 2002 incomes from agricultural businesses lost importance, while other sources, such as government support programs, remittances, salaries, and wages, increased their share of rural households’ income. Scholars have found that PROCAMPO payments forestalled the income decline of subsistence farmers. In addition, scholars found that PROCAMPO payments generated an income multiplier effect, meaning the payments were used productively and generated additional income for rural households. However, scholars believe that the level of PROCAMPO payments was not large enough to offset the risks of switching to more profitable crops, one of the goals of the marketing support program (discussed below). 
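The 2003 PROCAMPO payment rules described above reduce to a simple per-hectare calculation. The sketch below paraphrases the report's eligibility tests and rates; the function name and simplified inputs are illustrative assumptions, not program terminology.

```python
# Sketch of the 2003 PROCAMPO per-hectare payment rules described in the text.
# Rates (pesos per hectare) are from the report; everything else is illustrative.
PREFERENTIAL_RATE = 1050  # smallholders: fewer than 5 ha, nonirrigated,
                          # producing only in the spring-summer cycle
TRADITIONAL_RATE = 905    # all other eligible producers

def procampo_payment(hectares, irrigated, spring_summer_only):
    """Return the 2003 PROCAMPO payment in pesos for one eligible producer."""
    preferential = hectares < 5 and not irrigated and spring_summer_only
    rate = PREFERENTIAL_RATE if preferential else TRADITIONAL_RATE
    return hectares * rate

print(procampo_payment(3, irrigated=False, spring_summer_only=True))   # 3150
print(procampo_payment(10, irrigated=True, spring_summer_only=False))  # 9050
```

Because the payment is tied to land area rather than output or prices, it is decoupled from production decisions, which is why scholars describe it as income support rather than a price support.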
Marketing Support and Regional Market Development Program (Programa de Apoyos Directos al Productor por Excedentes de Comercialización para Reconversión Productiva, Integración de Cadenas Agroalimentarias y Atención a Factores Críticos, formerly Programa de Apoyos a la Comercialización y Desarrollo de Mercados Regionales) Budget: The marketing support program is the second largest agricultural program, accounting for about 16 percent of SAGARPA’s budget. For 2003, the budget was around $580 million. Goal: The program supports various aspects of agro-marketing and commerce. The Agricultural Marketing Board (ASERCA) was created to replace the direct intervention that the government formerly made through a parastatal state trading enterprise for sorghum and wheat. How it operates: The program has seven subprograms: (1) direct payment to producers, (2) price supports, (3) collateral loans, (4) crop conversion, (5) other types of support, (6) slaughterhouse certification, and (7) special support for corn. The major subprogram is the direct payment to producers. This subprogram provides payments to producers of rice, corn, wheat, sorghum, barley, canola, copra, peanuts, cotton, and safflower in certain areas, usually on a per-ton basis. Beneficiaries: Beneficiaries of the marketing support program on average have more land than PROCAMPO payment recipients. According to Mexican government documents, around 22 percent of the respondents to its annual survey of the marketing support program have fewer than 5 hectares, while almost half have more than 15 hectares. In 2004, the program supported 240,000 producers. Changes in the program: In 2003, Mexican farmers asked for support that would “mirror” what was provided to U.S. farmers under the U.S. Farm Bill, which led the Mexican government to establish “target income” support. 
The new program has seven subprograms, including direct payments for (1) target income, (2) slaughtering in certified slaughterhouses, (3) accessing domestic forages, (4) crop conversion, (5) price hedging, (6) pledging, and (7) other specified activities. Additionally, barley, copra, and peanuts are no longer on the support list. For a period of 5 years, the government plans to guarantee a target income, expressed per ton, for producers of certain grains and oilseeds. Nearly 17 billion Mexican pesos ($1.6 billion) have been designated for this program. In determining whether a producer has reached the target income, the government evaluates the producer’s income from market sales; if that income falls short of the target, the government provides additional support to bring the producer’s income up to the set target. Under the former program, just a few states were able to request support, while the new program makes payments to producers with commercial surpluses in all states. Impact: The program has had an impact on crop patterns and migration. The “target price” program led to concentration in basic crop production instead of crop diversification. Mexican officials hope the new “target income” approach will help farmers be more responsive to market conditions. An official Mexican document points out that the program is an important factor in mitigating migration from the countryside, but the document also recognizes that the program did not succeed in integrating farmers into the marketing chain. Thirty percent of the respondents to the program’s annual survey said they would have sought employment somewhere else if they had not received this assistance. A USDA study of grain production finds that the marketing supports, along with the constitutional reforms that allow the rental of ejidal lands, have facilitated the emergence of large-scale farms of corn and dried beans. 
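The "target income" top-up mechanism described above amounts to paying the per-ton shortfall between market income and the target. The sketch below illustrates that calculation; the function name and the peso figures in the example are hypothetical, not official targets.

```python
# Sketch of the "target income" top-up described in the text: if a producer's
# per-ton income from market sales falls short of the target, the government
# pays the difference. Figures in the example are hypothetical.
def target_income_support(market_income_per_ton, target_per_ton):
    """Per-ton support payment under the target-income scheme."""
    return max(0.0, target_per_ton - market_income_per_ton)

# Hypothetical example: market sales return 1,500 pesos/ton against a
# 1,650 pesos/ton target, so the top-up is 150 pesos/ton.
print(target_income_support(1500.0, 1650.0))  # 150.0
print(target_income_support(1700.0, 1650.0))  # 0.0 (market income exceeds target)
```

Unlike the per-hectare PROCAMPO payment, this support varies with market prices, which is why it functions as a guaranteed floor on per-ton income rather than a flat transfer.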
Alianza (Alianza para el Campo) Budget: Alianza accounts for about 15 percent of SAGARPA’s budget, about $570 million in 2003. Goal: The goals of the programs are to boost agricultural productivity and promote the transition to higher value crops. The objectives include increasing producer income, improving the balance of trade, achieving an agricultural production growth rate higher than the population growth rate, and supporting the overall development of rural communities. How it operates: The programs were grouped under four categories: agriculture, livestock, phytosanitary, and technology transfers. Activities include better use of water and fertilizer, adoption of improved seeds, better disease and pest control practices, improved genetic quality of crops and livestock, improved cattle stocks, better health and sanitation practices, and pasture development and related infrastructure development for increased production. These programs are decentralized and are financed jointly by federal and state governments and producers. Beneficiaries: An evaluation done by the United Nations Food and Agriculture Organization (FAO) found that the program serves farmers with various socioeconomic backgrounds, educational levels, ages, farm sizes, and income levels. The FAO evaluation also found that medium-size producers have benefited the most from the agriculture program, and 24 percent of small farmers have benefited. Changes: In 2002, for the first time, general objectives were established for all the subprograms. These objectives are to (1) increase income, (2) diversify employment options, (3) increase investment in rural development, (4) strengthen producer group organizations, and (5) advance sanitary standards. To achieve these objectives, strategies were established to integrate standards, bring together regional producer groups, and discuss important issues such as land and water use. 
Also in 2002, the government recognized a need to transfer technology and investment to the rural sector. Impact: The FAO evaluation pointed out some benefits from Alianza. For example, technology helped certain areas get access to water. Alianza also created a forum to consolidate processes of participation and implementation of different policies for the agricultural sector, allowing the participation of the state and producers in the conversation. The same evaluation pointed out that the additional employment generated by the program was modest. While U.S. development assistance to Mexico has been limited, U.S. agencies have undertaken numerous collaborative efforts that benefit both U.S. and Mexican agricultural interests. Most of these efforts have been led by the United States Department of Agriculture (USDA), in conjunction with its Mexican counterparts, in support of overall agricultural production and trade objectives. USDA’s Foreign Agricultural Service officials noted that historically USDA has had a very strong collaborative relationship with Mexico’s Ministry of Agriculture. USDA’s Animal and Plant Health Inspection Service (APHIS) has invested more funds in collaborative efforts with Mexico than any other USDA agency, about $280 million, since NAFTA was implemented. Besides APHIS’s collaborative activities, six other USDA agencies—the Economic Research Service (ERS), the Agricultural Research Service (ARS), the Foreign Agricultural Service/International Cooperation and Development (FAS/ICD), the Agricultural Marketing Service (AMS), the Food Safety and Inspection Service (FSIS), and the National Agricultural Statistics Service (NASS)—have participated in agricultural collaborative projects in Mexico. However, funding for collaborative activities in Mexico from these agencies has been very modest, about $7.5 million combined over the past 10 years. 
In addition to collaborative efforts implemented by USDA agencies, the Food and Drug Administration (FDA) has also had a role in activities that benefit Mexican agriculture. In the course of fulfilling its responsibilities of protecting and promoting U.S. agricultural health, APHIS has collaborated with Mexico for over 50 years (see table 3). APHIS has also implemented programs that facilitate agricultural trade from Mexico, such as its preclearance programs. Furthermore, APHIS has been by far the U.S. agency that has invested the most money in agricultural collaborative efforts with Mexico, the bulk of it on its Medfly and Screwworm eradication programs. APHIS reported spending a total of about $286 million on its plant and animal health activities in Mexico since the implementation of NAFTA. Since 1996, ERS has spent $2.5 million in funding to implement the Emerging Markets Program to enhance Mexico’s capacity to collect, analyze, and disseminate agricultural information. ERS officials said that Mexico’s enhanced data-gathering and reporting capability also benefits USDA because reliable information allows the agency to make better informed decisions on bilateral agricultural trade. For a full list and descriptions of ERS activities, see table 4. In June 1998, ARS and Mexico’s agricultural research institute, Instituto Nacional de Investigaciones Forestales, Agrícolas y Pecuarias (INIFAP), signed a Letter of Intent to promote U.S.–Mexico collaboration in agricultural research programs. Since then, ARS has spent about $2.3 million on several collaborative projects involving ARS and Mexican scientists. According to ARS officials, it is important for the United States that scientists in Mexico have academic backgrounds similar to their American counterparts’ in order to reach common solutions to problems that impact agriculture in both countries. For a full list and descriptions of ARS activities, see table 5. 
Over the past 10 years, FAS/ICD has spent a total of $1.8 million on its Scientific Cooperation Research Program (SCRP) and Cochran Fellowship Program (CFP). Under SCRP, U.S. and Mexican scientists have conducted joint research and scientific exchanges for over 20 years to help solve mutual food, agricultural, and environmental problems. Since NAFTA was enacted, SCRP has sponsored 32 joint agricultural research projects among U.S. and Mexican universities and other research institutions, of which about half have been related to trade. In addition, FAS administers CFP, which provides U.S.-based agricultural training opportunities for senior and midlevel specialists and administrators from the Mexican public and private sectors who are concerned with agricultural trade, agribusiness development, management, policy, and marketing. For a full list and descriptions of FAS/ICD activities, see table 6. AMS has spent about $548,200 since 1994 in collaborative activities with Mexico. Most of AMS activities consist of providing training to Mexican fresh fruit and vegetable inspectors to help them meet U.S. inspection standards. For a full list and descriptions of AMS agricultural collaborative activities, see table 7. NASS has been involved in a few collaborative activities in Mexico since 1997. Using the Emerging Markets Program, NASS has spent $361,000 to help improve the agricultural statistics system and methodology in Mexico. As part of this assistance, NASS provided training to analysts from Mexico’s agricultural statistics service, Servicio de Información y Estadística Agroalimentaria y Pesquera (SIAP). This training focused on methodology for preparing official agricultural statistics. For a full list and descriptions of NASS activities, see table 8. Since 2001, FSIS has implemented a small number of activities valued at $298,412 under the Emerging Markets Program in Mexico. 
Most of these activities consist of providing training and technical assistance to Mexican meat and poultry exporters to help them meet U.S. import regulations. For a full list and descriptions of FSIS activities, see table 9. In its efforts to protect U.S. consumers, FDA has also undertaken activities that benefit Mexican agricultural producers. FDA’s approach has been to work with Mexican government agencies to help them establish effective food safety regulatory, inspection, and enforcement infrastructure, focusing particularly on microbiological hazards. For example, if a food-borne disease outbreak resulting from a Mexican import occurs, FDA determines the cause and works with the Mexican government to try to resolve the problem and develop a system to prevent future outbreaks. FDA officials explained that in 1997 their agency launched its Food Safety Initiative (FSI) to improve the safety of the U.S. food supply, which includes imported foods. Because Mexico exports around $3 billion in fruits and vegetables to the United States each year, an important FSI component has been to help Mexican commodity exporters become more familiar with FDA regulatory requirements and to improve their ability to comply with U.S. food safety regulations. FDA activities under FSI have basically involved a series of training programs since 2002 for Mexican fruit and vegetable exporters, academics, and government officials. In addition to activities under FSI, FDA established the Southwest Import District Office in 1999 to enhance food inspection activities along the Mexican border. The Southwest Import District inspects imported goods entering the United States through the Mexican border from Brownsville, Texas, to San Diego, California. 
During the last 4 years, FDA’s Center for Veterinary Medicine has also participated in training and assisted in the establishment of a program in four agricultural states of Mexico to monitor pathogens that are transmitted via contaminated food. FDA reported it has spent about $1.8 million for its activities related to agricultural production in Mexico since NAFTA went into effect. The Partnership for Prosperity (P4P) initiative has a few collaborative programs that are oriented towards agriculture. On the U.S. side, USDA’s FAS, OPIC, and USAID have played key roles in implementing the programs. Overall, P4P seeks to create a public-private alliance and develop a new model for U.S.–Mexican bilateral collaboration to promote development, particularly in regions of Mexico where economic growth has lagged and is fueling migration. No new funds were specifically allocated to P4P by either government since the program’s inception; instead, the U.S. government has sought to refocus resources already devoted to Mexico to create a more efficient collaborative network. According to State Department and USDA officials, since its establishment, P4P has become the “umbrella” under which development collaboration between the United States and Mexico takes place. USDA’s FAS has worked closely with several Mexican government agencies, including Mexico’s new rural lending institution, Financiera Rural, to incorporate P4P’s broader approach to rural development and assistance to small farmers. For example, FAS arranged for USAID to use its U.S. fellowship program to place one of its participants at Financiera Rural. Through this fellowship, Financiera Rural hosted a professor from the University of Minnesota who assisted the agency in developing a strategic plan to incorporate the new paradigm for rural development proposed in the P4P conferences, acknowledging that Financiera Rural is better suited to operate as a second-tier lender. 
This strategic plan calls for the development of rural financial lending intermediaries in Mexico, which will be fostered using a model that complies with Mexico’s legal framework, determined by a study to be conducted jointly by the Financiera Rural and the International Development Bank. The new strategic plan also calls for the agency to fund any productive endeavor in the countryside, not only agricultural production. Activities could include such things as eco-tourism, rural gas stations, and transportation services. According to Financiera Rural officials, the guidance provided by the USAID fellow has positively contributed to Financiera Rural operations because funding and access to these types of resources and knowledge are not otherwise available in Mexico. Furthermore, the fellowship has provided support in trying to resolve the issue of limited credit availability—one of Mexico’s most significant structural problems. According to U.S. Embassy officials in Mexico, one of the most significant accomplishments under P4P has been the bilateral agreement to allow the Overseas Private Investment Corporation (OPIC) to operate and provide financing in Mexico. OPIC’s mission is to help U.S. businesses invest overseas, to foster economic development in new and emerging markets, and to complement the private sector in managing the risks associated with foreign direct investment. According to OPIC officials, for over 30 years there had been resistance by the Mexican government to allow the agency to operate in Mexico because of concerns over sovereignty. Mexico did not want a U.S. government agency to provide loans in Mexico because that would mean that the agency could ask for collateral and possibly own Mexican property in the case of default on a loan. However, in 2003, an agreement was reached through P4P to allow OPIC to operate in Mexico. 
Since the bilateral agreement was signed, OPIC has begun to provide financing for five projects in Mexico, including one related to agriculture. For the agriculture-related project, OPIC approved a $3.3 million loan to Southern Valley Fruit and Vegetable, Inc., of Georgia to develop a new farming project in Mexico that will serve as a winter division of the company and will grow, package, and ship cucumbers, squash, eggplant, and zucchini. The project will employ approximately 300 laborers and professionals in an area of high unemployment. Southern Valley has committed over $2.2 million in equity to the project. OPIC officials indicated that they expect their lending portfolio in Mexico to grow. USAID plans to expand its activities in Mexico to support rural development. USAID officials explained that, overall, USAID has not had a large presence in Mexico, and historically funding for activities in Mexico has been limited. Furthermore, USAID activities in Mexico have typically been in the areas of population, democracy, governance, health, and micro-financing, instead of agriculture. However, in 2004 USAID received an added $10.2 million specifically for rural development in Mexico, which brought its budget to $32 million. USAID is now working with other U.S. and Mexican agencies to develop projects to assist rural areas of Mexico. In recent months USAID has initiated several activities targeting rural development, including: Small Farmer Support/Rural Business Development: Through this activity, USAID is providing targeted business development and marketing services to agricultural producer organizations and cooperatives in the southern rural states of Oaxaca and Chiapas. 
Connecting Small Producers with Market Opportunities: In partnership with Michigan State University and USDA, USAID launched this activity in late 2004, designed to allow small and medium producers to better compete for opportunities in the mushrooming domestic market for food and produce. Rural Finance: In late 2004, USAID expanded what had been an urban-focused micro-enterprise finance program to include rural finance as a priority activity. University Partnerships: In 2004, USAID focused the ongoing Training, Internships, Exchanges, and Scholarships annual partnership competition on proposals that would spur agribusiness and address other issues tied to rural economic growth. In August 2004, USAID awarded five new partnerships directly related to rural development. The following are GAO’s comments on the State Department’s letter dated March 16, 2005. 1. We revised the title to make clear that we are not suggesting that Mexico has failed to implement its obligations under NAFTA’s agricultural provisions. 2. We do not believe that we overstate the opposition to NAFTA in Mexico. As noted in the report, U.S. and Mexican officials have expressed concerns about how negative perceptions of NAFTA may impact successful implementation of the agreement. In addition, the report recalls the difficulties experienced in Mexico at the time of tariff eliminations under NAFTA in 2003. 3. We changed language in the two locations of the report cited by the State Department to clarify that as a matter of course the United States has not committed to providing technical assistance to its post-NAFTA free trade partners. The report now states simply that the United States has recently provided such assistance. 4. The points about the P4P Initiative noted by the State Department are also mentioned in our report. We did not consider it necessary to make revisions to address these points. 5. 
In our recommendations we identify the Secretary of State as the head of one of the agencies taking the lead on P4P activities. We have added a footnote in appendix V on P4P activities to clarify the roles of the Departments of Commerce and Treasury. While these departments also have a leading role in P4P activities, they are not directly involved in activities related to rural development or the agricultural sector, and therefore our recommendation is not addressed to these agencies. 6. Our review was concluded by the time the Partnership for Prosperity working groups cited by the State Department had taken place. These developments may represent the first steps in addressing our recommendation. 7. We revised appendix V of the report to include key elements of the information provided on recent USAID activities. In addition to those listed above, Ming Chen, Francisco Enriquez, Matthew Helm, Sona Kalapura, Jamie McDonald, Marisela Perez, and Jonathan Rose made key contributions to this report. | In 1994, the North American Free Trade Agreement (NAFTA) created the world's largest free trade area and, among other things, reduced or eliminated barriers for U.S. agricultural exports to Mexico's vast and growing markets. As part of a body of GAO work on NAFTA issues, this report (1) identifies progress made and difficulties encountered in gaining market access for U.S. agricultural exports to Mexico; (2) describes Mexico's response to changes brought by agricultural trade liberalization and challenges to the successful implementation of NAFTA; and (3) examines collaborative activities and assesses strategies to support Mexico's transition to liberalized agricultural trade under NAFTA. U.S. agricultural exports have made progress in gaining greater access to Mexico's market as Mexico has phased out barriers to most U.S. agricultural products, and only a handful of tariffs remain to be eliminated in 2008. Total U.S. 
agricultural exports to Mexico grew from $4.1 billion in 1993 to $7.9 billion in 2003. Despite progress, some commodities still have difficulties gaining access to the Mexican market. GAO found that Mexico's use of antidumping, plant and animal health requirements, safeguards and other nontariff trade barriers, such as consumption taxes, presented the most significant market access issues for U.S. agricultural exports to Mexico. Mexico has put in place several programs to help farmers adjust to trade liberalization, but structural problems, such as lack of rural credit, continue to impede growth in rural areas, presenting challenges to full implementation of NAFTA. Lagging rural development fuels arguments that NAFTA has hurt small farmers, although studies, including some Mexican studies, do not support this conclusion. Opponents of NAFTA want to block further tariff eliminations and are demanding renegotiation of NAFTA's agricultural provisions. Concerned about such opposition, U.S. officials acknowledged the need to promote the benefits of NAFTA, while seeking ways to help Mexico address its rural development issues. Historically, U.S. agencies have undertaken many agriculture-related collaborative efforts with Mexico. Since 2001, U.S.-Mexico development activities have taken place under the Partnership for Prosperity (P4P) Initiative to promote development in parts of Mexico where economic growth has lagged. Recognizing the importance of rural development to the success of NAFTA, Department of State and USDA strategies for Mexico call for building on collaborative activities under the P4P to pursue the related goals of rural development and trade liberalization under NAFTA; however, the P4P action plans do not set forth specific strategies and activities that could be used to achieve these goals. |
Federal regulation is one of the basic tools of government. Agencies issue thousands of rules and regulations each year to implement statutes enacted by Congress. The public policy goals and benefits of regulations include, among other things, ensuring that workplaces, air travel, foods, and drugs are safe; that the nation’s air, water, and land are not polluted; and that the appropriate amount of tax is collected. The costs of these regulations are estimated to be in the hundreds of billions of dollars, and the benefits estimates are much higher. Given the size and impact of federal regulation, Congresses and Presidents have taken a number of actions to refine and reform the regulatory process within the past 25 years. In September 1980, RFA was enacted in response to concerns about the effect that federal regulations can have on “small entities,” defined by the Act as including small businesses, small governmental jurisdictions, and certain small not-for-profit organizations. As we have previously noted, small businesses are a significant part of the nation’s economy, and small governments make up the vast majority of local governments in the United States. However, there have been concerns that these small entities may be disproportionately affected by federal agencies’ regulatory requirements. RFA established the principle that agencies should endeavor, consistent with the objectives of applicable statutes, to fit regulatory and informational requirements to the scale of these small entities. RFA requires regulatory agencies—including the independent regulatory agencies—to assess the potential impact of their rules on small entities. 
Under RFA, an agency must prepare an initial regulatory flexibility analysis at the time a proposed rule is issued unless the head of the agency determines that the proposed rule would not have a “significant economic impact upon a substantial number of small entities.” Further, agencies must consider alternatives to their proposed rules that will accomplish the agencies’ objectives while minimizing the impacts on small entities. The Act also requires agencies to ensure that small entities have an opportunity to participate in the rulemaking process and requires the Chief Counsel for Advocacy of the Small Business Administration (Office of Advocacy) to monitor agencies’ compliance. Among other things, RFA also requires regulatory agencies to review, within 10 years of promulgation, existing rules that have or will have a significant impact on small entities to determine whether they should be continued without change or amended or rescinded to minimize their impact on small entities. Congress amended RFA with the Small Business Regulatory Enforcement Fairness Act of 1996 (SBREFA). SBREFA made certain agency actions under RFA judicially reviewable. Other provisions in SBREFA added new requirements. For example, SBREFA requires agencies to develop one or more compliance guides for each final rule or group of related final rules for which the agency is required to prepare a regulatory flexibility analysis, and it requires agencies to provide small entities with some form of relief from civil monetary penalties. SBREFA also requires the Environmental Protection Agency (EPA) and the Occupational Safety and Health Administration to convene advocacy review panels before publishing an initial regulatory flexibility analysis. More recently, in August 2002, President George W. 
Bush issued Executive Order 13272, which requires federal agencies to establish written procedures and policies on how they would measure the impact of their regulatory proposals on small entities and to vet those policies with the Office of Advocacy. The order also requires agencies to notify the Office of Advocacy before publishing draft rules expected to have a significant small business impact, to consider its written comments on proposed rules, and to publish a response with the final rule. The order requires the Office of Advocacy to provide notification of the requirements of the Act and training to all agencies on how to comply with RFA. The Office of Advocacy published guidance on the Act in 2003 and reported training more than 20 agencies on RFA compliance in fiscal year 2005. In response to congressional requests, we have reviewed agencies’ implementation of RFA and related requirements on many occasions over the years, with topics ranging from specific statutory provisions to the overall implementation of RFA. Generally, we found that the Act’s overall results and effectiveness have been mixed. This is not unique to RFA; we found similar results when reviewing other regulatory reform initiatives, such as the Unfunded Mandates Reform Act of 1995. Our past reports illustrated both the promise and the problems associated with RFA. RFA and related requirements have clearly affected how federal agencies regulate, and we identified important benefits of these initiatives, such as increasing attention on the potential impacts of rules and raising expectations regarding the analytical support for proposed rules. However, a recurring theme in our findings was that uncertainties about RFA’s requirements and varying interpretations of those requirements by federal agencies limited the Act’s application and effectiveness. 
Some of the topics we reviewed, and our main findings regarding impediments to RFA’s implementation, are illustrated in the following examples: We examined 12 years of annual reports from the Office of Advocacy and concluded that the reports indicated variable compliance with RFA across agencies, within agencies, and over time—a conclusion that the Office of Advocacy also reached in subsequent reports on implementation of RFA (on the 20th and 25th anniversaries of RFA’s enactment). We noted that some agencies had been repeatedly characterized as satisfying RFA requirements, but other agencies were consistently viewed as recalcitrant. Agencies’ performance also varied over time or varied by offices within the agencies. We said that one reason for agencies’ lack of compliance with RFA requirements was that the Act did not expressly authorize the Small Business Administration (SBA) to interpret key provisions and did not require SBA to develop criteria for agencies to follow in reviewing their rules. We examined RFA implementation with regard to small governments and concluded that agencies were not conducting as many regulatory flexibility analyses for small governments as they might, largely because of weaknesses in the Act. Specifically, we found that each agency we reviewed had a different interpretation of key RFA provisions. We also pointed out that RFA allowed agencies to interpret whether their proposed rules affected small governments and did not provide sufficiently specific criteria or definitions to guide agencies in deciding whether and how to assess the impact of proposed rules on small governments. We reviewed implementation of small business advocacy review panel requirements under SBREFA and found that the panels that had been convened were generally well received. 
However, we also said that implementation was hindered—specifically, that there was uncertainty over whether panels should have been convened for some proposed rules—by the lack of agreed-upon governmentwide criteria as to whether a rule has a significant impact. We examined other related requirements regarding agencies’ policies for the reduction and/or waiver of civil penalties on small entities and the publication of small entity compliance guides. Again, we found that implementation varied across and within agencies, with some of the ineffectiveness and inconsistency traceable to definitional problems in RFA. All of the agencies’ penalty relief policies that we reviewed were within the discretion that Congress provided, but the policies varied considerably. Some policies covered only a portion of agencies’ civil penalty enforcement actions, and some provided small entities with no greater penalty relief than large entities. The agencies varied in how key terms were defined. Similarly, we concluded that the requirement for small entity compliance guides did not have much of an impact, and its implementation also varied across, and sometimes within, agencies. RFA is unique among statutory requirements with general applicability in having a provision, under section 610, for the periodic review of existing rules. However, it is not clear that this look-back provision in RFA has been consistently and effectively implemented. In a series of reports on agencies’ compliance with section 610, we found that the required reviews were not being conducted. Meetings with agencies to identify why compliance was so limited revealed significant differences of opinion regarding key terms in RFA and confusion about what was required to determine compliance with RFA. 
At the request of the House Committee on Energy and Commerce, we have begun new work examining the subject of regulatory agencies’ retrospective reviews of their existing regulations, including those undertaken in response to Section 610, and will report on the results of this engagement in the future. We have not yet examined the effect of Executive Order 13272 and the Office of Advocacy’s subsequent guidance and training for agencies on implementing RFA. Therefore, we have not done any evaluations that would indicate whether or not those developments are helping to address some of our concerns about the effectiveness of RFA. While RFA has helped to influence how agencies regulate small entities, we believe that the full promise of the Act has not been realized. The results from our past work suggest that the Subcommittee might wish to review the procedures, definitions, exemptions, and other provisions of RFA, and related statutory requirements, to determine whether changes are needed to better achieve the purposes Congress intended. The central theme of our prior findings and recommendations on RFA has been the need to revisit and clarify elements of the Act, particularly its key terms. Although more recent developments, such as the Office of Advocacy’s detailed guidance to agencies on RFA compliance, may help address some of these long-standing issues, current legislative proposals, such as H.R. 682, make it clear that concerns remain about RFA’s effectiveness—for example, that agencies are not assessing the impacts of their rules or identifying less costly regulatory approaches as expected under RFA—and the impact of federal regulations on small entities. Unclear terms and definitions can affect the applicability and effectiveness of regulatory reform requirements. 
We have frequently cited the need to clarify the key terms in RFA, particularly “significant economic impact on a substantial number of small entities.” RFA’s requirements do not apply if an agency head certifies that a rule will not have a “significant economic impact on a substantial number of small entities.” However, RFA neither defines this key phrase nor places clear responsibility on any party to define it consistently across the government. It is therefore not surprising, as I mentioned earlier, that we found compliance with RFA varied from one agency to another and that agencies had different interpretations of RFA’s requirements. We have recommended several times that Congress provide greater clarity concerning the key terms and provisions of RFA and related requirements, but to date Congress has not acted on many of these recommendations. The questions that remain unresolved on this topic are numerous and varied, including:
Does Congress believe that the economic impact of a rule should be measured in terms of compliance costs as a percentage of businesses’ annual revenues, the percentage of work hours available to the firms, or other metrics? If so, what percentage or other measure would be an appropriate definition of “significant”?
Should agencies take into account the cumulative impact of their rules on small entities, even within a particular program area?
Should agencies count the impact of the underlying statutes when determining whether their rules have a significant impact?
What should be considered a “rule” for purposes of the requirement in RFA that agencies review rules with a significant impact within 10 years of their promulgation?
Should agencies review rules that had a significant impact at the time they were originally published, or only those that currently have that effect?
Should agencies conduct regulatory flexibility analyses for rules that have a positive economic impact on small entities, or only for rules with a negative impact?
It is worth noting that the Office of Advocacy’s 2003 RFA compliance guide, while reiterating that RFA does not define certain key terms, nevertheless provides some suggestions on the subject. Citing parts of RFA’s legislative history, the guidance indicates that exact standards for such definitions may not be possible or desirable, and that the definitions should vary depending on the context of each rule and preliminary assessments of the rule’s impact. For example, the guidance points out that “significance” can be seen as relative to the size of a business and its competitors, among other things. However, the guidance does identify factors that agencies might want to consider when making RFA determinations. In some ways, this mirrors other aspects of RFA, such as section 610, where Congress did not explicitly define a threshold for an agency to determine whether an existing regulation should be maintained, amended, or eliminated but rather identified the factors that an agency must consider in its reviews. We do not yet know whether or to what extent the guidance and associated training has helped agencies to clarify some of the long-standing confusion about RFA requirements and terms. Additional monitoring of RFA compliance may help to answer that question. Congress might also want to consider whether the factors that the Office of Advocacy suggested to help agencies define key terms and requirements are consistent with congressional intent or would benefit from having a statutory basis. I also want to point out the potential domino effect of agencies’ determinations of whether or not RFA applies to their rules. This is related to the lack of clarity on key terms mentioned above, the potential for agencies to waive or delay analysis under RFA, and the limitation of RFA’s applicability to only rules for which there was a notice of proposed rulemaking. 
The impact of an agency head’s determination that RFA is not applicable is not only that the initial and final regulatory flexibility analyses envisioned by the Act would not be done, but also that other related requirements would not apply. These requirements include, for example, the need for agencies to prepare small entity compliance guides, convene SBREFA advocacy panels, and conduct periodic reviews of certain existing regulations. While we recognize, as provided by the Administrative Procedure Act, that notices of proposed rulemaking are not always practical, necessary, or in the public interest, this still raises the question of whether such exemptions from notice and comment rulemaking should preclude future opportunities for public participation and other related procedural and analytical requirements. Our prior work has shown that substantial numbers of rules, including major rules (for example, those with an impact of $100 million or more), are promulgated without going through a notice of proposed rulemaking. We also believe it is important for Congress to reexamine, not just RFA, but how all of the various regulatory reform initiatives fit together and influence agencies’ regulatory actions. As I previously testified before this Subcommittee, we have found the effectiveness of most regulatory reform initiatives to be limited and that they merit congressional attention. In addition, we have stated that this is a particularly timely point to reexamine the federal regulatory framework, because significant trends and challenges establish the case for change and the need to reexamine the base of federal government and all of its existing programs, policies, functions, and activities. Our September 2000 report on EPA’s implementation of RFA illustrated the importance of considering the bigger picture and interrelationships between regulatory reform initiatives. 
On the one hand, we reported about concerns regarding the methodologies EPA used in its analyses and its conclusions about the impact on small businesses of a proposed rule to lower certain reporting thresholds for lead and lead compounds. The bigger picture, though, was our finding that after SBREFA took effect EPA’s four major program offices certified that almost all (96 percent) of their proposed rules would not have a significant impact on a substantial number of small entities. EPA officials told us this was because of a change in EPA’s RFA guidance prompted by the SBREFA requirement to convene an advocacy review panel for any proposed rule that was not certified. Prior to SBREFA, EPA’s policy was to prepare a regulatory flexibility analysis for any rule that the agency expected to have any impact on small entities. According to EPA officials, the SBREFA panel requirement made continuation of the agency’s more inclusive RFA policy too costly and impractical. In other words, a statute Congress enacted to strengthen RFA caused the agency to use the discretion permitted in RFA to conduct fewer regulatory flexibility analyses. In closing, I would reiterate that we believe Congress should revisit aspects of RFA and that our prior reports have indicated ample opportunities to refine the Act. Despite some progress in implementing RFA and other regulatory reform initiatives since 1980, it is clear from the introduction of H.R. 682 and related bills that Members of Congress remain concerned about the impact of regulations on small entities and the extent to which the rulemaking process encourages agencies to consider ways to reduce the burdens of new and existing rules, while still achieving the objectives of the underlying statutes. Mr. Chairman, this concludes my prepared statement. Once again, I appreciate the opportunity to testify on these important issues. 
I would be pleased to address any questions you or other Members of the Subcommittee might have at this time. If additional information is needed regarding this testimony, please contact J. Christopher Mihm, Managing Director, Strategic Issues, on (202) 512-6806 or at [email protected]. Tim Bober, Jason Dorn, Andrea Levine, Latesha Love, Joseph Santiago, and Michael Volpe contributed to this statement.
Federal Rulemaking: Past Reviews and Emerging Trends Suggest Issues That Merit Congressional Attention. GAO-06-228T. Washington, D.C.: November 1, 2005.
Regulatory Reform: Prior Reviews of Federal Regulatory Process Initiatives Reveal Opportunities for Improvements. GAO-05-939T. Washington, D.C.: July 27, 2005.
Regulatory Flexibility Act: Clarification of Key Terms Still Needed. GAO-02-491T. Washington, D.C.: March 6, 2002.
Regulatory Reform: Compliance Guide Requirement Has Had Little Effect on Agency Practices. GAO-02-172. Washington, D.C.: December 28, 2001.
Federal Rulemaking: Procedural and Analytical Requirements at OSHA and Other Agencies. GAO-01-852T. Washington, D.C.: June 14, 2001.
Regulatory Flexibility Act: Key Terms Still Need to Be Clarified. GAO-01-669T. Washington, D.C.: April 24, 2001.
Regulatory Reform: Implementation of Selected Agencies’ Civil Penalty Relief Policies for Small Entities. GAO-01-280. Washington, D.C.: February 20, 2001.
Regulatory Flexibility Act: Implementation in EPA Program Offices and Proposed Lead Rule. GAO/GGD-00-193. Washington, D.C.: September 20, 2000.
Regulatory Reform: Procedural and Analytical Requirements in Federal Rulemaking. GAO/T-GGD/OGC-00-157. Washington, D.C.: June 8, 2000.
Regulatory Flexibility Act: Agencies’ Interpretations of Review Requirements Vary. GAO/GGD-99-55. Washington, D.C.: April 2, 1999.
Federal Rulemaking: Agencies Often Published Final Actions Without Proposed Rules. GAO/GGD-98-126. Washington, D.C.: August 31, 1998.
Regulatory Reform: Implementation of the Small Business Advocacy Review Panel Requirements. GAO/GGD-98-36. Washington, D.C.: March 18, 1998.
Regulatory Reform: Agencies’ Section 610 Review Notices Often Did Not Meet Statutory Requirements. GAO/T-GGD-98-64. Washington, D.C.: February 12, 1998.
Regulatory Flexibility Act: Agencies’ Use of the October 1997 Unified Agenda Often Did Not Satisfy Notification Requirements. GAO/GGD-98-61R. Washington, D.C.: February 12, 1998.
Regulatory Flexibility Act: Agencies’ Use of the November 1996 Unified Agenda Did Not Satisfy Notification Requirements. GAO/GGD/OGC-97-77R. Washington, D.C.: April 22, 1997.
Regulatory Flexibility Act: Status of Agencies’ Compliance. GAO/GGD-94-105. Washington, D.C.: April 27, 1994.
Regulatory Flexibility Act: Inherent Weaknesses May Limit Its Usefulness for Small Governments. GAO/HRD-91-16. Washington, D.C.: January 11, 1991.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Federal regulation is one of the basic tools of government used to implement public policy. In 1980, the Regulatory Flexibility Act (RFA) was enacted in response to concerns about the effect that regulations can have on small entities, including small businesses, small governmental jurisdictions, and certain small not-for-profit organizations. Congress amended RFA in 1996, and the President issued Executive Order 13272 in 2002, to strengthen requirements for agencies to consider the impact of their proposed rules on small entities. However, concerns about the regulatory burden on small entities persist, prompting legislative proposals such as H.R. 
682, the Regulatory Flexibility Improvements Act, which would amend RFA. At the request of Congress, GAO has prepared many reports and testimonies reviewing the implementation of RFA and related policies. On the basis of that body of work, this testimony (1) provides an overview of the basic purpose and requirements of RFA, (2) highlights the main impediments to the Act's implementation that GAO's reports identified, and (3) suggests elements of RFA that Congress might consider amending to improve the effectiveness of the Act. GAO's prior reports and testimonies contain recommendations to improve the implementation of RFA and related regulatory process requirements. RFA established a principle that agencies should endeavor to fit their regulatory requirements to the scale of small entities. Among other things, RFA requires regulatory agencies to assess the impact of proposed rules on small entities, consider regulatory alternatives that will accomplish the agencies' objectives while minimizing the impacts on small entities, and ensure that small entities have an opportunity to participate in the rulemaking process. Further, RFA requires agencies to review existing rules within 10 years of promulgation that have or will have a significant impact on small entities to determine whether they should be continued without change or amended or rescinded to minimize their impact on small entities. RFA also requires the Chief Counsel for Advocacy of the Small Business Administration (Office of Advocacy) to monitor agencies' compliance. In response to Executive Order 13272, the Office of Advocacy published guidance in 2003 on how to comply with RFA. In response to congressional requests, GAO reviewed agencies' implementation of RFA and related requirements on many occasions, with topics ranging from specific statutory provisions to the overall implementation of RFA. 
Generally, GAO found that the Act's results and effectiveness have been mixed; its reports illustrated both the promise and the problems associated with RFA. On one hand, RFA and related requirements clearly affected how federal agencies regulate and produced benefits, such as raising expectations regarding the analytical support for proposed rules. However, GAO also found that compliance with RFA varied across agencies, within agencies, and over time. A recurring finding was that uncertainties about RFA's requirements and key terms, and varying interpretations by federal agencies, limited the Act's application and effectiveness. GAO's past work suggests that Congress might wish to review the procedures, definitions, exemptions, and other provisions of RFA to determine whether changes are needed to better achieve the purposes Congress intended. In particular, GAO's reports indicate that the full promise of RFA may never be realized until Congress revisits and clarifies elements of the Act, especially its key terms, or provides an agency or office with the clear authority and responsibility to do so. Attention should also be paid to the domino effect that an agency's initial determination of whether RFA is applicable to a rulemaking has on other statutory requirements, such as preparing compliance guides for small entities and periodically reviewing existing regulations. GAO also believes that Congress should reexamine not just RFA but how all of the various regulatory reform initiatives fit together and influence agencies' regulatory actions. Recent developments, such as the Office of Advocacy's RFA guidance, may help address some of these long-standing issues and merit continued monitoring by Congress. 
You are an expert at summarizing long articles. Proceed to summarize the following text:
For several years we have reported that DOD faces a range of financial management and related business process challenges that are complex, long-standing, pervasive, and deeply rooted in virtually all business operations throughout the department. As the Comptroller General recently testified and as discussed in our latest financial audit report, DOD’s financial management deficiencies, taken together, continue to represent the single largest obstacle to achieving an unqualified opinion on the U.S. government’s consolidated financial statements. To date, none of the military services has passed the test of an independent financial audit because of pervasive weaknesses in internal control and processes and fundamentally flawed business systems. In identifying improved financial performance as one of its five governmentwide initiatives, the President’s Management Agenda recognized that obtaining a clean (unqualified) financial audit opinion is a basic prescription for any well-managed organization. At the same time, it recognized that without sound internal control and accurate and timely financial and performance information, it is not possible to accomplish the President’s agenda and secure the best performance and highest measure of accountability for the American people. The Joint Financial Management Improvement Program (JFMIP) principals have defined certain measures, in addition to receiving an unqualified financial statement audit opinion, for achieving financial management success. These additional measures include (1) being able to routinely provide timely, accurate, and useful financial and performance information, (2) having no material internal control weaknesses or material noncompliance with laws and regulations, and (3) meeting the requirements of the Federal Financial Management Improvement Act of 1996 (FFMIA). Unfortunately, DOD does not meet any of these conditions. 
For example, for fiscal year 2003, the DOD Inspector General (DOD IG) issued a disclaimer of opinion on DOD’s financial statements, citing 11 material weaknesses in internal control and noncompliance with FFMIA requirements. Recent audits and investigations by GAO and DOD auditors continue to confirm the existence of pervasive weaknesses in DOD’s financial management and related business processes and systems. These problems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the status of DOD activities, including accountability of assets, through financial and other reports to Congress and DOD decision makers, (2) hindered its operational efficiency, (3) adversely affected mission performance, and (4) left the department vulnerable to fraud, waste, and abuse, as the following examples illustrate. Four hundred and fifty of the 481 mobilized Army National Guard soldiers from six GAO case study Special Forces and Military Police units had at least one pay problem associated with their mobilization. DOD’s inability to provide timely and accurate payments to these soldiers, many of whom risked their lives in recent Iraq or Afghanistan missions, distracted them from their missions, imposed financial hardships on the soldiers and their families, and has had a negative impact on retention. (GAO-04-89, Nov. 13, 2003) DOD incurred substantial logistical support problems as a result of weak distribution and accountability processes and controls over supplies and equipment shipments in support of Operation Iraqi Freedom activities, similar to those encountered during the prior gulf war. These weaknesses resulted in (1) supply shortages, (2) backlogs of materials delivered in theater but not delivered to the requesting activity, (3) a discrepancy of $1.2 billion between the amount of materiel shipped and that acknowledged by the activity as received, (4) cannibalization of vehicles, and (5) duplicate supply requisitions. 
(GAO-04-305R, Dec. 18, 2003) Inadequate asset visibility and accountability resulted in DOD selling new Joint Service Lightweight Integrated Suit Technology (JSLIST)—the current chemical and biological protective garment used by our military forces—on the internet for $3 each (coat and trousers) while at the same time buying them for over $200 each. DOD has acknowledged that these garments should have been restricted to DOD use only and therefore should not have been available to the public. (GAO-02-873T, June 25, 2002) Inadequate asset accountability also resulted in DOD’s inability to locate and remove over 250,000 defective Battle Dress Overgarments (BDOs)— the predecessor of JSLIST—from its inventory. Subsequently, we found that DOD had sold many of these defective suits to the public, including 379 that we purchased in an undercover operation. In addition, DOD may have issued over 4,700 of the defective BDO suits to local law enforcement agencies. Although local law enforcement agencies are most likely to be the first responders to a terrorist attack, DOD failed to inform these agencies that using these BDO suits could result in death or serious injury. (GAO-04-15NI, Nov. 19, 2003) Tens of millions of dollars are not being collected each year by military treatment facilities from third-party insurers because key information required to effectively bill and collect from third-party insurers is often not properly collected, recorded, or used by the military treatment facilities. (GAO-04-322R, Feb. 20, 2004) Our analysis of data on more than 50,000 maintenance work orders opened during the deployments of six battle groups indicated that about 29,000 orders (58 percent) could not be completed because the needed repair parts were not available on board ship. This condition was a result of inaccurate ship configuration records and incomplete, outdated, or erroneous historical parts demand data. 
Such problems not only have a detrimental impact on mission readiness but may also increase operational costs due to delays in repairing equipment and holding unneeded spare parts inventory. (GAO-03-887, Aug. 29, 2003) DOD sold excess biological laboratory equipment, including a biological safety cabinet, a bacteriological incubator, a centrifuge, and other items that could be used to produce biological warfare agents. Using a fictitious company and fictitious individual identities, we were able to purchase a large number of new and usable equipment items over the Internet from DOD. Although the production of biological warfare agents requires a high degree of expertise, the ease with which these items were obtained through public sales increases the risk that terrorists could obtain and use them to produce biological agents that could be used against the United States. (GAO-04-81TNI, Oct. 7, 2003) Based on statistical sampling, we estimated that 72 percent of the over 68,000 premium class airline tickets DOD purchased for fiscal years 2001 and 2002 were not properly authorized and that 73 percent were not properly justified. During fiscal years 2001 and 2002, DOD spent almost $124 million on premium class tickets that included at least one leg in premium class—usually business class. Because each premium class ticket cost the government up to thousands of dollars more than a coach class ticket, unauthorized premium class travel resulted in millions of dollars of unnecessary costs being incurred annually. (GAO-04-229T, Nov. 6, 2003) Some DOD contractors have been abusing the federal tax system with little or no consequence, and DOD is not collecting as much in unpaid taxes as it could. Under the Debt Collection Improvement Act of 1996, DOD is responsible—working with the Treasury Department—for offsetting payments made to contractors to collect funds owed, such as unpaid federal taxes. 
However, we found that DOD had collected only $687,000 of unpaid taxes as of September 2003. We estimated that at least $100 million could be collected annually from DOD contractors through effective implementation of levy and debt collection programs. (GAO-04-95, Feb. 12, 2004) Our review of fiscal year 2002 data revealed that about $1 of every $4 in contract payment transactions in DOD’s Mechanization of Contract Administration Services (MOCAS) system was for adjustments to previously recorded payments—$49 billion of adjustments out of $198 billion in disbursement, collection, and adjustment transactions. According to DOD, the cost of researching and making adjustments to accounting records was about $34 million in fiscal year 2002, primarily to pay hundreds of DOD and contractor staff. (GAO-03-727, Aug. 8, 2003) DOD’s information technology (IT) budget submission to Congress for fiscal year 2004 contained material inconsistencies, inaccuracies, or omissions that limited its reliability. For example, we identified discrepancies totaling about $1.6 billion between two primary parts of the submission—the IT budget summary report and the detailed Capital Investments Reports on each IT initiative. These problems were largely attributable to insufficient management attention and limitations in departmental policies and procedures, such as guidance in DOD’s Financial Management Regulation, and to shortcomings in systems that support budget-related activities. (GAO-04-115, Dec. 19, 2003) Since the mid-1980s, we have reported that DOD uses overly optimistic planning assumptions to estimate its annual budget request. These same assumptions are reflected in its Future Years Defense Program, which reports projected spending for the current budget year and at least 4 succeeding years. In addition, in February 2004 the Congressional Budget Office projected that DOD’s demand for resources could grow to about $490 billion in fiscal year 2009. 
DOD’s own estimate for that same year was only $439 billion. As a result of DOD’s continuing use of optimistic assumptions, DOD has too many programs for the available dollars, which often leads to program instability, costly program stretch-outs, and program termination. Over the past few years, the mismatch between programs and budgets has continued, particularly in the area of weapons systems acquisition. For example, in January 2003, we reported that the estimated costs of developing eight major weapons systems had increased from about $47 billion in fiscal year 1998 to about $72 billion by fiscal year 2003. (GAO-03-98, January 2003) These examples clearly demonstrate not only the severity of DOD’s current problems, but also the importance of business systems modernization as a critical element in the department’s transformation efforts to improve the economy, efficiency, and effectiveness of its operations, and to provide for transparency and accountability to Congress and American taxpayers. Since May 1997, we have highlighted in various testimonies and reports what we believe are the underlying causes of the department’s inability to resolve its long-standing financial management and related business management weaknesses and fundamentally reform its business operations. We found that one or more of these causes were contributing factors to the financial management and related business process weaknesses we just described.
Over the years, the department has undertaken many initiatives intended to transform its business operations departmentwide and improve the reliability of information for decision making and reporting but has not had much success because it has not addressed the following four underlying causes: a lack of sustained top-level leadership and management accountability for correcting problems; deeply embedded cultural resistance to change, including military service parochialism and stovepiped operations; a lack of results-oriented goals and performance measures and monitoring; and inadequate incentives and accountability mechanisms relating to business transformation efforts. If not properly addressed, these root causes will likely result in the failure of current DOD initiatives. DOD has not routinely assigned accountability for performance to specific organizations or individuals who have sufficient authority to accomplish desired goals. For example, under the Chief Financial Officers Act of 1990, it is the responsibility of the agency Chief Financial Officer (CFO) to establish the mission and vision for the agency’s future financial management and to direct, manage, and provide oversight of financial management operations. However, at DOD, the Comptroller—who is by statute the department’s CFO—has direct responsibility for only an estimated 20 percent of the data relied on to carry out the department’s financial management operations. The other 80 percent comes from DOD’s other business operations and is under the control and authority of other DOD officials. In addition, DOD’s past experience has suggested that top management has not had a proactive, consistent, and continuing role in integrating daily operations for achieving business transformation-related performance goals.
It is imperative that major improvement initiatives have the direct, active support and involvement of the Secretary and Deputy Secretary of Defense to ensure that daily activities throughout the department remain focused on achieving shared, agencywide outcomes and success. While the current DOD leadership, such as the Secretary, Deputy Secretary, and Comptroller, have certainly demonstrated their commitment to reforming the department, the magnitude and nature of day-to-day demands placed on these leaders following the events of September 11, 2001, clearly affect the level of oversight and involvement in business transformation efforts that these leaders can sustain. Given the importance of DOD’s business transformation effort, it is imperative that it receive the sustained leadership needed to improve the economy, efficiency, and effectiveness of DOD’s business operations. Based on our surveys of best practices of world-class organizations, strong executive CFO and Chief Information Officer (CIO) leadership and centralized control over systems investments are essential to (1) making financial management an entitywide priority, (2) providing meaningful information to decision makers, (3) building a team of people that delivers results, and (4) effectively leveraging technology to achieve stated goals and objectives. Cultural resistance to change, military service parochialism, and stovepiped operations have all contributed significantly to the failure of previous attempts to implement broad-based management reforms at DOD. The department has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization. Recent audits reveal that DOD has made only small inroads in addressing these challenges. 
For example, the Bob Stump National Defense Authorization Act for Fiscal Year 2003 requires the DOD Comptroller to determine that each financial system improvement meets the specific conditions called for in the act before DOD obligates funds in amounts exceeding $1 million. However, we found that most system improvement efforts involving obligations over $1 million were not reviewed by the DOD Comptroller for the purpose of making that determination and that DOD continued to lack a mechanism for proactively identifying system improvement initiatives. We asked for, but DOD did not provide, comprehensive data for obligations in excess of $1 million for business system modernization. Based on a comparison of the limited information available for fiscal years 2003 and 2004, we identified $479 million in reported obligations by the military services that were not submitted to the DOD Comptroller for review. In addition, in September 2003, we reported that DOD continued to use a stovepiped approach to develop and fund its business system investments. Specifically, we found that DOD components receive and control funding for business systems investments without being subject to the scrutiny of the DOD Comptroller. DOD’s ability to address its current “business-as-usual” approach to business system investments is further hampered by its lack of (1) a complete inventory of business systems (a condition we first highlighted in 1998), (2) a standard definition of what constitutes a business system, (3) a well-defined enterprise architecture, and (4) an effective approach for the control and accountability over business system investments. Until DOD develops and implements an effective strategy for overcoming resistance, parochialism, and stovepiped operations, its transformation efforts will not be successful. A key element of any major program is its ability to establish clearly defined goals and performance measures to monitor and report its progress to management.
However, DOD has not yet established measurable, results-oriented goals to evaluate the BMMP’s cost, schedule, and performance outcomes, or explicitly defined performance measures to evaluate the architecture quality, content, and utility of subsequent major updates to its initial business enterprise architecture (BEA). For example, in our September 2003 report, we stated that DOD had not defined specific plans outlining how it intends to extend and evolve the initial BEA to include the missing scope and details that we identified. Instead, DOD’s primary BEA goal was to complete as much of the architecture as it could within a set period of time. According to DOD, it intends to refine the initial BEA through at least six different major updates of its architecture between February 2004 and the second quarter of 2005. However, it remains unclear what these major updates will individually or collectively provide and how they contribute to achieving DOD’s goals. In its March 15, 2004, progress report to defense congressional committees on the status of BMMP’s business transformation efforts, DOD reported that it plans to establish an initial approved program baseline to evaluate the cost, schedule, and performance of the BMMP. Given that DOD has reported disbursements of $111 million since development efforts began in fiscal year 2002, it is critical that it establish meaningful, tangible, and measurable program goals and objectives—short-term and long-term. Until DOD develops and implements clearly defined results-oriented goals for the overall program, including the architecture content of each major update of its architecture, the department will continue to lack a clear measure of the BMMP’s progress in transforming the department’s business operations and in providing the Congress reasonable assurance that funds are being directed towards resolving the department’s long-standing business operational problems.
The final underlying cause of the department’s long-standing inability to carry out needed fundamental reform has been the lack of incentives for making more than incremental change to existing “business-as-usual” operations, systems, and organizational structures. Traditionally, DOD has focused on justifying its need for more funding rather than on the outcomes its programs have produced. DOD has historically measured its performance by resource components such as the amount of money spent, people employed, or number of tasks completed. Incentives for its decision makers to implement changed behavior have been minimal or nonexistent. The lack of incentive to change is evident in the business systems modernization area. Despite DOD’s acknowledgement that many of its systems are error prone, duplicative, and stovepiped, DOD continues to allow its component organizations to make their own investments independently of one another and implement different system solutions to solve the same business problems. These stovepiped decision-making processes have contributed to the department’s current complex, error-prone environment. The DOD Comptroller recently testified that DOD’s actual systems inventory could be twice the number of systems the department currently recognizes in its systems inventory. In March 2003, we reported that ineffective program management and oversight, as well as a lack of accountability, resulted in DOD continuing to invest hundreds of millions of dollars in system modernization efforts without any assurance that the projects will produce operational improvements commensurate with the amount invested. For example, the estimated cost of one of the business system investment projects that we reviewed increased by as much as $274 million, while its schedule slipped by almost 4 years. After spending $126 million, DOD terminated that project in December 2002, citing poor performance and increasing costs.
GAO and the DOD IG have identified numerous business system modernization efforts that are not economically justified on the basis of costs, benefits, and risks; take years longer than planned; and fall short of delivering planned or needed capabilities. Despite this track record, DOD continues to increase spending on business systems while at the same time it lacks the effective management and oversight needed to achieve real results. Without appropriate incentives to improve their project management, ongoing oversight, and adequate accountability mechanisms, DOD components will continue to develop duplicative and nonintegrated systems that are inconsistent with the Secretary’s vision for reform. To effect real change, actions are needed to (1) break down parochialism and reward behaviors that meet DOD-wide goals, (2) develop incentives that motivate decision makers to initiate and implement efforts that are consistent with better program outcomes, including saying “no” or pulling the plug early on a system or program that is failing, and (3) facilitate a congressional focus on results-oriented management, particularly with respect to resource allocation decisions. As we have previously reported, and as the success of the more narrowly defined DOD initiatives we will discuss later illustrates, the following key elements collectively will enable the department to effectively address the underlying causes of its inability to resolve its long-standing financial and business management problems.
These elements are: addressing the department’s financial management and related business operational challenges as part of a comprehensive, integrated, DOD-wide strategic plan for business reform; providing for sustained and committed leadership by top management, including but not limited to the Secretary of Defense; establishing resource control over business systems investments; establishing clear lines of responsibility, authority, and accountability; incorporating results-oriented performance measures and monitoring progress tied to key financial and business transformation objectives; providing appropriate incentives or consequences for action or inaction; establishing an enterprise architecture to guide and direct business systems modernization investments; and ensuring effective oversight and monitoring. These elements, which should not be viewed as independent actions but rather as a set of interrelated and interdependent actions, are reflected in the recommendations we have made to DOD and are consistent with those actions discussed in the department’s April 2001 financial management transformation report. The degree to which DOD incorporates them into its current reform efforts—both long and short term—will be a deciding factor in whether these efforts are successful. Thus far, the department’s progress in implementing our recommendations has been slow. Over the years, we have given DOD credit for beginning numerous initiatives intended to improve its business operations. Unfortunately, most of these initiatives failed to achieve their intended objectives in part, we believe, because they failed to incorporate key elements that in our experience are critical to successful reform. Today, we would like to discuss one very important broad-based initiative that DOD currently has underway, the BMMP, which, if properly developed and implemented, will result in significant improvements in DOD’s business operations.
Within the next few months we intend to issue a report on the status of DOD’s efforts to refine and implement its enterprise architecture and the results of our review of two ongoing DOD system initiatives. In addition to the BMMP, DOD has undertaken several interim initiatives in recent years that have resulted in tangible, although limited, improvements. We believe that these tangible improvements were possible because DOD has accepted our recommendations and incorporated many of the key elements critical for reform. Furthermore, we would like to offer two suggestions for legislative consideration that we believe could significantly increase the likelihood of a successful business transformation effort at DOD. The BMMP, which the department established in July 2001 following our recommendation that DOD develop and implement an enterprise architecture, is vital to the department’s efforts to transform its business operations. The purpose of the BMMP is to oversee development and implementation of a departmentwide BEA, transition plan, and related efforts to ensure that DOD business system investments are consistent with the architecture. A well-defined and properly implemented BEA can provide assurance that the department invests in integrated enterprisewide business solutions and, conversely, can help move resources away from nonintegrated business system development efforts. As we reported in July 2003, DOD had developed an initial version of its departmentwide architecture for modernizing its current financial and business operations and systems and had expended tremendous effort and resources in doing so. However, substantial work remains before the architecture will be sufficiently detailed and the means for implementing it will be adequately established to begin to have a tangible impact on improving DOD’s overall business operations.
We cannot overemphasize the degree of difficulty DOD faces in developing and implementing a well-defined architecture to provide the foundation that will guide its overall business transformation effort. On the positive side, during its initial efforts to develop the architecture, the department established some of the architecture management capabilities advocated by best practices and federal guidance, such as establishing a program office, designating a chief architect, and using an architecture development methodology and automated tool. Further, DOD’s initial version of its business enterprise architecture provided a foundation on which to build and ultimately produce a well-defined business enterprise architecture. For example, in September 2003, we reported that the “To Be” descriptions address, to at least some degree, how DOD intends to operate in the future, what information will be needed to support these future operations, and what technology standards should govern the design of future systems. While some progress has been made, DOD has not yet taken important steps that are critical to its ability to successfully use the enterprise architecture to drive reform throughout the department’s overall business operations. For example, DOD has not yet defined and implemented the following. Detailed plans to extend and evolve its initial architecture to include the missing scope and detail required by the Bob Stump National Defense Authorization Act for Fiscal Year 2003 and other relevant architectural requirements. Specifically, (1) the initial version of the BEA excluded some relevant external requirements, such as requirements for recording revenue, and lacked or provided little descriptive content pertaining to its “As Is” and “To Be” environments, and (2) DOD had not yet developed the transition plan needed to provide a temporal road map for moving from the “As Is” to the “To Be” environment.
An effective approach to select and control business system investments for obligations exceeding $1 million. As we previously stated, and it bears repeating here, DOD components currently receive direct funding for their business systems and continue to make their own parochial decisions regarding those investments without having received the scrutiny of the DOD Comptroller as required by the Bob Stump National Defense Authorization Act for Fiscal Year 2003. Later, we will offer a suggestion for improving the management and oversight of the billions of dollars DOD invests annually in business systems. DOD invests billions of dollars annually to operate, maintain, and modernize its business systems. For fiscal year 2004, the department requested approximately $28 billion in IT funding to support a wide range of military operations as well as DOD business systems operations, of which approximately $18.8 billion—$5.8 billion for business systems and $13 billion for business systems infrastructure—relates to the operation, maintenance, and modernization of the department’s reported thousands of business systems. The $18.8 billion is spread across the military services and defense agencies, with each receiving its own funding for IT investments. However, as we reported, DOD lacked an efficient and effective process for managing, developing, and implementing its business systems. These long-standing problems continue despite the significant investments in business systems by DOD components each year. For example, in March 2003 we reported that DOD’s oversight of four DFAS projects we reviewed had been ineffective. Investment management responsibility for the four projects rested with the Defense Finance and Accounting Service (DFAS), the DOD Comptroller, and the DOD CIO.
In discharging this responsibility, each had allowed project investments to continue year after year, even though the projects had been marked by cost increases, schedule slippages, and capability changes. As a result, DOD had invested approximately $316 million in four DFAS system modernization projects without demonstrating that this substantial investment would markedly improve DOD financial management information for decision making and financial reporting purposes. Specifically, we found that the four DFAS projects reviewed lacked an approved economic analysis that reflected the fact that expected project costs had increased, while in some cases the benefits had decreased. For instance, as we previously stated, the estimated cost of one project—referred to as the Defense Procurement Payment System (DPPS)—had increased by as much as $274 million, while its schedule slipped by almost 4 years. Such project analyses provide the requisite justification for decision makers to use in determining whether to invest additional resources in anticipation of receiving commensurate benefits and mission value. For each of the four projects we reviewed, we found that DOD oversight entities—DFAS, the DOD Comptroller, and the DOD CIO—did not question the impact of the cost increases and schedule delays, and allowed the projects to proceed in the absence of the requisite analytical justification. Furthermore, in one case, they allowed a project estimated to cost $270 million, referred to as the DFAS Corporate Database/DFAS Corporate Warehouse (DCD/DCW), to proceed without an economic analysis. In another case, they allowed DPPS to continue despite known concerns about the validity of the project’s economic analysis. DOD subsequently terminated two—DPPS and the Defense Standard Disbursing System (DSDS)—of the four DFAS system modernization projects reviewed.
As we previously mentioned, DPPS was terminated due to poor program performance and increasing costs after 7 years of effort and an investment of over $126 million. DFAS terminated DSDS after approximately 7 years of effort and an investment of about $53 million, noting that a valid business case for continuing the effort could not be made. These two terminated projects were planned to provide DOD the capability to address some of DOD’s long-standing contract and vendor payment problems. In addition to project management issues that continue to result in systems that do not perform as expected and cost more than planned, we found that DOD continues to lack a complete and reliable inventory of its current systems. In September 2003, we reported that DOD had created a repository of information about its existing systems inventory of approximately 2,300 business systems (up from 1,731 in October 2002) as part of its ongoing business systems modernization program, and consistent with our past recommendation. Due to its lack of visibility over systems departmentwide, DOD had to rely upon data calls to obtain its information. Unfortunately, due to its lack of an effective methodology and process for identifying business systems, including a clear definition of what constitutes a business system, DOD continues to lack assurance that its systems inventory is reliable and complete. In fact, the DOD Comptroller testified last week before the Senate Armed Services Subcommittee on Readiness and Management Support that the size of DOD’s actual systems inventory could be twice the size currently reported. This lack of visibility over current business systems in use throughout the department hinders DOD’s ability to identify and eliminate duplicate and nonintegrated systems and transition to its planned systems environment in an efficient and effective manner.
Of the 2,274 business systems recorded in DOD’s systems inventory repository, the department reportedly has 665 systems to support human resource management, 565 systems to support logistical functions, 542 systems to perform finance and accounting functions, and 210 systems to support strategic planning and budget formulation. Table 1, which presents the composition of DOD business systems by functional area, reveals the numerous and redundant systems operating in the department today. As we have previously reported, these numerous systems have evolved into the overly complex and error-prone operation that exists today, including (1) little standardization across DOD components, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, (4) manual data entry into multiple systems, and (5) a large number of data translations and interfaces that combine to exacerbate problems with data integrity. The department has recognized the uncontrolled proliferation of systems and the need to eliminate as many systems as possible and integrate and standardize those that remain. In fact, the two terminated DFAS projects were intended to reduce the number of systems or eliminate a portion of different systems that perform the same function. For example, DPPS was intended to consolidate eight contract and vendor pay systems and DSDS was intended to eliminate four different disbursing systems. 
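The inventory composition described above can be summarized with simple share arithmetic. A minimal sketch in Python, using only the counts cited from Table 1 (the "other areas" remainder is inferred by subtraction and is not a figure stated in the testimony):

```python
# Reported DOD business system counts by functional area (figures from Table 1)
systems = {
    "human resource management": 665,
    "logistics": 565,
    "finance and accounting": 542,
    "strategic planning and budget formulation": 210,
}
TOTAL_REPORTED = 2274  # total systems recorded in DOD's inventory repository

named = sum(systems.values())    # systems in the four named functional areas
other = TOTAL_REPORTED - named   # inferred remainder in other functional areas

# Share of the total inventory held by each named area, in percent
shares = {area: round(count / TOTAL_REPORTED * 100, 1)
          for area, count in systems.items()}
```

Run against the reported figures, the four named areas account for 1,982 of the 2,274 systems, leaving 292 in other functional areas; human resource management alone is roughly 29 percent of the inventory.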
Until DOD completes its efforts to refine and implement its enterprise architecture and transition plan, and develop and implement an effective approach for selecting and controlling business system investments, DOD will continue to lack (1) a comprehensive and integrated strategy to guide its business process and system changes, and (2) results-oriented measures to monitor and measure progress, including whether system development and modernization investment projects adequately incorporate leading practices used by the private sector and federal requirements and achieve performance and efficiency commensurate with the cost. These elements are critical to the success of DOD’s BMMP. Developing and implementing a BEA for an organization as large and complex as DOD is a formidable challenge, but it is critical to effecting the change required to achieve the Secretary’s vision of relevant, reliable, and timely financial and other management information to support the department’s vast operations. As mandated, we plan to continue to report on DOD’s progress in developing the next version of its architecture, developing its transition plan, validating its “As Is” systems inventory, and controlling its system investments. Since DOD’s overall business process transformation is a long-term effort, in the interim it is important for the department to focus on improvements that can be made using, or requiring only minor changes to, existing automated systems and processes. As demonstrated by the examples we will highlight in this testimony, leadership, real incentives, accountability, and oversight and monitoring—key elements to successful reform—have brought about improvements in some DOD operations, such as more timely commercial payments, reduced payment recording errors, and significant reductions in individually billed travel card delinquency rates. 
To help achieve the department’s goal of improved financial information, the DOD Comptroller has developed a Financial Management Balanced Scorecard that is intended to align the financial community’s strategy, goals, objectives, and related performance measures with the departmentwide risk management framework established as part of DOD’s Quadrennial Defense Review, and with the President’s Management Agenda. To effectively implement the balanced scorecard, the Comptroller is planning to cascade the performance measures down to the military services and defense agency financial communities, along with certain specific reporting requirements. DOD has also developed a Web site where implementation information and monthly indicator updates will be made available for the financial communities’ review. At the departmentwide level, certain financial metrics will be selected, consolidated, and reported to the top levels of DOD management for evaluation and comparison. These “dashboard” metrics are intended to provide key decision makers, including Congress, with critical performance information at a glance, in a consistent and easily understandable format. DFAS has been reporting the metrics cited below for several years, and under the leadership of the DFAS Director and the DOD Comptroller, these metrics have shown improvements, including the following. From April 2001 to January 2004, DOD reduced its commercial pay backlogs (payment delinquencies) by 55 percent. From March 2001 to December 2003, DOD reduced its payment recording errors by 33 percent. The delinquency rate for individually billed travel cards dropped from 18.4 percent in January 2001 to 10.7 percent in January 2004. Using DFAS’ metrics, management can quickly see when and where problems are arising and can focus additional attention on those areas.
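The reductions in the dashboard metrics cited above rest on simple percent-change arithmetic. A minimal sketch using the travel card delinquency rates from the testimony (the helper function and variable names are illustrative, not part of DFAS's metrics program):

```python
def percent_reduction(old, new):
    """Relative decline from old to new, as a percent of the old value."""
    return (old - new) / old * 100

# Individually billed travel card delinquency rates cited in the testimony
jan_2001_rate = 18.4   # percent delinquent, January 2001
jan_2004_rate = 10.7   # percent delinquent, January 2004

drop_in_points = jan_2001_rate - jan_2004_rate   # drop in percentage points
relative_drop = percent_reduction(jan_2001_rate, jan_2004_rate)
```

The 7.7-percentage-point drop corresponds to a relative decline of roughly 42 percent; figures like the 55 percent backlog reduction cited above would be computed the same way from raw counts.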
While these metrics show significant improvements from 2001 to today, statistics for the last few months show that progress has slowed or even taken a few steps backward for payment recording errors and commercial pay backlogs. Our report last year on DOD’s metrics program included a caution that, without modern integrated systems and the streamlined processes they engender, reported progress may not be sustainable if workload is increased. Since we reported problems with DOD’s purchase card program, DOD and the military services have taken actions to address all of our 109 recommendations. In addition, we found that DOD and the military services took action to improve the purchase card program consistent with the requirements of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 and the DOD Appropriations Act for Fiscal Year 2003. Specifically, we found that DOD and the military services had done the following. Substantially reduced the number of purchase cards issued. According to GSA records, DOD had reduced the total number of purchase cards from about 239,000 in March 2001 to about 134,609 in January 2004. These reductions have the potential to significantly improve the management of this program. Issued policy guidance to field activities to (1) perform periodic reviews of all purchase card accounts to reestablish a continuing bona fide need for each card account, (2) cancel accounts that were no longer needed, and (3) devise additional controls over infrequently used accounts to protect the government from potential cardholder or outside fraudulent use. Issued disciplinary guidelines, separately, for civilian and military employees who engage in improper, fraudulent, abusive, or negligent use of a government charge card. 
In addition, to monitor the purchase card program, the DOD IG and the Navy have prototyped and are now expanding a data-mining capability to screen for and identify high-risk transactions (such as potentially fraudulent, improper, and abusive use of purchase cards) for subsequent investigation. On June 27, 2003, the DOD IG issued a report summarizing the results of an in-depth review of purchase card transactions made by 1,357 purchase cardholders. The report identified 182 cardholders who potentially used their purchase cards inappropriately or fraudulently. We believe that consistent oversight played a major role in bringing about these improvements in DOD’s purchase and travel card programs. During 2001, 2002, and 2003, seven separate congressional hearings were held on the Army and Navy purchase and individually billed travel card programs. Numerous legislative initiatives aimed at improving DOD’s management and oversight of these programs also had a positive impact. Another important initiative underway at the department pertains to financial reporting. Under the leadership of the DOD Comptroller, the department is working to instill discipline into its financial reporting processes to improve the reliability of the department’s financial data. Resolution of serious financial management and related business management weaknesses is essential to achieving any opinion on the DOD consolidated financial statements. Pursuant to the requirements in section 1008 of the National Defense Authorization Act for Fiscal Year 2002, DOD has reported for the past 3 years on the reliability of the department’s financial statements, concluding that the department is not able to provide adequate evidence supporting material amounts in its financial statements.
Specifically, DOD stated that it was unable to comply with applicable financial reporting requirements for (1) property, plant, and equipment, (2) inventory and operating materials and supplies, (3) environmental liabilities, (4) intragovernmental eliminations and related accounting entries, (5) disbursement activity, and (6) cost accounting by responsibility segment. Although DOD represented that the military retirement health care liability data had improved for fiscal year 2003, the cost of direct health care provided by DOD-managed military treatment facilities was a significant amount of DOD’s total recorded health care liability and was based on estimates for which adequate support was not available. DOD has indicated that by acknowledging its inability to produce reliable financial statements, as required by the act, the department saves approximately $23 million a year through reduction in the level of resources needed to prepare and audit financial statements. However, DOD has set the goal of obtaining a favorable opinion on its fiscal year 2007 departmentwide financial statements. To this end, DOD components and agencies have been tasked with addressing material line item deficiencies in conjunction with the BMMP. This is an ambitious goal and we have been requested by Congress to review the feasibility and cost effectiveness of DOD’s plans for obtaining such an opinion within the stated time frame. To instill discipline in its financial reporting process, the DOD Comptroller requires DOD’s major components to prepare quarterly financial statements along with extensive footnotes that explain any improper balances or significant variances from previous year quarterly statements. All of the statements and footnotes are analyzed by Comptroller office staff and reviewed by the Comptroller. 
In addition, the midyear and end-of-year financial statements must be briefed to the DOD Comptroller by the military service Assistant Secretary for Financial Management or the head of the defense agency. We have observed several of these briefings and have noted that the practice of preparing and explaining interim financial statements has led to the discovery and correction of numerous recording and reporting errors. If DOD continues to provide for active leadership, along with appropriate incentives and accountability mechanisms, improvements will continue to occur in its programs and initiatives.

We would like to offer two suggestions for legislative consideration that we believe could contribute significantly to the department's ability to not only address the impediments to DOD success but also to incorporate needed key elements to successful reform. These suggestions would include the creation of a chief management official and the centralization of responsibility and authority for business system investment decisions with the domain leaders responsible for the department's various business areas, such as logistics and human resource management. Previous failed attempts to improve DOD's business operations illustrate the need for sustained involvement of DOD leadership in helping to assure that the DOD's financial and overall business process transformation efforts remain a priority. While the Secretary and other key DOD leaders have certainly demonstrated their commitment to the current business transformation efforts, the long-term nature of these efforts requires the development of an executive position capable of providing strong and sustained executive leadership over a number of years and various administrations. The day-to-day demands placed on the Secretary, the Deputy Secretary, and others make it difficult for these leaders to maintain the oversight, focus, and momentum needed to resolve the weaknesses in DOD's overall business operations.
This is particularly evident given the demands that the Iraq and Afghanistan postwar reconstruction activities and the continuing war on terrorism have placed on current leaders. Likewise, the breadth and complexity of the problems preclude the Under Secretaries, such as the DOD Comptroller, from asserting the necessary authority over selected players and business areas. While sound strategic planning is the foundation upon which to build, sustained leadership is needed to maintain the continuity needed for success. One way to ensure sustained leadership over DOD’s business transformation efforts would be to create a full-time executive level II position for a chief management official who would serve as the Principal Under Secretary of Defense for Management. This position would provide the sustained attention essential for addressing key stewardship responsibilities such as strategic planning, performance and financial management, and business systems modernization in an integrated manner, while also facilitating the overall business transformation operations within DOD. This position could be filled by an individual, appointed by the President and confirmed by the Senate, for a set term of 7 years with the potential for reappointment. Such an individual should have a proven track record as a business process change agent in large, complex, and diverse organizations—experience necessary to spearhead business process transformation across the department, and potentially administrations, and serve as an integrator for the needed business transformation efforts. In addition, this individual would enter into an annual performance agreement with the Secretary that sets forth measurable individual goals linked to overall organizational goals in connection with the department’s overall business transformation efforts. Measurable progress towards achieving agreed upon goals would be a basis for determining the level of compensation earned, including any related bonus. 
In addition, this individual’s achievements and compensation would be reported to Congress each year. We have made numerous recommendations to DOD intended to improve the management oversight and control of its business systems investments. However, as previously mentioned, progress in achieving this control has been slow and, as a result, DOD has little or no assurance that current business systems investments are being spent in an economically efficient and effective manner. DOD’s current systems funding process has contributed to the evolution of an overly complex and error-prone information technology environment containing duplicative, nonintegrated, and stovepiped systems. Given that DOD plans to spend approximately $19 billion on business systems and related infrastructure for fiscal year 2004—including an estimated $5 billion in modernization money—it is critical that actions be taken to gain more effective control over such business systems funding. The second suggestion we have for legislative action to address this issue, consistent with our open recommendations to DOD, is to establish specific management oversight, accountability, and control of funding with the “owners” of the various functional areas or domains. This legislation would define the scope of the various business areas (e.g., acquisition, logistics, finance and accounting) and establish functional responsibility for management of the portfolio of business systems in that area with the relevant Under Secretary of Defense for the six departmental domains and the CIO for the Enterprise Information Environment Mission (information technology infrastructure). For example, planning, development, acquisition, and oversight of DOD’s portfolio of logistics business systems would be vested in the Under Secretary of Defense for Acquisition, Technology, and Logistics. 
We believe it is critical that funds for DOD business systems be appropriated to the domain owners in order to provide for accountability, transparency, and the ability to prevent the continued parochial approach to systems investment that exists today. The domains would establish a hierarchy of investment review boards with DOD-wide representation, including the military services and Defense agencies. These boards would be responsible for reviewing and approving investments to develop, operate, maintain, and modernize business systems for the domain portfolio, including ensuring that investments were consistent with DOD’s BEA. All domain owners would be responsible for coordinating their business systems investments with the chief management official who would chair the Defense Business Systems Modernization Executive Committee and provide a cross-domain perspective. Domain leaders would also be required to report to Congress through the chief management official and the Secretary of Defense on applicable business systems that are not compliant with review requirements and to include a summary justification for noncompliance. As seen again in Iraq, the excellence of our military forces is unparalleled. However, that excellence is often achieved in the face of enormous challenges in DOD’s financial management and other business areas, which have serious and far-reaching implications related to the department’s operations and critical national defense mission. Our recent work has shown that DOD’s long-standing financial management and business problems have resulted in fundamental operational problems, such as failure to properly pay mobilized Army Guard soldiers and the inability to provide adequate accountability and control over supplies and equipment shipments in support of Operation Iraqi Freedom. 
Further, the lack of adequate transparency and appropriate accountability across all business areas has resulted in certain fraud, waste, and abuse and hinders DOD's attempts to develop world-class operations and activities to support its forces. As our nation continues to be challenged with growing budget deficits and increasing pressure to reduce spending levels, every dollar that DOD can save through improved economy and efficiency of its operations is important. DOD's senior leaders have demonstrated a commitment to transforming the department and improving its business operations and have taken positive steps to begin this effort. We believe that implementation of our open recommendations and our suggested legislative initiatives would greatly improve the likelihood of meaningful, broad-based reform at DOD. The continued involvement and monitoring by congressional committees will also be critical to ensure that DOD's initial transformation actions are sustained and extended and that the department achieves its goal of securing the best performance and highest measure of accountability for the American people. We commend the Subcommittee for holding this hearing and we encourage you to use this vehicle, on at least an annual basis, as a catalyst for long overdue business transformation at DOD. Mr. Chairman, this concludes our statement. We would be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-9095 or [email protected], Randolph Hite at (202) 512-3439 or [email protected], or Evelyn Logue at (202) 512-3881.
Other key contributors to this testimony include Bea Alff, Meg Best, Molly Boyle, Art Brouk, Cherry Clipper, Mary Ellen Chervenic, Francine Delvecchio, Abe Dymond, Eric Essig, Gayle Fischer, Geoff Frank, John Kelly, Patricia Lentini, Elizabeth Mead, Mai Nguyen, Greg Pugnetti, Cary Russell, John Ryan, Darby Smith, Carolyn Voltz, Marilyn Wasleski, and Jenniffer Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

GAO has issued several reports pertaining to the Department of Defense's (DOD) architecture and systems modernization efforts which revealed that many of the underlying conditions that contributed to the failure of prior DOD efforts to improve its business systems remain fundamentally unchanged. The Subcommittee on Terrorism, Unconventional Threats and Capabilities, House Committee on Armed Services, asked GAO to provide its perspectives on (1) the impact long-standing financial and related business weaknesses continue to have on DOD, (2) the underlying causes of DOD business transformation challenges, and (3) DOD business transformation efforts. In addition, GAO reiterates the key elements to successful reform: (1) an integrated business transformation strategy, (2) sustained leadership and resource control, (3) clear lines of responsibility and accountability, (4) results-oriented performance, (5) appropriate incentives and consequences, (6) an enterprise architecture to guide reform efforts, and (7) effective monitoring and oversight. GAO also offers two suggestions for legislative consideration that are intended to improve the likelihood of meaningful, broad-based financial management and related business reform at DOD.
DOD's senior civilian and military leaders are committed to transforming the department and improving its business operations and have taken positive steps to begin this effort. However, overhauling the financial management and related business operations of one of the largest and most complex organizations in the world represents a huge management challenge. Six DOD program areas are on GAO's "high risk" list, and the department shares responsibility for three other governmentwide high-risk areas. DOD's substantial financial and business management weaknesses adversely affect not only its ability to produce auditable financial information, but also to provide timely, reliable information for management and Congress to use in making informed decisions. Further, the lack of adequate transparency and appropriate accountability across all of DOD's major business areas results in billions of dollars in annual wasted resources in a time of increasing fiscal constraint. Four underlying causes impede reform: (1) lack of sustained leadership, (2) cultural resistance to change, (3) lack of meaningful metrics and ongoing monitoring, and (4) inadequate incentives and accountability mechanisms. To address these issues, GAO reiterates the keys to successful business transformation and offers two suggestions for legislative action. First, GAO suggests that a senior management position be established to spearhead DOD-wide business transformation efforts. Second, GAO proposes that the leaders of DOD's functional areas, referred to as departmentwide domains, receive and control the funding for system investments, as opposed to the military services. Domain leaders would be responsible for managing business system and process reform efforts within their business areas and would be accountable to the new senior management official for ensuring their efforts comply with DOD's business enterprise architecture.
To determine the extent to which the structure of the Promise Neighborhoods program aligns with program goals and how Education selected grantees, we reviewed relevant Federal Register notices, application guidance, and agency information on applicants for fiscal year 2011 and 2012 implementation grants. To determine how Education aligns Promise grant activities with other federal programs, we reviewed documentation on Education’s alignment efforts. To assess Education’s approach to evaluating the program, we reviewed its grant monitoring reports, performance measures, and guidance for data collection. To determine the extent to which Promise grants enabled collaboration at the local level, we used GAO’s prior work on enhancing collaboration in interagency groups as criteria. We compared the Promise grants’ collaboration approaches to certain successful approaches used by select interagency groups and reviewed implementation grantees’ application materials. To learn about grantees’ experiences with the program, we conducted a web-based survey of all planning and implementation grantees nationwide from late August to early November 2013. We received responses from all 48 grantees. We asked grantees to provide information on the application and peer review process, coordination of federal resources, collaboration with local organizations, and results of the planning grants. Because not all respondents answered every question, the number of grantees responding to any particular question will be noted throughout the report. In addition, we conducted site visits to 11 planning and implementation grantees. During these visits, we interviewed five planning grantees and six implementation grantees. Sites were selected based on several factors, such as the type of grant awarded, the location of grantees, and whether they were urban or rural. 
For all four objectives, we interviewed Education officials, technical assistance providers, and subject matter specialists from the Promise Neighborhoods Institute. (See appendix I for more detail on the scope and methodology.) We conducted this performance audit from February 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Promise Neighborhoods program is a place-based program that attempts to address the problems of children and youth in a designated geographic footprint. The program is designed to identify and address the needs of children in low-performing schools in areas of concentrated poverty by aligning a cradle-to-career continuum of services. The program moves beyond a focus on low-performing schools by recognizing the role an entire community plays in a child's education (see fig. 1). Place-based initiatives provide communities the flexibility to address their unique needs and interrelated problems by taking into account the unique circumstances, challenges, and resources in that particular geographic area. The Promise program is one of several place-based initiatives at the federal level, but it is the only one focused on educational issues. In addition to Education, the Departments of Justice (Justice), Housing and Urban Development (HUD), and Health and Human Services (HHS) also have grant programs aimed at impoverished neighborhoods. Together, these four agencies and their grant programs form the core of the White House Neighborhood Revitalization Initiative.
This initiative coordinates neighborhood grant programs at the federal level across agencies, and identifies and shares best practices. Each agency's grant program focuses on its respective agency's core mission, but together, they focus on key components of neighborhood revitalization: education, housing, crime prevention, and healthcare. Generally, the purpose of the Promise grants is to fund individual grantees' efforts to plan for and create a cradle-to-career pipeline of services based on the specific needs of their communities. The grants are focused on improving student outcomes on 15 performance indicators, chosen by Education. Along with the grantee, partner organizations, funded by federal, state, local, private, or nonprofit organizations, are expected to collaborate to provide matching funds and services. A number of nonprofits and foundations have worked on initiatives to address complex problems in a similarly comprehensive way. Their approach brings together a group of stakeholders from different sectors to collaborate on a common agenda, align their efforts, and use common measures of success. This approach has been described as the collective impact model. The premise of the model is that better cross-sector alignment and collaboration creates change more effectively than isolated interventions by individual organizations. A number of organizations have used this approach to address issues such as childhood obesity and water pollution. Several other cradle-to-career place-based collective impact programs share key characteristics with the Promise program, including Cincinnati's Strive program and the Harlem Children's Zone. These collective impact initiatives use a centralized infrastructure and a structured process, including training, tools, and resources, intended to result in a common agenda, shared measurement, and mutually-reinforcing activities among all participants.
This centralized infrastructure requires staff to manage technology, communications support, data collection, reporting, and administrative details. The Promise grantees’ role is to create and provide this centralized infrastructure for their communities. The Promise program relies on a two-phase strategy for awarding grants, which includes both one-year planning grants and three- to five-year implementation grants. (See table 1.) Among other things, planning grantees are required to conduct a comprehensive needs assessment of children and youth in the neighborhood and develop a plan to deliver a continuum of solutions with the potential to achieve results. This effort involves building community support for and involvement in developing the plan. Planning grantees are also expected to establish effective partnerships with organizations for purposes such as providing solutions along the continuum and obtaining resources to sustain and scale up the activities that work. Finally, planning grantees are required to plan to build, adapt, or expand a longitudinal data system to provide information and use data for learning, continuous improvement, and accountability. The implementation grant provides funds to develop the administrative capacity to implement the planned continuum of services. Education expects implementation grantees to build and strengthen the partnerships they developed to provide and sustain services and to continue to build their longitudinal data systems. Education awarded most of the 2010-2012 grants to non-profit organizations (38 of 48), eight to institutions of higher education, and two to tribal organizations. Almost all (10 of 12) implementation grantees received planning grants, while two did not. (See fig. 2 for locations of grantees.) (See appendix II for a list of grantees and year of grant award.) 
The planning and implementation grant activities that Education developed for the Promise program generally align with Education’s goal of significantly improving the educational and developmental outcomes of children and youth in the nation’s most distressed communities. According to Education officials, the planning grant award process enabled them to identify community-based organizations in distressed neighborhoods with the potential to effectively coordinate the continuum of services for students living in the neighborhood. The eligibility requirements, which included matching funds or in-kind donations and an established relationship with the community to be served, helped to ensure that grantees had financial and organizational capacity and were representative of the area to be served. Education developed criteria to evaluate applications and select grantees based on the grantees’ ability to describe the need for the project; the quality of the project design, including the ability to leverage existing resources; the quality of the project services; and the quality of the management plan. Education’s Promise planning grants were intended to enhance the capacity of identified organizations to create the cradle-to-career continuum. The activities required of planning grantees enable grantees and their partners to gain a depth of knowledge about their communities and the communities’ needs, which can increase their capacity to focus on improving educational and developmental outcomes for children and youth throughout their neighborhood. Through a separate competition, Education identified organizations that application reviewers determined were most ready to implement their plans. 
While acknowledging that the implementation grantees are best positioned to determine the allocation of grant funds, Education expects that grant funds will be used to develop the administrative capacity to implement the planned continuum and that the majority of resources to provide services to students and families will come from other public and private funding sources rather than from the grant itself. This expectation gives the Promise strategies a chance to extend beyond the 5-year life of the grant. Further, the requirement that grantees build a longitudinal data set allows Promise grantees and their partners to review and analyze robust data in real time to make informed decisions about whether to adjust their strategies. The data can also help the grantees and Education learn about the impact of the program. Education identified 10 desired results from implementation of the program, which cover the cradle-to-career age span that Promise Neighborhoods are expected to address. A technical assistance provider stated that the list of desired results help grantees focus on improving educational and developmental outcomes across the entire continuum. (See table 2.) (The indicators that measure progress toward achieving results are listed in Appendix III.) Education’s grantee selection process was generally clear and transparent. However, Education did not communicate clearly to planning grantees about the probability of receiving an implementation grant and its expectations for grantees to continue their efforts without implementation funding. This lack of clarity created challenges for some grantees. Education outlined its selection criteria and how grant applications would be scored in its grant announcements and selected peer reviewers from outside the organization. According to Education officials, the peer reviewers had expertise in various related fields, including community development and all levels of education. 
Education provided additional training on the application review process. For the planning grant selection, Education divided about 100 peer reviewers into panels of three to review packages of about 10 applications. Afterward, peer reviewers conferred about scores in a conference call. For the first implementation grant selection, Education had a two-tiered peer review process. During the first tier, peer reviewers were divided into panels of three to review approximately seven applications. During the second tier review of the 16 highest scoring applications, panels of reviewers were adjusted so that different reviewers read and scored different applications. For the second implementation grant selection, there was only one round of reviews. Reviewers were asked to review the applications and submit comments before meeting on-site to discuss applications. Education posted the results online, including peer reviewer comments for grantees and a list of applicants with scores above 80 out of 100 points. In our web-based survey of grantees, grantees had mixed views on the clarity of application requirements and the helpfulness of peer reviewer comments. Specifically, 13 of 18 planning grantees who applied unsuccessfully for implementation grants and responded to the relevant survey question said the application requirements were very clear or extremely clear, while 8 of 19 grantees that responded said the same about peer reviewer scores and comments (see fig. 3). The unsuccessful applicants gave somewhat lower marks to the helpfulness of peer reviewer comments in improving their future applications and strengthening their current strategies (see fig. 4). Some of the 11 planning and implementation grantees that we interviewed raised concerns about specific application guidelines, such as how the term “neighborhood” is defined and the length of the application. 
Specifically, two rural grantees said that the grant application and materials had a few areas that seemed to be more geared to urban or suburban grantees. For example, the term "neighborhood" was somewhat difficult for them to interpret in a rural context. In fact, two rural grantees included multiple towns or counties in their neighborhood footprints. Additionally, two grantees we spoke with had concerns about the implementation grant application's 50-page recommended maximum for the project narrative. Both organizations limited their narratives to 50 pages, but said they later learned that most of the successful grant recipients had exceeded this limit, often by a large amount. The timing of the grant cycles created either an overlap or a long gap between the two grants. Grantees who applied for the implementation grant in the first cycle after receiving a planning grant had an overlap between executing the first grant and applying for the second grant. According to Education officials, these grantees were unable to fully apply the knowledge gained in the planning year to develop their implementation applications. For example, one grantee said having to apply for the implementation grant during the planning year made it difficult to create opportunities for community input into the planning process. On the other hand, one of the four grantees that received an implementation grant 2 years after receiving a planning grant faced challenges sustaining the momentum of its efforts without additional funding. Another grantee in the same situation was able to sustain momentum with a separate grant from a private foundation. Education officials said they became aware of the problems with the timing of the implementation applications a few months into the first planning grant year. However, they said they did not have much flexibility in timing the grant cycles.
For example, they said that they needed to allow time for public comment on the grant notification in the Federal Register. In addition, they said that agency budget decisions were delayed that year because the Department was operating under a continuing resolution for over 6 months in fiscal year 2011—the first year implementation grants were awarded. Some grantees also said there was a disconnect between the planning and the implementation grant application processes. Specifically, two officials from the six implementation grantees we visited told us that a high-quality planning year was not nearly as important for obtaining an implementation grant as having someone who could write a high-quality federal grant application. For example, one grantee noted that writing a good implementation grant application was not heavily dependent on information gleaned from the planning process. Another grantee said that the implementation grant application was written by a completely different person who was not involved in planning grant activities. Some grantees who received only planning grants reported in our survey and in interviews that they experienced challenges continuing their work without implementation funds. In addition, two of the five planning grantees we interviewed had concerns with Education’s strategy of awarding few implementation grants compared with the number of planning grants. Education informed grantees there was a possibility they would not receive an implementation grant following the planning grant, but no information was provided about the likelihood of whether this would occur. We found indications that grantees did not fully appreciate that receiving a planning grant would not necessarily result in receiving an implementation grant. Three of the five planning grantees we interviewed stated that they did not have contingency plans for continuing their Promise Neighborhood efforts in the event that they did not receive implementation funding. 
The lack of contingency planning raises questions about the grantees’ understanding of the probability of receiving an implementation grant. Internal control standards state that management should ensure that effective external communications occur with groups that can have a serious impact on programs, projects, operations, and other activities, including budgeting and financing. To date, Education has awarded 46 planning grants (21, 15, and 10 in 2010, 2011, and 2012, respectively) and 12 implementation grants. Even though all but two implementation grants were awarded to planning grantees, fewer than one-quarter of planning grantees received implementation funding. (See table 3.) Education officials provided several reasons for separating the planning and implementation grants and for not awarding implementation grants to all planning grantees who applied. Officials said that when they awarded the first planning grants, they were not sure which neighborhoods had potential grantees with the capacity to implement a Promise plan. In their view, the planning grants allowed them to invest in the capacity of communities to take on this work, while the implementation grants were only awarded to those that demonstrated they were ready for implementation. Education officials said it was important that grantees demonstrate they have an implementation plan in place before receiving such a large sum of money. In addition, after the first round of implementation grants was awarded, they noted that some applicants did not receive implementation grants because they were not yet competitive—in part because they had applied for the implementation grants before their planning efforts were complete. Finally, in commenting on a draft of this report, Education officials said that in several years, Congress appropriated less funding than was requested, which, they said, affected the number of implementation grants Education awarded.
In 2010, both Education’s Federal Register Notice Inviting Applications for planning grants and a related frequently asked questions document informed organizations receiving planning grants that they should not necessarily plan on automatically receiving implementation grants. The frequently asked questions guidance noted that the two types of grants could stand alone. For example, an applicant could receive just a planning grant, consecutive planning and implementation grants, or—if the applicant was further along in the planning process—just an implementation grant. Education officials told us that they viewed the planning grant activities as useful in themselves. For example, they told us that the planning process offers rich data and begins the process of bringing together partners and breaking down silos. They expected that planning grantees that applied for but did not receive implementation funding could continue their efforts without implementation grant funding, using their partners’ pledged matching funds to implement their plans on a smaller scale. They noted that the requirement to develop memoranda of understanding with partners should have signaled that the obligations of the partner organizations were not to be contingent upon receipt of an implementation grant. However, Education did not require grantees to have matching funds in hand before submitting their applications. Especially in light of the difficult fiscal climate that federal agencies will likely continue to face in the future, we believe that it is important for Education to clearly communicate its expectations for planning and implementation grants to grantees. Clear communication can also help grantees form more realistic expectations about future funding opportunities, given the fiscal realities of the Promise program over the past 5 years.
Grantees who had not received implementation grants were trying to continue their efforts and most reported significant challenges in sustaining momentum. According to our survey, since the end of the planning grant, most planning grantees who did not receive an implementation grant (17 out of 29 that answered the related question) found it very or extremely challenging to maintain funding, 12 out of 29 planning grantees felt that maintaining key leadership positions was very or extremely challenging, and 13 out of 29 planning grantees found that hiring staff was very or extremely challenging. Four of the five planning grantees we interviewed who had not received implementation grants told us that they need to determine how to implement scaled-down versions of programs and services identified in their implementation grant applications. They described challenges continuing their work without implementation funding. For example, three grantees noted that partners had pledged funding as a match for federal dollars in their implementation grant proposal. Without the leverage of implementation grant funds, it was difficult to maintain the proposed funding streams. All of the five grantees we interviewed that had received only planning grants said the planning process was very helpful in building connections and trust and deepening communication among partners, and between partners and the community. Four grantees were concerned, however, that the trust and momentum they had built might dissipate if they were not able to carry out their plans without an implementation grant. 
In an effort to target its resources and align the Promise program goals with those of other place-based initiatives, the Promise program coordinates closely with a limited number of federal programs within Education and with other federal programs as part of the White House Neighborhood Revitalization Initiative (NRI). The NRI is an interagency coordinating body that aligns place-based programs run by HUD, HHS, Justice, and the Department of the Treasury (Treasury) (see fig. 5). Coordination through NRI is more structured than internal coordination within Education, which, according to Promise program officials, occurs as needed. Liaisons from each grant program meet at biweekly and monthly NRI meetings. They have formed a program integration workgroup to coordinate program development, monitoring, and technical assistance for the grant programs included. For example, they conducted a joint monitoring trip to a neighborhood in San Antonio, Texas, that has Promise, HUD’s Choice Neighborhood, and Justice’s Byrne Criminal Justice Innovation grants. In coordinating within Education and with NRI, Education’s efforts are focused on ensuring that grants are mutually reinforcing. These coordination activities include aligning goals, developing common performance measures where there are common purposes, and sharing technical assistance resources in areas where programs address similar issues or fund similar activities. (See table 4.) The Promise program has also participated in another place-based program led out of the White House Domestic Policy Council: the Strong Cities, Strong Communities initiative. This program sends teams of federal officials to work with distressed cities, providing them expertise to more efficiently and effectively use the federal funds they already receive. Education’s Promise program participates in initial on-site assessments of communities.
Education staff assisted two of the participating communities by providing education expertise at their request. The Promise program also coordinates with the Promise Zones initiative, which is led by the five NRI agencies and five other agencies in partnership with state and local governments, businesses, and non-profit organizations. To be designated, Promise Zones had to meet a number of requirements, including certain poverty thresholds and population levels, and only areas that already had certain NRI grants or a similar rural or tribal grant were eligible to apply in the first round. As of January 2014, three Promise Neighborhoods implementation sites in San Antonio, Los Angeles, and Southeastern Kentucky were located in designated Promise Zones, which provide additional opportunity for coordination at the federal and local level. The Promise Neighborhoods program does, on occasion, coordinate with other individual federal agencies and programs outside of the NRI, but officials stated that the program is focused on deepening and broadening the communication it has with the five named NRI programs and Promise Zones. Promise Neighborhoods officials explained that they had concerns about spreading their coordination efforts too thinly given the large number of programs grantees may include in their strategies. In addition to Promise grants from Education, individual Promise Neighborhoods have access to a broad range of federal programs from other agencies, including many programs that are not part of NRI. However, Education has not developed an inventory of federal programs that could contribute to Promise program goals that it could share with planning and implementation grantees and use to make its own decisions about coordination across agencies. In recent work examining approaches used by interagency groups that successfully collaborated, we found that an inventory of resources related to reaching interagency goals can be used to promote an understanding of related governmentwide programs.
Such inventories are useful in making decisions about coordinating related programs across agency lines and between levels of government, according to officials. We have also found that creating a comprehensive list of programs is a first step in identifying potential fragmentation, overlap, or duplication among federal programs or activities. As shown in table 5, the 12 implementation grantees we surveyed stated that they included a variety of federal resources in their Promise Neighborhoods strategies. AmeriCorps was included in 9 out of 11 implementation grantees’ strategies, followed by Head Start (8 of 12) and Education’s School Improvement Grants (6 of 11). None of these are part of NRI. Few grantees said that NRI programs were part of their Promise strategies. For example, four grantees said that a Choice Neighborhood grant was part of their Promise strategy, and three grantees stated that DOJ’s Byrne program was part of their strategy. Education officials attributed the small number of grantees that use HUD’s Choice program to the fact that few grantees have distressed public housing within their footprint that is eligible for this funding. Although Promise grantees conduct their own inventories of the existing federal and other resources in their neighborhoods in order to develop their strategies, two grantees we spoke with were unaware of some of the other federal programs that could contribute towards their strategies. For example, one implementation grantee we spoke with that had concerns about school safety was unaware of DOJ’s Byrne Criminal Justice Innovation grant program. Another planning grantee who completed our survey commented that a list of related federal programs like the one in our survey would be especially useful to grantees who did not receive implementation grants.
Education officials with the Promise program told us that sometimes grantees are unaware that the community is benefiting from certain federal programs because programs are renamed as they filter down through the state or local levels. Education officials said they emphasize to grantees the importance of reaching out to key partners to ensure they are aware of other federally funded programs in the neighborhood because their partners may be more knowledgeable about other sources of federal funding. While encouraging grantees to reach out to key partners is helpful, Education, through its coordination with other federal agencies, would likely have more knowledge about existing federal resources. Without a federal level inventory, Education is not well-positioned to support grantee efforts to identify other federal programs that could contribute to Promise program goals. Further, Education lacks complete information to inform decisions about future federal coordination efforts and identify potential fragmentation, overlap, and duplication. While Education is collecting a large amount of data from Promise grantees that was intended, in part, to be used to evaluate the program, the Education offices responsible for program evaluation—the Institute of Education Sciences (IES) and Office of Planning, Evaluation, and Policy Development (OPEPD)—have not yet determined whether or how they will evaluate the program. One of Education’s primary goals for the Promise program, as described in the Federal Register, is to learn about the overall impact of the program through a rigorous program evaluation. Applicants are required to describe their commitment to work with a national evaluator for Promise Neighborhoods to ensure that data collection and program design are consistent with plans to conduct a rigorous national evaluation of the program and the specific solutions and strategies pursued by individual grantees.
We have found that federal program evaluation studies provide external accountability for the use of public resources. Evaluation can help to determine the “value added” of the expenditure of federal resources or to learn how to improve performance—or both. Evaluation can play a key role in strategic planning and in program management, informing both program design and execution. Education requires implementation grantees to report annually on their performance using 15 indicators. The indicators include graduation rates, attendance, academic proficiency, student mobility, physical activity, and perceptions of safety. (See table 11 in appendix III.) Education contracted with the Urban Institute to provide guidance on how to collect data on the indicators, including data sources and survey techniques. According to Urban Institute officials, they used existing, validated measures whenever possible to ensure comparability across programs. Seven of the 12 implementation grantees we surveyed said the guidance documents were extremely or very helpful, while four found it moderately helpful and one somewhat helpful. The Urban Institute has analyzed the data on the indicators for the first implementation year (the baseline), but Education has not decided whether it will make the first year’s data public because it was not collected in a consistent manner and not all grantees were able to collect all of the necessary data. According to Promise program officials, there were inconsistencies in data collection because guidance was not available until February 2013, 13 months after 2011 implementation grants were awarded and over 1 month after 2012 implementation grants were awarded. Promise officials stated that they will use the performance data to target their technical assistance. They are still working with grantees to develop meaningful targets for the second implementation year.
Urban Institute officials noted that these 15 indicators help grantees focus their efforts on the outcomes they are trying to achieve. In addition, Promise grantees are required to develop a longitudinal data system to collect information on the individuals served, the services provided in the cradle-to-career continuum, and the related outcomes. Grantees are expected to use the longitudinal data to evaluate their programs on an ongoing basis and make adjustments to their strategies and services, as discussed later in this report. Grantees are also required to provide the longitudinal data to Education, which Education officials said they may use to create a restricted-use data set. However, Education currently does not have a plan for analyzing the data. In commenting on a draft of this report, Education stated it must first conduct a systematic examination of the reliability and validity of the data to determine whether it can be used for a descriptive study and a restricted-use data set. Education further stated that the restricted-use data set would only be made available to external researchers after Education determines that the data quality is adequate and appropriate for research; analyzes the data, taking into account privacy concerns; and determines whether to release its own report. In addition, officials from IES and OPEPD cited limitations and challenges to using the longitudinal data for program evaluation. An official from IES, the entity responsible for all impact evaluations conducted by Education, told us that it is not feasible to conduct an impact evaluation of individual program pieces or an overall evaluation of the Promise approach. The official offered three options for evaluation. IES’ preferred option is to conduct a rigorous impact evaluation with a control group obtained through randomized assignment to the program. However, Promise Neighborhoods are not designed to create such a control group.
Another option would be for IES to use students or families who were not chosen to participate in an oversubscribed program as a control group, but an informal poll that IES took at a Promise Neighborhoods conference suggested that there were not a sufficient number of oversubscribed programs. A third option was to develop a comparison group of neighborhoods that did not receive a Promise Neighborhood grant. However, IES officials question whether such an approach would enable them to match neighborhoods that were comparable to Promise neighborhoods at the beginning of the grant period. Finally, IES noted that collecting additional data for a control group could be expensive. Education’s OPEPD is responsible for conducting other types of program evaluations. According to Education officials, it could conduct a more limited evaluation focused on outcomes without demonstrating that they are a direct result of the Promise program, but they have no specific plans to do so. An OPEPD official stated OPEPD is reluctant to commit to a plan because they have not yet seen the data and do not know how reliable or complete it will be. In addition, the official said that OPEPD is unsure about funding and that any comprehensive evaluations are expensive to carry out. By creating a restricted-use data set, OPEPD hopes that other researchers may have the funding to use the data to reach some conclusions about the program. The OPEPD official further explained that no one has ever evaluated a community-based approach like this one and that they hope researchers may have some ideas about how to do so. Researchers at the Urban Institute and within the Promise grantee community have proposed other options for evaluating the program. A researcher at the Urban Institute noted that random assignment is not the right approach for evaluating place-based programs. 
Instead, the researcher recommends a variety of other options for evaluating such programs, including approaches that estimate a single site’s effect on outcomes and aggregating those outcomes. This differs from the traditional program evaluation approach, which IES has considered, of isolating the effects of an intervention so that its effects can be measured separately from other interventions. While Education recognizes the importance of evaluating the Promise program, it lacks a plan to do so. If an evaluation is not conducted, Education will have limited information about the Promise program’s success or the viability of the program’s collaborative approach. The Promise program generally requires grantees to use collaborative approaches. We found that grantees are following approaches consistent with those we have recognized as enhancing and sustaining collaboration with partners. The approaches we have previously identified include:

- Establishing common outcomes: Establishing common outcomes helps collaborating agencies develop a clear and compelling rationale to work together.

- Addressing needs by leveraging resources: Leveraging the various human, information technology, physical, and financial resources available from agencies in a collaborative group allows the group to obtain benefits that would not be available if they worked separately.

- Tracking performance and maintaining accountability: Tracking performance and other mechanisms for maintaining accountability are consistent with our prior work, which has shown that performance information can be used to improve results by setting priorities and allocating resources to take corrective actions to solve program problems.

The approaches are discussed below and in Tables 6 through 8. Grantees and partners provided examples of how they have collaborated through the Promise grant to deliver services and supports that are intended to improve educational and developmental outcomes.
Grantees and their partners focused on delivering services at various steps along the cradle-to-career pipeline, including:

- Early learning supports: programs or services designed to improve outcomes and ensure that young children enter kindergarten and progress through early elementary school grades demonstrating age-appropriate functioning.

- K-12 supports: programs, including policies and personnel, linked to improving educational outcomes for children in pre-school through 12th grade. These include developing effective teachers and principals, facilitating the use of data on student achievement and student growth to inform decision-making, supporting a well-rounded curriculum, and creating multiple pathways for students to earn high school diplomas.

- College and career supports: programs preparing students for college and career success. These include partnering with higher education institutions to help instill a college-going culture in the neighborhood, providing dual-enrollment opportunities for students to gain college credit while in high school, and providing access to career and technical education programs.

- Family and community supports: these include child and youth physical, mental, behavioral, and emotional health programs; safety programs, such as those to prevent or reduce gang activity; and programs that expand access to quality affordable housing.

For examples of the services delivered and outcomes reported by grantees for each part of the cradle-to-career pipeline, see table 9 below. The Promise program has energized the 48 planning and implementation grantees and their partners to tackle the complex challenges facing impoverished neighborhoods together. While grantees said they will continue their efforts to build their Promise Neighborhoods, planning grantees faced challenges in sustaining their work over the long term without implementation grants.
Planning grantees, especially those concerned about building trust with their communities and partners, may have been better served if Education had provided a more transparent, realistic picture of the fiscal reality of the Promise program and its potential impact on implementation grant funding. Lack of clear communication about the expectations Education had for planning grantees who did not receive implementation funding made it difficult for these grantees to develop specific plans to continue their efforts without future Promise funds. However, the reported small, yet tangible benefits that some communities pursued during the planning year—such as a safe place for children to play—increased momentum and built trust with community members. Encouraging such “early wins” could help all grantees and their partners build upon and improve their efforts, especially since implementation funding has proven scarce. Additionally, much of the information grantees use about what existing federal, state, and local programs and resources to incorporate into their strategies is gleaned through their needs assessment at the local level. Education has not provided grantees with comprehensive information about other federal resources that may be available to use in their Promise strategies. Education is best positioned to develop and share such an inventory of federal programs that relate to the goals of the Promise program. Without such an inventory, Education may be missing opportunities to better support grantees, find other federal programs for future coordination efforts, and identify potential fragmentation, overlap, and duplication at the federal level. One of the Promise program’s primary goals is to identify the overall impact of its approach and the relationship between particular strategies and student outcomes. Grantees are investing significant time and resources to collect data to assess the program, but Education lacks a clear plan for using it. 
Without evaluating the program, it will be difficult for Education to determine whether it is successfully addressing the complex problem of poor student outcomes in impoverished neighborhoods. Finally, the Promise program is one of several place-based and collective impact programs being implemented across many federal agencies. Given the number of these initiatives, not evaluating the program limits Education and other agencies from learning about the extent to which the model is effective and should be replicated. In order to improve grantees’ planning and implementation efforts, increase the effectiveness of grantee efforts to integrate and manage resources, and learn more about the program’s impact, we recommend that the Secretary of Education take the following three actions:

1. Clarify program guidance about planning and implementation grants to provide reasonable assurance that planning grantees are better prepared to continue their efforts in the absence of implementation funding. Additional guidance could include encouraging grantees to set aside a small amount of the grant to identify and deliver early, tangible benefits to their neighborhoods.

2. Develop and disseminate to grantees, on an ongoing basis, an inventory of federal programs and resources that can contribute to the Promise Neighborhoods program’s goals, to better support coordination across agency lines.

3. Develop a plan to use the data collected from grantees to conduct a national evaluation of the program.

We provided a draft of this report to the Department of Education for review and comment. Education’s comments are reproduced in appendix IV and are summarized below. Education also provided technical comments, which we incorporated into the final report as appropriate. Education outlined the steps it would take to implement our three recommendations, and provided its perspective on communicating expectations to grantees regarding future funding.
Education did not explicitly agree or disagree with our findings. Regarding our finding that Education did not communicate clearly to planning grantees about its expectations for the grants, Education stated that in any given year it does not know and therefore cannot communicate the amount of funding available or the number of grant awards anticipated in the following year. We agree, and have clarified our finding in the report accordingly. Education stated that an early assessment of planning grantees’ likelihood of receiving implementation funding would have been premature. Education noted that although Congress has funded the Promise program for the past 5 years, in 4 of those 5 years it appropriated far less than the President requested, and for the last 3 years the program has essentially been level funded. Education further stated that this underscores the limited control that the program had over the number of implementation grants made. We recognize that federal agencies have faced a difficult fiscal climate over the past few years, particularly for discretionary programs. For that reason—and especially given the level at which the Promise program has been funded for the past 3 years—we believe it is even more important that Education be clear and transparent with planning grantees about historical fiscal realities of the Promise program and the implications this may have on future implementation grants. We also believe this situation highlights the need for planning grantees to have contingency plans, especially given Education’s expectations that grantees continue their efforts even in the absence of implementation funding. We further believe that this also underscores the importance of “early wins” to demonstrate what can be achieved when grantees and their partners work collaboratively, as such demonstrations can encourage them to continue their efforts even without implementation funding. 
In discussing its perspective on communicating expectations to grantees regarding future funding, Education stated that its Notifications Inviting Applications indicated that future funding was contingent on the availability of funds and that the program’s frequently asked questions document noted that implementation funding was not guaranteed and that planning grantees would have to compete for implementation grants. We believe that our report adequately reflects these communication efforts. However, as we reported, Education did not communicate to planning grantees that it expected them to continue their efforts even in the absence of implementation funding. Nor did Education communicate to implementation grant applicants that it expected them to be able to use their partners’ pledged matching funds even if they did not receive implementation grants. This lack of communication was evidenced by planning grantees’ lack of contingency plans and challenges they faced accessing the pledged matching funds, according to the grantees we interviewed. In response to our first recommendation, Education stated that it would continue to communicate to planning grant applicants that implementation funding is contingent on the availability of funds, and that it would provide more targeted technical assistance to planning grant recipients regarding strategies for continuing grantees’ efforts absent implementation funding. Education also stated that it would clarify to grantees that planning grant funds could be used to achieve early, tangible benefits. Regarding our second recommendation, Education stated that it would work with its technical assistance providers to create a mechanism to distribute a comprehensive list of external funding opportunities, programs and resources on a regular basis to better support the grantees’ implementation efforts. 
With regard to our final recommendation, Education stated that it will consider options for how and whether it can use the data collected from grantees to conduct a national evaluation. Education stated that as a first step it will conduct a systematic evaluation of the reliability and validity of the data, given issues that we and Education noted about inconsistencies in data collection and privacy concerns. In addition, Education stated that to date, it has not received sufficient funding to support a national evaluation. We agree that conducting evaluations can be costly. However, given that one of Education’s primary goals is to learn about the overall impact of the program through a rigorous program evaluation, we continue to believe that absent an evaluation, it will be difficult for Education to determine whether it is successfully addressing the complex problem of poor student outcomes in impoverished neighborhoods—one of its stated goals. Further, developing an evaluation plan would provide critical information about the resources required to conduct an evaluation, and could better inform future funding requests for such an evaluation. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Education and other interested congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at 617-788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are in appendix V. 
To better understand grantees’ experiences with the Promise Neighborhoods program, we conducted a web-based survey of all 48 planning and implementation grantees. The survey was conducted from August 23, 2013 through November 7, 2013. We received completed surveys from all 48 grantees for a 100 percent response rate. The survey included questions about the clarity and helpfulness of the application and peer review process, challenges sustaining efforts after the end of the planning grant, coordination of federal resources, collaboration with local organizations and associated challenges, the extent to which local coordination reduced duplication, overlap and fragmentation, if at all, the mechanisms organizations use to track the results of their efforts, the results of the grants, and the helpfulness of Education’s guidance and resources for the program. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pretesting draft instruments and using a web-based administration system. Specifically, during survey development, we pretested draft instruments with five grantees that received planning and/or implementation grants. In the pretests, we were generally interested in the clarity, precision, and objectivity of the questions, as well as the flow and layout of the survey. For example, we wanted to ensure definitions used in the surveys were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section was appropriate. We revised the final survey based on pretest results. We took another step to minimize nonsampling errors by using a web-based survey. 
This allowed respondents to enter their responses directly into an electronic instrument and created a record for each respondent in a data file—eliminating the need for manual data entry and its associated errors. To further minimize errors, programs used to analyze the survey data were independently verified to ensure the accuracy of this work. Because not all respondents answered every question, we reported the number of grantees responding to particular questions throughout the report. In addition, we conducted site visits to 11 Promise grantees. We selected sites based on several factors, such as the type of grant awarded, the location of the grantees, and whether the Promise Neighborhood was urban or rural. The site visits provided opportunities to collect more in-depth information on the program and highlighted different types of grantees and approaches. We visited six implementation grantees in Boston, Massachusetts; Berea, Kentucky; Chula Vista, California; Indianola, Mississippi; Los Angeles, California; and Washington, DC. We visited five planning grantees in Campo, California; Lawrence, Massachusetts; Los Angeles, California; Nashville, Tennessee; and Worcester, Massachusetts. These included one tribal and two rural grantees. We also interviewed Education officials and technical assistance providers, as well as other experts who have worked with Promise grant applicants, such as the Promise Neighborhoods Institute. To determine how well the structure of Education's Promise Neighborhoods grant program aligns with program goals and how Education selected grantees, using Education's goals for the Promise program as criteria, we reviewed Education reports on place-based strategies; relevant Federal Register notices; and application guidance and training materials, including both the guidance available to applicants and to the peer reviewers regarding the technical evaluation/grant selection process.
We reviewed agency information on applicants for implementation grants in the fiscal year 2011 and 2012 cycles, the only years in which Education awarded implementation grants. For both cycles, we analyzed application materials and technical evaluation documentation for a subset of implementation grant applicants—those that received planning grants in prior years. We compared the scores in each component of the application for both successful and unsuccessful applicants to identify criteria or factors that accounted for significant variation in total scores. We conducted a limited review of selected peer reviewer comments to gain more insight into the reasons for any differences. We interviewed Education officials about the process that the department used for the selection of both planning and implementation grantees. To determine how the Promise Neighborhoods program coordinated with other Education programs and with other federal agencies, including those involved in the White House Neighborhood Revitalization Initiative (NRI), we reviewed documentation of the NRI's efforts and interviewed agency officials participating in the NRI. We also interviewed cognizant officials at other agencies participating in the NRI. To assess Education's approach to evaluating the success of the grants, we reviewed grant monitoring reports, Education's performance measures, and related guidance for data collection for this program and interviewed agency officials responsible for evaluation, including technical assistance providers. To determine the extent to which Promise grants enabled collaboration at the local level, we used GAO's prior work on implementing interagency collaborative mechanisms as criteria. We compared the Promise grants' collaboration mechanisms to certain successful approaches used by select interagency groups and reviewed implementation grantees' application materials.
Our 11 site visits provided additional insight into how selected grantees align services supported by multiple funding streams and delivered by multiple providers. Using survey responses from all planning grantees, we determined whether they have continued their efforts, whether they have implemented any of their strategies, and what, if any, interim results they have identified, regardless of whether they received implementation grants. Site visits provided illustrative examples of interim benefits and challenges. We conducted this performance audit from February 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [Table of grantee organizations and locations, partially lost in extraction; recoverable entries include New York, NY (Harlem); Athens Clarke County Family Connection Inc.; Clay, Jackson, and Owsley Counties, KY; Boys & Girls Club of the Northern Cheyenne Nation, Northern Cheyenne Reservation, MT; Community Day Care Center of Lawrence, Inc.; United Way of Central Massachusetts, Inc.; and New York, NY (Brooklyn and Queens).] In addition to the contact named above, Elizabeth Sirois, Assistant Director; Jacques Arsenault; Aimee Elivert; and Lara Laufer made key contributions to this report. Also contributing to this report were James Bennett, Deborah Bland, Mallory Barg Bulman, Holly Dye, Alex Galuten, Jean McSween, Matthew Saradjian, and Sarah Veale. | Education's Promise Neighborhoods program is a competitive grant program with goals to improve educational and developmental outcomes for children in distressed neighborhoods.
The grants fund community-based organizations' efforts to work with local partners to develop and evaluate a cradle-to-career continuum of services in a designated geographic footprint. Because it is one of several federal programs using this model, GAO was asked to review the program. This report examines: (1) the extent to which Education's strategy for awarding grants aligns with program goals; (2) how Education aligns Promise Neighborhoods efforts with other related programs; (3) how Education evaluates grantees' efforts; and (4) the extent to which grants have enabled collaboration at the local level, and the results of such collaboration. GAO reviewed Federal Register notices, applications, and guidance; surveyed all 48 grantees on the application process, coordination of resources, collaboration, and early results; visited 11 grantees selected based on geography and grant type; and interviewed Education officials and technical assistance providers. The Department of Education (Education) used a two-phase strategy for awarding Promise Neighborhoods (Promise) grants, and aligned grant activities with program goals. Education awarded 1-year planning grants to organizations with the potential to effectively align services for students in their respective neighborhoods. Planning grants were generally intended to enhance the grantees' capacity to plan a continuum of services. Through a separate competition, Education awarded 5-year implementation grants to organizations that demonstrated they were most ready to implement their plans. However, Education did not communicate clearly to grantees about its expectations for the planning grants and the likelihood of receiving implementation grants. As a result, some grantees experienced challenges sustaining momentum in the absence or delay of implementation grant funding.
The Promise program coordinates with related federal efforts primarily through a White House initiative that brings together neighborhood grant programs at five federal agencies. The Promise program's efforts are focused on ensuring that grants are mutually reinforcing by aligning goals, developing common performance measures, and sharing technical assistance resources. While Promise grantees incorporate a wide range of federal programs in their local strategies, Education coordinates with a more limited number of federal programs. Officials told us that they do this to avoid spreading program resources too thin. Further, Education did not develop an inventory of the federal programs that share Promise goals, a practice that could assist grantees; help officials make decisions about interagency coordination; and identify potential fragmentation, overlap, and duplication. Education requires Promise grantees to develop information systems and collect extensive data, but it has not developed plans to evaluate the program. Specifically, implementation grantees must collect data on individuals they serve, services they provide, and related outcomes and report annually on multiple indicators. However, Education stated it must conduct a systematic examination of the reliability and validity of the data to determine whether it will be able to use the data for an evaluation. Absent an evaluation, Education cannot determine the viability and effectiveness of the Promise program's approach. The Promise grant enabled grantees and their partners to collaborate in ways that align with leading practices GAO previously identified for enhancing collaboration among interagency groups, including establishing common outcomes, leveraging resources, and tracking performance. For example, Education required grantees to work with partners to develop common goals and a plan to use existing and new resources to meet identified needs in target areas.
Grantees were also required to leverage resources by committing funding from multiple sources. Implementation grantees were required to collect and use data to track performance. Some planning grantees used a leading collaborative strategy not required by Education that produced early benefits. For example, several grantees and partners told us they completed easily achievable projects during the planning year to help build momentum and trust. Grantees told us that collaboration yielded benefits, including deeper relationships with partners, such as schools, as well as the ability to attract additional funding. However, grantees also said they faced some challenges collaborating with partners, particularly in overcoming privacy concerns related to data collection. GAO recommends that Education communicate grant expectations more clearly, identify federal resources that can contribute to the program's goals, and develop a strategy for evaluation. In commenting on a draft of this report, Education outlined the steps it will take to respond to recommendations. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Board is responsible for overseeing a complex broadcast environment that spans five broadcast entities with varying missions, 84 discrete language services, changing consumer habits and preferences, and a technology environment that presents constant new challenges and opportunities. The Board currently oversees a staff of almost 3,200 and a worldwide network of leased communication satellite services and 38 owned or leased transmission stations. The Board oversees the broadcast of almost 2,000 hours of original programming (not rebroadcasts) each week. The Board estimates that the Voice of America's broadcasts alone reach a worldwide listening audience of 91 million people each week. Radio Free Europe/Radio Liberty broadcasts reach an estimated 16 million listeners each week. Radio Free Asia and Radio/TV Marti have difficulty obtaining reliable audience estimates due to the closed nature of target broadcast countries. These audiences are reached through a variety of means, including direct radio and television broadcasts from U.S.-owned or -leased transmitters, local rebroadcasters (known as affiliates) who carry U.S. international broadcasting content on their stations, and the Internet. The U.S. international broadcasting budget for fiscal year 2000 is about $420 million. The Board, the Voice of America, Radio/TV Marti, and Worldnet are federal entities and receive funding directly from Congress. Radio Free Europe/Radio Liberty and Radio Free Asia operate as independent, nonprofit corporations and are funded by grants from the Board. The Board's current organizational structure is illustrated in figure 1. While this figure shows a reporting relationship from the Voice of America, Worldnet, and Radio/TV Marti to the Director of the International Broadcasting Bureau, these broadcast entities have a direct reporting relationship with the Board regarding all programming issues.
The Acting Director of the International Broadcasting Bureau told us that his organization provides consolidated technical and support services to client broadcasters; however, programming decisions are handled by the respective broadcast entities and the Board. As noted earlier, the central focus of U.S. international broadcasting is on reaching audiences that are underserved by their local media. According to Freedom House’s year 2000 survey of press freedom, most countries rated as “not free” are located in Africa, the Middle East, and Asia (see app. I for a reproduction of Freedom House’s current world map of press freedom). While all five broadcast entities share the core mandate of reaching underserved populations, a key distinction among the entities is that the Voice of America and Worldnet broadcast to a global audience, while Radio Free Europe/Radio Liberty, Radio Free Asia, and Radio/TV Marti serve as “surrogate” broadcasters in their respective regions and substitute for local media in countries where a free and open press is deemed not to exist or has not been fully established. In addition to adhering to a global mission for U.S. broadcasting, each broadcast entity has its own broadcast mission. As described in public documents and by Board officials, the Voice of America provides accurate and credible international, regional, and country-specific news to a global audience, with a particular emphasis on supplying information relating to the United States. However, in Africa where the Voice of America serves a surrogate role, greater emphasis is given to news of local interest. The Voice of America meets its mandate to broadcast the U.S. position on various foreign policy matters by including the views of U.S. officials in its regular programs and through daily editorials that are identified as representing the views of the U.S. government. It also broadcasts a number of public affairs programs which focus on discussions of U.S. 
policy by policymakers and experts. Radio Free Europe/Radio Liberty focuses on providing regional and local news to emerging democracies in Central Europe and the former Soviet Union, and to Iran and Iraq. Radio Free Asia and Radio/TV Marti concentrate on providing news of local interest to audiences in Asia and Cuba, respectively, who generally do not have access to a free and open press. Figure 2 shows the regional coverage of the Voice of America, Radio Free Europe/Radio Liberty, Radio Free Asia, and Radio/TV Marti. Shortwave broadcasting has dominated the history of U.S. international broadcasting for over 50 years. Over the past decade, however, the range of media options available to many listeners around the world has expanded to include local AM/FM programming, television, and the Internet. This diversified media environment has greatly increased the complexity of the strategic decisions the Board faces. These transmission modes and certain issues surrounding their use are described in appendix II. The Board responded to the $75-million funding cap placed on Radio Free Europe/Radio Liberty and related cost-cutting expectations by relocating to virtually rent-free quarters in Prague, Czech Republic; reducing staff; and forming local broadcast partnerships in two cases. The Board achieved further savings by consolidating Radio Free Europe/Radio Liberty and Voice of America broadcast schedules, consolidating Radio Free Europe/Radio Liberty and Voice of America transmission operations under the International Broadcasting Bureau, and implementing digital sound recording and editing technology in Prague. One key cost-cutting action that has not been implemented was the original expectation in the 1994 act that Radio Free Europe/Radio Liberty would receive private rather than public funding after the end of calendar year 1999. 
Based on the results of analysis that the Board conducted, the Board concluded that privatization was not a feasible option due to the lack of tangible business assets (such as transmission facilities or broadcast frequencies) of interest to commercial buyers. The Foreign Relations Authorization Act for Fiscal Years 2000 and 2001 (sec. 503 of App. G of P. L. 106-113) amended the original expectation regarding privatization to require that broadcast operations to a given country should be phased out when there is clear evidence that democratic rule has been established and that balanced, accurate, and comprehensive news and information is widely available. In line with congressional expectations, Radio Free Europe/Radio Liberty reduced its budget from $208 million in fiscal year 1994 to approximately $71 million in fiscal year 1996 by taking the following actions. In 1995, Radio Free Europe/Radio Liberty relocated its headquarters from Munich, Germany, to quarters in Prague, Czech Republic, provided by the Czech Republic as a public service. In conjunction with the move to Prague, Radio Free Europe/Radio Liberty reduced its total staffing by almost 1,200 individuals, or almost 75 percent of its workforce. Radio Free Europe/Radio Liberty and Voice of America officials coordinated their respective broadcast schedules and eliminated over 300 weekly broadcast hours in overlapping and duplicative programming. The Polish and Czech language services were reconstituted as separate, nonprofit corporations. Radio Free Europe/Radio Liberty transmission facilities were turned over to the International Broadcasting Bureau in 1995 in connection with the consolidation of engineering and technical operations under the Bureau. Prior to this consolidation, Radio Free Europe/Radio Liberty controlled a network of six transmission stations located in Germany, Portugal, and Spain. The two stations in Portugal were closed as a result of the consolidation. 
International Broadcasting Bureau officials estimate that the consolidation of engineering and technical operations initially resulted in more than $32 million in annual recurring savings and that current annual savings have grown to more than $50 million. A digital sound recording and editing platform was installed in connection with the move to Prague. This technology, under appropriate circumstances, allows one individual to produce a radio broadcast that previously would have required the services of an announcer, a producer, and a sound technician using the analog recording and editing technology that had been used in Munich. One Radio Free Europe/Radio Liberty official noted that approximately 75 percent of the station’s output lent itself to the streamlined mode of production enabled by digital technology. The Board completed its first annual language service review in January 2000 and plans to use the results of this review to strategically reallocate approximately $4.5 million in program funds across broadcast regions on the basis of priority and impact ratings assigned to each language service. The priority ratings reflected a number of factors, including the language service’s contribution to furthering U.S. strategic interests, audience size, and other variables. The language service’s impact was based on the mass audience size and the number of “elite” (that is, government and other influential decisionmakers) listeners reached. The Board plans to use next year’s language service review to examine the issue of duplication in program content among the Voice of America and surrogate language services. We also found overlap in overseas news-gathering resources among broadcast entities. This is a potentially important duplication issue that the Board has not reviewed. We raised a similar issue in our 1996 report reviewing potential budget reduction options. 
Board officials explained that a comprehensive language service review was not completed until January 2000 because the Board lacked adequate audience research on the number and type of listeners for such a review. Starting in 1997, the Board increased the budget devoted to audience research and in 1999 tasked the International Broadcasting Bureau’s Office of Strategic Planning with developing a comprehensive set of program and performance data to be used as the basis for the comprehensive review of language services. Board members assigned priority and impact (audience) ratings to each language service as a basis for reallocating resources. The evaluation criteria used for the priority ratings included potential audience size, U.S. strategic interests, press freedom, economic freedom, and political freedom. For example, a service’s contributions to furthering U.S. strategic interests was scored on the basis of inputs received from a variety of sources, including the White House, the National Security Council, the State Department, and applicable congressional Committees. For the impact ratings, the Board focused on audience size and composition as key performance measures. The Board also evaluated other data, such as the language service’s program quality, operating budget, broadcast hours, signal strength, and affiliate stations, to identify approaches for increasing listening rates in selected countries. Audience data were based on research conducted by the International Broadcasting Bureau’s Office of Audience Research and the InterMedia Survey Institute, which provided data on both audience size and elite listening rates. Appendix III contains further details on the criteria and related processes used to support the Board’s language service review process. The Board used the language service evaluation criteria to develop priority/impact ratings for 69 of the Board’s 84 language services. 
As shown in table 1, the Board used these ratings to develop a matrix that identified higher priority/higher impact services, higher priority/lower impact services, lower priority/higher impact services, and lower priority/lower impact services. The Board intends to use this information to strategically reallocate approximately $4.5 million in language service funds from emerging democracies in Central and Eastern Europe to several African countries and selected countries in other regions. The review resulted in 21 language service reduction recommendations, 15 recommended service enhancements, and a call for the further review of seven low-performing and five duplicate language services. Language services rated as higher priority were concentrated in countries with a large potential listening audience; low press, political, and economic freedom; and high strategic interest to the United States. Higher and lower impact scores were determined on the basis of percentage weekly listening rates for both mass and elite audiences. Services with listening rates below 5 percent for mass listeners and 15 percent for elite listeners were rated as having lower impact. Services that ranked above this threshold were rated as having higher impact. According to the Board, next year’s language review will include an assessment of overlapping language services among the five U.S. broadcast entities. Board officials told us that the strategy of duplicating language services has been designed to allow U.S. international broadcast entities to achieve their respective missions by offering different program content in the same language. Nonetheless, the Board said in a written evaluation of this year’s language service review that it is essential that the Board revisit the respective roles of the broadcasting services in light of evolving foreign policy and geopolitical and budget realities in the new century. 
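The two-threshold impact rule described above can be sketched as a simple classifier. This is an illustrative reconstruction, not code from the report: the function and constant names are mine, and the assumption that a service must meet or exceed both thresholds to rate as higher impact is mine as well, since the report does not spell out how boundary cases were handled.

```python
# Illustrative sketch of the Board's impact-rating rule as described above.
# Assumption (not stated in the report): a service must meet or exceed BOTH
# thresholds to be rated "higher" impact; otherwise it is rated "lower".

MASS_THRESHOLD = 5.0    # percent weekly listening rate, mass audience
ELITE_THRESHOLD = 15.0  # percent weekly listening rate, elite audience

def impact_rating(mass_rate: float, elite_rate: float) -> str:
    """Classify a language service as 'higher' or 'lower' impact."""
    if mass_rate >= MASS_THRESHOLD and elite_rate >= ELITE_THRESHOLD:
        return "higher"
    return "lower"

def matrix_cell(priority: str, mass_rate: float, elite_rate: float) -> str:
    """Combine a priority rating with the impact rating to name one of the
    four cells in the Board's priority/impact matrix."""
    return f"{priority} priority/{impact_rating(mass_rate, elite_rate)} impact"
```

Combined with the separately assigned priority rating, this rule reproduces the four-cell matrix (higher/lower priority crossed with higher/lower impact) the Board used to sort the 69 rated language services.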
The Board intends to use the language service review next year to look at program duplication between the Voice of America and surrogate language services, such as broadcasts to countries of the former Soviet Union, and to determine whether this overlap effectively serves U.S. interests on a country-by-country basis. Figure 3 shows those languages where both the Voice of America and a surrogate service broadcast in the same language. While the Board intends to review the issue of program content duplication next year, it does not expect to explicitly review the duplicate news resources maintained by broadcast entities overseas. The Voice of America, Radio Free Europe/Radio Liberty, and Radio Free Asia each maintain field offices and freelance journalists in their respective regions. Voice of America resources overlap with those deployed by Radio Free Europe/Radio Liberty and Radio Free Asia in their respective regions. For example, Radio Free Europe/Radio Liberty has a combined total of about 700 bureau staff and freelance journalists covering its broadcast area. The Voice of America has a combined staff of about 150 in the same region. In addition to the issue of overlap, broadcasting officials noted that news-gathering resources are not shared across broadcast entities. For example, one Voice of America language Division Director noted that news feeds from Voice of America overseas bureaus are not shared with Radio Free Asia and that Radio Free Asia news feeds are not shared with the Voice of America. The Division Director said “They do their work, and we do ours.” A Radio/TV Marti employee noted that neither the Voice of America nor Radio Free Europe/Radio Liberty share relevant news items of interest to Radio/TV Marti listeners. As an example, news from Russia is not directly available to the station, because Radio/TV Marti does not have overseas bureaus or freelance journalists. 
We reported on a similar issue in our 1996 report on budget reduction options for the U.S. Information Agency. In our report, we noted areas where elimination of existing overlap could yield management improvements and cost reductions. One area we highlighted was the potential for further consolidation of overseas news bureaus and other broadcasting assets. Our report cited the overlap in news-gathering resources deployed by the Voice of America and Radio Free Europe/Radio Liberty in Moscow as an example of a potential area for consolidation. Table 2 provides details on the number of bureaus, bureau staff, and freelance journalists deployed by each broadcast entity along with related fiscal year 2000 funding data. The need to manage overseas resources effectively is heightened by the fact that several broadcasting officials commented they do not have adequate news-gathering resources and that product quality has suffered as a result. For example, a Radio/TV Marti official told us that a lack of resources has prevented the station from sending journalists to domestic locations outside of the Miami area and overseas to report on news stories of interest to the Cuban people. A Radio Free Asia language Director noted that her service has only $500 a month to pay for reports from freelance journalists that cost $50 to $100 per report. She noted that this level of funding is not sufficient to produce original and up-to-date programming. Radio Free Asia officials have since told us that freelance budgets have been adjusted to fully fund all language services’ projected requirements for the remainder of fiscal year 2000. The Board has not yet developed a strategic planning and performance management system that provides a high level of assurance that resources are being used in the most effective manner possible. The key components of this system are Results Act planning, the annual language service review, and the program reviews of individual language services. 
The Board’s fiscal year 2001 Results Act performance plan is deficient because of missing or imprecise performance goals and indicators and a lack of key implementation strategies and related resource requirements. In addition, the lack of a standard program review approach and audience goals for individual language services limits the usefulness of the program reviews that the broadcast entities conduct to assess the content and presentation of their individual language service programs. As a newly independent federal entity, the Board has full responsibility for implementing its strategic planning and performance management system. A key component of such a system is Results Act planning. Under the Results Act, executive agencies are required to prepare 5-year strategic plans that set the general direction for their efforts. Agencies then develop annual performance plans that establish the connections between long-term strategic goals outlined in the strategic plan and the day-to-day activities of program managers and staff. Finally, the act requires that each agency produce an annual performance report on the extent to which it is meeting its annual performance goals and the actions needed to achieve or modify those goals that have not been met. Board officials pointed out that they have made considerable progress in implementing a strategic planning and performance management system and that they submitted a performance report in March 2000 as required. The Board’s fiscal year 2001 performance plan includes two strategic objectives that are not supported by accompanying performance goals and indicators. First, the performance plan lists encouraging the development of a free and independent media as a strategic objective. This reflects one of the objectives embodied in the 1994 Broadcasting Act that calls for the training and technical support for independent indigenous media through government agencies or private U.S. entities. 
The second strategic objective lacking supporting performance goals and indicators relates to the Board’s need for comprehensive and up-to-date audience research data. Again, the 1994 Broadcasting Act stipulates that U.S. international broadcasting efforts should be based on reliable audience research data. The Board recognizes that its performance plan has some limitations and has formed a Results Act indicators review team to address them. Table 3 provides an overview of the strategic objectives, performance goals, and performance indicators in the Board’s fiscal year 2001 performance plan, which was included with the agency’s fiscal year 2001 budget submission to Congress. This performance plan supports the Board’s stated mission of using U.S. international broadcasting to encourage the development and growth of democratic values in support of the diplomatic, humanitarian, and economic goals of the United States. The array of programs and accurate information that U.S. international broadcasting strives to provide foreign audiences worldwide is intended to help people understand democratic ideals, civil governance, free market economics and trade, and respect for the rule of law. Within this context, while a number of performance goals and indicators are used to assess the extent to which U.S. international broadcasting is achieving its mission, Board officials told us that audience size is the most important performance goal and indicator. Audience size provides an indicator of how many people around the world are tuning in to information intended to help them understand democratic ideals, civil governance, and the rule of law.
However, the Board uses only global audience size estimates by broadcast entity to set performance goals and track performance. For example, the fiscal year 2001 performance plan lists the Voice of America’s current listening audience at 91 million and sets a performance target of 92 million for fiscal year 2001. A January 1999 memo provided instructions on preparing submissions to the fiscal year 2001 performance plan; it invited units to suggest potential program enhancements and provide a memo describing the impact these enhancements would have on such performance measures as audience size. The instructions also called for a description of how the actual impact of such program enhancements would be measured. However, this guidance did not discuss the systematic establishment of specific audience targets by language service or the method for monitoring such targets to provide meaningful performance data (such as the number of language services achieving target performance levels each year) for inclusion in the Board’s annual performance plan. The Board acknowledges in its performance plan that changes in estimated global listening audiences from year to year do not necessarily indicate a “genuine” increase in listeners because better survey techniques may simply have identified additional listeners not included in earlier estimates. In addition, the International Broadcasting Bureau’s Office of Research reported that the Voice of America’s global estimate should be taken only as a rough indication of the number of listeners, with a potentially wide margin of error. 
The report further noted that “most of Voice of America’s audience is heavily concentrated in a small number of countries; as a result, exclusive reliance on the global estimate as a measure of effectiveness may obscure important changes that occur from year to year at the regional or country level.” Radio Free Asia officials have pointed out that Radio Free Asia is relatively new and has no effective means to advertise its services in the closed target countries. Further, these officials said that it is very difficult to obtain reliable audience size estimates. Thus, the officials believed that audience size would not be an adequate measure of Radio Free Asia’s performance at this time. A second problem with this key performance indicator is that the performance plan makes no distinction between mass and elite (that is, government and other influential decisionmakers) audiences and only references mass listening audiences in its strategic objectives and performance goals. The distinction between these two basic audiences has major implications for the Board with regard to setting strategic objectives and performance goals, establishing and refining broadcast strategies, and allocating resources in the most effective manner possible. A senior Voice of America official told us that the agency’s biggest challenge is analyzing its programming language by language and determining what matches the needs of the various audiences the Voice of America is trying to reach. The target audience can also change over time. For example, the Voice of America’s audience in Africa has typically been made up of an elite group of 40- to 50-year-old males in political or civil service leadership positions. Now, one official told us, the African language services need to attract more of a mass audience in order to reach future leaders.
According to the Results Act, agency performance plans should describe the operational processes, skills, technology, and other resources an agency will need to achieve its performance goals. The plans should describe both the agency’s existing strategies and resources and any significant changes to them. We found that the Board’s fiscal year 2001 plan does not discuss such strategies or resource requirements for its ongoing initiatives. For example, the plan does not include a discussion of the Board’s Internet deployment plan. This is a concern, given the complex issues the Board faces as it attempts to integrate the Internet with the more traditional radio and television distribution efforts of five discrete broadcast entities in an era of rapid political and technological changes and shifting consumer demands and preferences. The lack of a discussion of the role and significance of the International Broadcasting Bureau’s deployment of digital production technology for the Voice of America is another concern. Under the title of the “Digital Broadcasting Program,” the digital production technology effort is being overseen by the Board. This $57-million effort to upgrade the Voice of America’s operations from an analog mode to a digital one will allow, in certain cases, a single staff member to perform the work previously assigned to an announcer, a producer, and a sound technician. Radio Free Europe/Radio Liberty and Radio Free Asia have already implemented digital production systems, and Radio/TV Marti expects to have its digital project completed by December 2001. However, according to a senior International Broadcasting Bureau official, the Digital Broadcasting Program, which was initiated in 1995, was supposed to be finished within a 3- to 4-year time period predicated on the project’s receiving funding at the planned levels. Actual funding has been extended over a longer period of time, and a definitive end-point for the project remains to be established. 
The Board’s performance plan does not highlight the importance of this project to the Voice of America’s effectiveness, the specific strategies being followed to ensure successful implementation, the impact budget shortfalls will have on its completion, and the projected cost savings (in terms of long-term staffing needs, for example) to be derived from full implementation of the project. The usefulness of annual program reviews of individual language services is hampered by (1) a lack of consistency in how program quality scores—a key component of the program review process—are developed across broadcast entities and (2) the lack of audience size and composition targets, which would help focus language service planning efforts. The International Broadcasting Bureau conducts program reviews for the Voice of America and Radio/TV Marti, while Radio Free Europe/Radio Liberty and Radio Free Asia conduct their own reviews. Program reviews evaluate a number of factors, including audience size, signal strength, affiliates management, and program content and presentation. The latter factor is referred to as “program quality.” Program reviews culminate in a written report with recommendations for improving operations in one or more of the previously listed areas. Board officials acknowledge that there is variability in how program reviews are conducted across broadcast entities. Specifically, they noted that a consistent approach to evaluating program quality remains to be established. Program quality refers to content and presentation issues such as program balance and objectivity, program pacing, use of musical bridges between program segments, and the quality of the announcer’s voice. One key methodological difference that exists today is that some broadcast entities use external experts and in-country listening panels in assessing program quality, and others do not.
For example, the International Broadcasting Bureau relies on internal personnel to develop program quality assessments. Voice of America language program directors generally noted that these assessments were not particularly rigorous and would benefit from input from outside experts, such as journalists and academic specialists. In contrast, Radio Free Europe/Radio Liberty does utilize external experts and in-country listening panels in its program quality review process. Board officials noted that, funding permitting, they eventually intend to move all program reviews toward a uniform process and methodology that incorporates the views of external experts and in-country listening panels in assessing program quality. Finally, we noted that program reviews center on discussions of program operations and a general desire to improve language service performance without the benefit of focusing on specific performance targets such as audience size and composition. Board officials noted that performance targets for individual language services could be established at the Results Act and annual language service review levels, and these targets could form the focal point for program reviews. Focused program reviews could, in turn, influence and modify the next iteration of performance targets established at the Results Act and annual language service review levels. The Board has taken actions to fulfill the mandates and expectations contained in the U.S. International Broadcasting Act of 1994. It has implemented the steps necessary to reduce Radio Free Europe/Radio Liberty’s budget to below the $75 million ceiling established by Congress. The Board established a language service review process that is designed to realign budget resources strategically on an annual basis.
Finally, the Board has developed a strategic planning and performance management system that consists of Results Act planning, the annual language service review, and the program reviews of individual language services. This system is intended to help ensure that U.S. international broadcasting resources are used in the most effective manner possible. Despite the Board’s overall progress and its continuing efforts to further refine its strategic planning and performance management system, the broadcast entities could benefit from the closer integration of international broadcast missions and strategic objectives and more clearly defined performance goals and indicators as called for by the Results Act. For example, the Board’s global audience goal is less useful as a key indicator of broadcast effectiveness than summary data on the success of language services in achieving individual audience size and composition targets. Further, the performance plan lacks an implementation strategy and related resource requirements for the Board’s key initiatives. Addressing these strategic planning issues could help ensure that resources are managed more effectively with more clearly defined results. The Board’s current plans for its next language service review do not include a plan to analyze the deployment of field news-gathering resources among the broadcast entities. Such an analysis could potentially identify areas of unnecessary overlap, which would allow the Board to redirect resources to areas needing more news coverage. A lack of adequate news coverage ultimately diminishes the quality of U.S. broadcast efforts and potentially affects the size and nature of the listening audience, a key performance indicator. Finally, annual program reviews conducted for individual language services do not employ a consistent approach to assessing program quality and do not focus on specific audience size and composition targets.
A standard review approach, which incorporates both outside experts and in-country listening panels, would increase the overall value of program quality assessments and allow meaningful comparisons among individual language services and among broadcast entities. Improved program quality measures would also benefit the annual language service review process and the Board’s Results Act planning, each of which incorporates program quality as a performance measure. Establishing specific audience targets for each language service would enable program review teams to develop action plans listing the specific steps and resources needed to achieve any audience share and composition goals established at the Results Act level. These action plans and related resource discussions could be incorporated in both Results Act planning and the annual language service review process, which is the Board’s primary vehicle for assessing the distribution of broadcasting resources. To strengthen the Board’s management oversight and provide greater assurance that international broadcasting funds are being effectively expended, we recommend that the Chairman of the Broadcasting Board of Governors
- include in the Board’s performance plan a clearer indication of how its broadcast missions, strategic objectives, performance goals, and performance indicators relate to each other, and establish audience and other goals, as appropriate, at the individual language service level;
- include implementation strategies and related resource requirements in its performance plan;
- analyze overseas news-gathering networks across its broadcast entities to determine if resources could be more effectively deployed; and
- institute a standardized approach to conducting program quality assessments and require that program reviews produce a detailed action plan that responds to specific audience size and composition targets established at the Results Act and annual language service review levels.
The Broadcasting Board of Governors provided written comments on a draft of this report. The Board stated that the report is fair and accurate, and the Board concurred with our recommendations. The Board said that some actions currently underway will serve to partially implement the recommendations and that it will implement additional actions in the future. For example, the Board has launched a review of its existing performance plan that will include drawing clearer linkages between broadcast missions, strategic objectives, and performance goals. The Board also intends to establish audience and other goals, as appropriate, at the individual language service level. The Board agreed with our recommendation that it analyze its overseas news-gathering network next year. However, the Board said that an analysis of its overseas news-gathering resources would be more useful as a stand-alone analysis rather than as part of the annual language service review as we recommended. We recognize the need for such flexibility and modified our recommendation accordingly. The Board expressed concern that the information we provided on U.S. international broadcasting and the British Broadcasting Corporation was unfair and presented a misleading picture of two very different organizations (see app. IV). The Board noted that U.S. international broadcasting has been charged with a far more complex mission, which includes conveying the views of the U.S. government and functioning as a surrogate broadcaster in areas where gaining access to target audiences is difficult. The Board added that caution was needed when comparing total operating costs, listening audience size, the number of language services, and the implied cost per listener, due to the significant differences between the two organizations. To address the Board’s concerns, we modified the introduction to appendix IV. We also adjusted U.S. 
budget data to remove television production and transmission costs which are not included in the British Broadcasting Corporation budget figure. However, we believe that providing information on the world’s top two international broadcasters is useful and serves to illustrate both the similarities and differences in how these two organizations conduct their business. Further, discussions with U.S. broadcast staff and our review of internal documents indicate that the Board considers the British Broadcasting Corporation to be a key competitor and closely tracks its activities in selected broadcast markets around the world. The comments provided by the Board are reprinted in appendix VI. The Board also provided technical comments in attachment B, which we have incorporated in the report as appropriate. We are sending copies of this report to the Honorable Marc B. Nathanson, Chairman, Broadcasting Board of Governors; and to interested congressional committees. Copies will also be made available to others upon request. If you or your staff have any questions concerning this report, please call me at (202) 512-4268. Other GAO contacts and staff acknowledgments are listed in appendix VII. The core mandate of U.S. international broadcasting is to reach audiences in countries where a fair and open press does not exist or has not been fully established. The Board’s primary basis for assessing the status of press freedom around the world is the annual survey of press freedom conducted by an organization called Freedom House, which is partly supported by U.S. grant funds. As shown in figure 4, Freedom House’s most recent survey shows that the most severely underserved audiences are concentrated in Africa, the Middle East, and Asia. U.S. international broadcasting operates within the context of a complex and evolving transmission environment. 
Each of the key broadcast methods the United States uses is described in the following section and in more detail in an August 1999 International Broadcasting Bureau study. Shortwave broadcasts - This transmission mode utilizes the reflective properties of the ionosphere to carry an analog radio signal to listeners typically up to 4,200 miles away or even farther under some circumstances. In many situations, the quality of shortwave transmissions can be comparable to that of AM/FM broadcasts. However, over long distances, where shortwave is so valuable, transmission quality can vary considerably. Despite its drawbacks, shortwave remains the primary transmission medium (and sometimes the only option) for international broadcasters seeking to reach target populations where press freedom is completely or largely restricted. One problem with shortwave broadcasts is that countries, such as China, Vietnam, and Cuba, attempt to block U.S. broadcast signals. To counteract these jamming activities, international broadcasters use very powerful transmitters, operating from multiple locations, on multiple frequencies. This increases the costs of shortwave broadcasting relative to most other transmission mediums, but it still remains an economical medium for reaching large areas. Shortwave broadcasting is currently carried on a network of 22 U.S.-owned and 16 leased transmission facilities. However, U.S.-owned transmitters in the Philippines and Thailand currently cannot be used for Radio Free Asia broadcasts because of host government prohibitions. The future of shortwave radio could be significantly affected by the development of digital shortwave, which offers several advantages over the current analog form of shortwave transmission. Digital shortwave is capable of producing AM-quality audio, which does not degrade over long distances. Digital shortwave receivers (which are not yet commercially available) can be programmed to lock on to a station name as opposed to a specific broadcasting frequency.
This development could have major implications for countries such as China and Cuba, which actively jam current shortwave transmissions. Under a digital system, it may be possible to scramble frequencies to frustrate jammers while not affecting listeners, whose preset stations would be available at the touch of a button. However, the International Broadcasting Bureau noted that it is unclear whether these potential anti-jamming features will be available in mass-market products. Transnational AM (medium-wave) broadcasts - From U.S.-owned or -leased transmission facilities, AM broadcasts can reach target audiences up to 900 miles away or even farther under some circumstances. One advantage of AM broadcasting is the enormous number of listeners with AM/FM receivers. As is the case with shortwave transmissions, one drawback of medium-wave transmissions is that they can be jammed by hostile governments. AM/FM Radio Affiliates - Radio affiliates are local AM/FM or television stations that rebroadcast U.S.-produced program content. Some affiliates are paid to carry this content, and others are not. FM signals provide the highest sound quality, but they are limited to a line-of-sight broadcast range typically of about 25 to 75 miles depending on the height of the transmitting antenna and other local conditions. The Board currently has more than 1,300 radio affiliates, with the largest concentration of affiliates in Central Europe, the former Soviet Union, and Latin America. For example, the Board has 516 radio affiliates in Latin America. In contrast, it has only 54 radio affiliates in Africa. Paid leases and licenses are another form of local rebroadcasting. A lease is an agreement with a local station or network for a specific allocation of airtime for a specific cost. The Board currently has 24 AM/FM leases worldwide. Licenses are granted by a national authority to broadcasters for the use of a dedicated AM or FM frequency to broadcast locally using their own equipment.
However, in most cases, national regulations require that the license be issued in the name of a local entity. According to a 1999 International Broadcasting Bureau document on transmission strategies, the Voice of America has traditionally placed its emphasis on building its network of AM/FM affiliates, while other international broadcasters, such as Radio Free Europe/Radio Liberty, the British Broadcasting Corporation, and Radio France International, have invested substantially in local leases and licenses. Television via Local Affiliates - Television programming is broadcast through local cable and land-based broadcast affiliates. According to Board officials, television has become the predominant media choice for viewers in several key areas, including Russia and China. The Board reports that it has almost 500 television cable/terrestrial affiliates concentrated in the former Soviet Union and Latin America. Television content for U.S. international broadcasting has traditionally been provided by the Worldnet Television and Film Service, which is the official television broadcast arm of the U.S. government. According to the Board, it has transferred the public diplomacy portion of Worldnet to the State Department under the Foreign Affairs Reform and Restructuring Act of 1998 (P.L. 105-277). The Board has submitted a reprogramming request to Congress to transfer Worldnet’s remaining resources (totaling $20.5 million in fiscal year 2000 funding) to Voice of America TV. Satellite Radio and Television - This medium relies on direct satellite transmission to relatively expensive analog or digital receivers or private satellite dishes. While not appropriate for reaching mass audiences, this option does offer the opportunity to reach “elite” listeners who are the key decisionmakers U.S. international broadcasters would like to reach in target countries.
Internet Webcasting and E-mail Delivery - The Internet offers the first truly interactive medium for delivering text, audio, and video streams to users’ personal computers. The use of e-mail also provides broadcasters with the ability to send text messages to subscriber lists with the contents of U.S. audio broadcasts. U.S. broadcast entities have also established a presence on the Internet, and the Voice of America, Radio Free Europe/Radio Liberty, and Radio Free Asia have initiated e-mail subscriber programs. Again, the Internet is currently not poised to deliver information to mass audiences around the globe; however, it represents another key delivery option for reaching elite listeners. While Internet webcasting is not susceptible to jamming, it is susceptible to blocking at entry portals by hostile governments. Table 4 provides a brief overview of the criteria and related processes used to support the Board’s language service review process. Audience listening rate is the key variable used to assess the impact a language service is having. However, the Board used additional impact criteria, such as program quality and transmission effectiveness, to help identify potential solutions to low audience listening rates. The British Broadcasting Corporation’s (BBC) World Service has adopted a model for international broadcasting that differs in several key respects from the approach U.S. broadcasters use. Three of the most significant differences between the Board and the BBC are mission, organizational structure, and future operations. The central mission of U.S. international broadcasting is geared toward reaching audiences that are underserved by available media voices. As a result, the United States does not broadcast to fully democratic nations such as Canada, the United Kingdom, or Germany. In contrast, the BBC’s mission is much broader and includes reaching listeners in markets around the world, including media-rich countries such as the United States.
The organization of U.S. international broadcasting has evolved along the lines of “official” and “surrogate” broadcast entities. This division has led to the creation of five separate broadcast entities with varying missions, budget resources, and operating styles. The BBC has only one World Service, which, according to BBC officials, varies broadcast content on a country-by-country basis in response to market research and audience demands. Finally, U.S. international broadcasting and certain component operations are either subject to sunset provisions or are required to phase out over a period of time. In contrast, the World Service is not subject to sunset. In the case of U.S. international broadcasting, an original sunset provision in the 1994 International Broadcasting Act generally required the Board to cease funding Radio Free Asia after September 30, 1998. The act was amended in 1999 to provide for explicit sunset of funding for Radio Free Asia after September 30, 2009. Congress has also specified conditions under which Radio Free Europe/Radio Liberty broadcasting should be phased out in a particular country. Radio/TV Marti is required to be terminated upon transmittal by the President to appropriate congressional committees of a determination that a democratically elected government is in power in Cuba. Even the Voice of America’s goal to serve audiences deprived of full access to an open and free press suggests a diminishing role over time as the long-sought goal of global press freedom is eventually achieved. Information on U.S. international broadcasting and BBC World Service operations is provided in table 5. The table is designed to provide summary data on U.S. and BBC broadcast operations, and the table notes should be read carefully to understand the data on total budget costs, listening audience, and number of language services.
This numerical data is not sufficient to draw conclusions about the relative efficiency and effectiveness of the two organizations. Additional factors such as the relative costs of reaching different target audiences, the different mixes of broadcast technology, and the nature of operating overheads would need to be considered to arrive at valid conclusions. The Chairman of the House Committee on the Budget requested that we examine whether the U.S. Broadcasting Board of Governors (1) responded to the specific mandates regarding Radio Free Europe/Radio Liberty’s operations, (2) implemented an annual language service review process, and (3) instituted a strategic planning and performance management system. He also asked us to provide information on U.S. international broadcasting and British Broadcasting Corporation operations. To assess whether the Board has responded to the specific cost-cutting mandates and expectations established in the 1994 International Broadcasting Act, we examined the Board’s transmission consolidation efforts, the history of consolidation activities in connection with Radio Free Europe/Radio Liberty’s move from Munich to Prague, the Board’s efforts to privatize Radio Free Europe/Radio Liberty’s operations by fiscal year 1999, and the Board’s efforts to adopt digital production technology for each broadcast entity. We met with Board, International Broadcasting Bureau, Voice of America, Worldnet Television and Film Service, Radio Free Europe/Radio Liberty, and Radio Free Asia senior officials in Washington, D.C., to discuss these issues and review applicable documentation.
This documentation included the Board’s report on Congress’s earlier mandate to privatize Radio Free Europe/Radio Liberty’s operations and additional documentation on the Board’s transmission consolidation efforts, the relocation from Munich to Prague, and the Digital Broadcasting Program being implemented by the International Broadcasting Bureau on behalf of the Voice of America. We also met with Radio/TV Marti officials in Miami, Florida, and Radio Free Europe/Radio Liberty officials in Prague to review their respective streamlining and cost-cutting activities. To assess whether the Board implemented a language service review process, we met with International Broadcasting Bureau planning staff in Washington, D.C., to determine the process, evaluation criteria, and outcome of this year’s language service review. We reviewed the Board’s February 2000 reports on this process and the linkage between these documents and the Board’s reallocation decisions. To assess whether the Board has instituted a strategic planning and performance management system, we obtained and reviewed copies of all relevant Results Act planning documents, including the Board’s 5-year strategic plan dated December 1997; annual performance plans for fiscal years 1999, 2000, and 2001; and the Board’s March 2000 annual performance report. We compared the Board’s fiscal year 2001 performance plan against GAO’s guide for evaluating agency annual performance plans. We also met with Board staff to discuss the Board’s latest efforts to update its Results Act planning documents. In order to prepare a comparison of Board and BBC World Service operations, we interviewed BBC officials in London and collected and analyzed relevant documents, including World Service strategic plans, marketing and audience research information, and data relating to the BBC’s performance management system.
We conducted our review from December 1999 to August 2000 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Broadcasting Board of Governors’ letter dated September 13, 2000. 1. We agree that U.S. international broadcasters and the BBC World Service have different roles. However, the fact that U.S. international broadcasters have multiple and more complex missions does not obviate the value of examining the BBC’s operations relative to U.S. international broadcasting. The Board acknowledges the value of tracking and evaluating the activities of competitors by maintaining an on-line database to capture this information. The database includes country-by-country audience data that shows how U.S. international broadcasters are doing relative to other major international broadcasters, with a particular focus on the BBC. The Board’s database also summarizes this information into seven regional groups to help identify broader performance trends. For example, with regard to the 35 countries in Africa targeted by the Voice of America and the BBC, the Board’s database shows that the BBC has a higher audience share than the Voice of America in 25 countries, the Voice of America has a higher audience share in 8 countries, and the two organizations are tied for listeners in two countries. 2. The number of language services shown in table 5 in appendix IV is footnoted to indicate that 24 of the U.S. language services are duplicate language services run by the Voice of America and surrogate broadcasters. We revised the applicable table note to point out that many of the Board’s language services have been mandated by Congress. 3. We revised the table to show a total funding figure of $367 million for U.S. international broadcasting. This figure was calculated by deducting $53 million in television production and transmission costs from a total U.S. funding figure of $420 million for fiscal year 2000. 
We made this change to reflect that the BBC funding figure does not include television costs. 4. We agree that simply dividing the number of total listeners by total broadcast costs does not provide meaningful comparative information in the absence of a more detailed understanding of why costs differ between the two organizations. Explanatory factors might include the relative costs of reaching different target audiences, different mixes of broadcast technology, and the relative efficiency and effectiveness of each organization. We revised the introduction to table 5 to emphasize that our table is designed to provide summary data on U.S. and BBC broadcast operations. We also incorporated the Board’s concern that readers should avoid making a cost-per-listener comparison between U.S. and BBC international broadcasting. In addition to those named above, Michael ten Kate, Wyley Neal, Ernie Jackson, and Rona Mendelsohn made key contributions to this report. 
| Pursuant to a congressional request, GAO examined whether the Broadcasting Board of Governors: (1) responded to the specific limitations and cost-cutting expectations regarding Radio Free Europe/Radio Liberty's operations; (2) implemented an annual language service review process; and (3) instituted a strategic planning and performance management system. GAO noted that: (1) the Board met its mandates under the 1994 U.S. International Broadcasting Act to reduce Radio Free Europe/Radio Liberty's annual budget by lowering its budget from $208 million in fiscal year (FY) 1994 to approximately $71 million in FY 1996; (2) it did this by taking several actions including relocating its operation from Munich, Germany, to Prague, Czech Republic, and significantly reducing staff; (3) additional savings were made by: (a) eliminating several hundred hours of broadcast overlap; (b) eliminating and modifying a limited number of language services; (c) consolidating transmission operations under the International Broadcasting Bureau; and (d) deploying digital sound recording and editing technology, which has increased Radio Free Europe/Radio Liberty's staff efficiency and effectiveness; (4) the Board completed a comprehensive language service review in January 2000 that sought to systematically evaluate U.S. 
international broadcast priorities and program impact; (5) the Board intends to use this information to strategically reallocate approximately $4.5 million in language service funds from emerging democracies in Central and Eastern Europe to several African countries and selected countries in other regions; (6) according to the Board, it intends to continue to use the annual language service review process to strategically analyze broadcast priorities, program funding, and resource allocations; (7) the Board has not yet established an effective strategic planning and performance management system that incorporates Government Performance and Results Act planning, the annual language service review process, and the program reviews of individual language services conducted by the International Broadcasting Bureau and the surrogate broadcasters; (8) the Board's FY 2001 performance plan is deficient because of missing or imprecise performance goals or indicators and a lack of key implementation strategies and related resource requirements that detail the key issues facing the Board; (9) the Board has not established a standard program review approach, which would help ensure that consistent and meaningful measures of program quality are developed across broadcast entities; and (10) it has also not incorporated specific audience size and composition targets into the program review process, which would help ensure that program reviews culminate in a written report that identifies the specific actions needed to achieve agreed-upon performance goals. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Under MD-715, federal agencies are to identify and eliminate barriers that impede free and open competition in their workplaces. EEOC defines a barrier as an agency policy, principle, or practice that limits or tends to limit employment opportunities for members of a particular gender, race, ethnic background, or disability status. According to EEOC’s instructions, many employment barriers are built into the organizational and operational structures of an agency and are embedded in the day-to-day procedures and practices of the agency. In its oversight role under MD-715, EEOC provides instructions to agencies on how to complete their barrier analyses and offers other informal assistance. Based on agency submissions of MD-715 reports, EEOC provides assessments of agency progress in its Annual Report on the Federal Workforce, feedback letters addressed to individual agencies, and the EEO Program Compliance Assessment (EPCA). At DHS, the Officer for CRCL, through the Deputy Officer for EEO Programs, is responsible for processing complaints of discrimination; establishing and maintaining EEO programs; fulfilling reporting requirements as required by law, regulation, or executive order; and evaluating the effectiveness of EEO programs throughout DHS. Consistent with these responsibilities, the Officer for CRCL, through the Deputy Officer for EEO Programs, is responsible for preparing and submitting DHS’s annual MD-715 report. In addition, the Deputy Officer for EEO Programs and the Under Secretary for Management (USM) are also responsible for diversity management at DHS. Under the USM, the Chief Human Capital Officer is responsible for diversity management and has assigned these duties to the Executive Director of Human Resources Management and Services. 
According to CRCL’s Deputy Officer for EEO Programs, CRCL and OCHCO collaborate on a number of EEO and diversity activities through participation in work groups, involvement in major projects, policy and report review, and participation on the Diversity Council and its Diversity Policy and Planning Subcouncil. Figure 1 shows the officials who are primarily responsible for EEO and diversity management at DHS. The DHS Diversity Council is composed of the members of the DHS Management Council, which is chaired by the USM and includes component representatives—generally a component’s equivalent of a chief management officer or chief of staff. The Diversity Council charter gives the DHS Management Council the responsibility of meeting as the Diversity Council at least bimonthly. CRCL’s Deputy Officer for EEO Programs and OCHCO’s Executive Director of Human Resources Management and Services chair the Diversity Council’s Policy and Planning Subcouncil, which includes at least one member from each DHS component represented on the Management Council. The Diversity Policy and Planning Subcouncil meets every 2 weeks and is to identify, research, and analyze workforce diversity issues, challenges, and opportunities and report and make recommendations to the Diversity Council on DHS diversity strategies and priorities. According to EEOC’s MD-715 instructions, barrier identification is a two-part process. First, using a variety of sources, an agency is to identify triggers. Second, the agency is to investigate and pinpoint actual barriers and their causes. According to EEOC officials, this should be an ongoing process. Figure 2 shows the barrier identification steps under MD-715. Our review of DHS’s MD-715 reports for each of the fiscal years 2004 through 2007 showed that in 2004 DHS identified 14 triggers, which were present in each subsequent year. 
According to DHS’s MD-715 reports, DHS identified 13 of the 14 triggers based on its analysis of participation rates contained in the workforce data tables. The remaining trigger—incomplete accessibility studies on all facilities—was identified based on responses to the self-assessment checklist contained in the MD-715 form and comments made at disability awareness training for managers. In addition, in 2008, DHS identified one new trigger based on a joint statement from EEOC, the Department of Justice, and the Department of Labor related to heightened incidents of harassment, discrimination, and violence in the workplace against individuals who are or are perceived to be Arab, Muslim, Middle Eastern, South Asian, or Sikh. Table 1 shows a summary of DHS-identified triggers and the sources of information from which they were identified. To identify triggers, agencies are to prepare and analyze workforce data tables comparing participation rates to designated benchmarks (such as representation in the civilian labor force (CLF) or the agency’s total workforce) by gender, race, ethnicity, or disability status in various subsets of their workforces (such as by grade level or major occupations and among new hires, separations, promotions, and career development programs). According to EEOC’s MD-715 instructions, participation rates below a designated benchmark for a particular group are triggers. Along with the workforce data tables, according to EEOC’s MD-715 instructions, agencies are to regularly consult additional sources of information to identify areas where barriers may operate to exclude certain groups. 
Other sources of information include, but are not limited to, EEO complaints and EEO-related grievances filed; findings of discrimination on EEO complaints; surveys of employees on workplace environment issues; exit interview results; surveys of human resource program staff, managers, EEO program staff, counselors, investigators, and selective placement coordinators; input from agency employee and advocacy groups and union officials; available government reports (i.e., those of EEOC, GAO, OPM, the Merit Systems Protection Board, and the Department of Labor); and local and national news reports. EEOC officials said that these sources may reveal triggers that may not be present in the workforce data tables. Several of the above-listed sources provide direct employee input on employee perceptions of the effect of agency policies and procedures. For example, according to EEOC instructions, employee surveys may reveal information on experiences with, perceptions of, or difficulties with a practice or policy within the agency. Further, EEOC’s instructions state that reliance solely on workforce profiles and statistics will not meet the mandate of MD-715. When workforce data and other sources of information indicate that a barrier may exist, agencies are to conduct further inquiry to identify and examine the factors that caused the situation revealed by workforce data or other sources of information. 
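The benchmark comparison at the heart of trigger identification can be sketched as a simple check. This is a minimal illustration with hypothetical participation rates and benchmarks; actual MD-715 analyses rely on EEOC's workforce data tables and the additional sources described above.

```python
# Minimal sketch of MD-715 trigger identification: flag any group whose
# participation rate (%) falls below its designated benchmark (%), such as
# representation in the civilian labor force. All figures are hypothetical.

def find_triggers(participation, benchmarks):
    """Return {group: (rate, benchmark)} for groups below their benchmark."""
    return {
        group: (rate, benchmarks[group])
        for group, rate in participation.items()
        if rate < benchmarks[group]
    }

# Hypothetical participation rates in an agency subset vs. CLF benchmarks.
participation = {"Group A": 4.1, "Group B": 11.8, "Group C": 6.0}
benchmarks = {"Group A": 6.2, "Group B": 10.4, "Group C": 6.0}

triggers = find_triggers(participation, benchmarks)
print(triggers)  # only Group A falls below its benchmark, so it is flagged
```

A flagged group is only a trigger, not a barrier: as the directive's two-part process requires, each trigger would then prompt further inquiry into the policies or practices that might explain the shortfall.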
To identify triggers, CRCL stated that it regularly reviews complaint data it must submit annually to EEOC and data collected from reports CRCL is required to submit under various statutes, executive orders, and initiatives, including the Notification and Federal Employee Antidiscrimination and Retaliation Act, Federal Equal Employment Opportunity Recruitment Program, Executive Order 13171 on Hispanic employment in the federal government, Disabled Veterans Affirmative Action Program, White House Initiative on Historically Black Colleges and Universities, and White House Initiative on Tribal Colleges and Universities. According to CRCL officials, in the past, CRCL has also relied upon the DHS online departmental newsletter, periodicals, and news media to identify triggers. We have previously reported that successful organizations empower and involve their employees to gain insights about operations from a frontline perspective, increase their understanding and acceptance of organizational goals and objectives, and improve motivation and morale. Obtaining the input of employees in identifying triggers would provide a frontline perspective on where potential barriers exist. Employee input can come from a number of sources including employee groups, exit interviews, and employee surveys. CRCL said that it does not consider input from employee groups in conducting its MD-715 analysis, but the Diversity Council’s Diversity Policy and Planning Subcouncil has recently begun to reach out to form partnerships with employee associations such as the National Association of African-Americans in the Department of Homeland Security. In addition, according to DHS’s 2008 MD-715 report, DHS does not currently have a departmentwide exit survey, but according to a senior OCHCO official, OCHCO plans to develop a prototype exit survey with the eventual goal of proposing its use throughout DHS. 
Although DHS does not have the structures in place to obtain employee input departmentwide from employee groups and exit surveys, DHS could use the FHCS and DHS’s internal employee survey to obtain employee input in identifying potential barriers. OPM administers the FHCS biennially in even-numbered years, and DHS administers its own internal survey in off years. Both surveys collect data on employees’ perceptions of workforce management, organizational accomplishments, agency goals, leadership, and communication. We have previously reported that disaggregating employee survey data in meaningful ways can help track organizational priorities. According to information from officials in OPM’s Division for Strategic Human Resources Policy, which administers and analyzes the FHCS, results by gender, national origin, and race are available at the agency level (i.e., DHS) on each agency’s secure site. DHS’s internal survey also collects demographic data on race, gender, and national origin of respondents. DHS could analyze responses from the FHCS and its internal employee survey by race, gender, and national origin to determine whether employees of these groups perceive a personnel policy or practice as a possible barrier. For example, one question on the 2008 FHCS asked whether supervisors or team leaders in the employee’s work unit support employee development. Fifty-eight percent of DHS respondents agreed and 21 percent disagreed with the statement. The 2007 DHS internal survey asked whether employees receive timely information about employee development programs. Thirty-nine percent of respondents provided a positive response; 35 percent provided a negative response. Although a CRCL staff member reviews the FHCS and DHS’s internal survey data as part of an OCHCO employee engagement working group, the staff member does not review DHS responses based on race, gender, and national origin. 
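The kind of disaggregated review described above can be illustrated with a short sketch. The response records and group labels below are hypothetical; this is not DHS's or OPM's actual data or tooling.

```python
# Sketch of disaggregating employee survey responses by demographic group:
# compute the percent agreeing with a survey item per group and overall.
# Records are hypothetical (group label, agreed-with-item flag) pairs.

from collections import defaultdict

def agreement_rates(responses):
    """responses: iterable of (group, agreed) pairs, agreed being True/False.
    Returns {group: percent agreeing, ..., 'overall': percent agreeing}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [agreed count, total count]
    for group, agreed in responses:
        counts[group][0] += int(agreed)
        counts[group][1] += 1
    rates = {g: round(100 * a / t, 1) for g, (a, t) in counts.items()}
    agreed_total = sum(a for a, _ in counts.values())
    total = sum(t for _, t in counts.values())
    rates["overall"] = round(100 * agreed_total / total, 1)
    return rates

# Hypothetical responses to one survey item, tagged by demographic group.
responses = [("Group A", True)] * 6 + [("Group A", False)] * 4 \
          + [("Group B", True)] * 3 + [("Group B", False)] * 7
print(agreement_rates(responses))
```

A group rate well below the overall rate on an item such as support for employee development would be a signal to examine the underlying policy or practice further, in the spirit of the trigger analysis described earlier.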
Responses based on demographic group could indicate whether a particular group perceives a lack of opportunity for employee development and suggest a need to further examine these areas to determine if barriers exist. Without employee input on DHS personnel policies and practices, DHS is missing opportunities to identify potential barriers. Regular employee input could help DHS to identify potential barriers and enhance its efforts to acquire, develop, motivate, and retain talent that reflects all segments of society and our nation’s diversity. In fiscal year 2007, DHS conducted its first departmentwide barrier analysis. This effort involved further analysis of the triggers initially identified in 2004 to determine if there were actual barriers and their causes. According to DHS’s 2007 MD-715 report, DHS limited its barrier analysis to an examination of policies and management practices and procedures that were in place during fiscal year 2004. Therefore, according to the report, policies, procedures, and practices that were established or used after fiscal year 2004 were outside the scope of this initial barrier analysis. Based on triggers DHS identified in 2004, DHS’s departmentwide barrier analysis identified the following four barriers: 1. Overreliance on the Internet to recruit applicants. 2. Overreliance on noncompetitive hiring authorities. 3. Adequacy of responses to Executive Order 13171, Hispanic Employment in the Federal Government; specifically, in several components there was no evidence of specific recruitment initiatives directed at Hispanics. 4. Nondiverse interview panels; specifically, interview panels that did not reflect the diversity of applicants. According to EEOC guidance, barrier elimination is vital to achieving the common goal of making the federal government a model employer. 
Once an agency identifies a likely factor (or combination of factors) adversely affecting the employment opportunities of a particular group, it must decide how to respond. Barrier elimination is the process by which an agency removes barriers to equal participation at all levels of its workforce. EEOC’s instructions provide that in MD-715 reports, agencies are to articulate objectives accompanied by specific action plans and planned activities that the agency will take to eliminate or modify barriers to EEO. Each action item must set a completion date and identify the one high-level agency official who is responsible for ensuring that the action item is completed on time. In addition, according to EEOC’s instructions, agencies are to continuously monitor and adjust their action plans to ensure the effectiveness of the plans themselves, both in goal and execution. This will serve to determine the effectiveness of the action plan and objectives. Figure 3 shows the barrier elimination and assessment steps under MD-715. Our analysis of DHS’s 2007 and 2008 MD-715 reports showed DHS articulated 12 different planned activities to address the identified barriers, including 1 new planned activity in 2008. Of the 12 planned activities, 2 relate to recruitment practices and strategies, specifically implementing a departmentwide recruitment strategy and targeting recruitment where there are low participation rates. Two other planned activities relate to the development of additional guidance, specifically on composition of interview panels and increasing educational opportunities. For each barrier, DHS identifies at least one planned activity—eight in total—related to collecting and analyzing additional data. According to DHS’s 2007 and 2008 MD-715 reports, DHS’s primary objective is to capture and analyze the additional data needed to link the barriers to the relevant triggers. 
In addition, of the 12 different planned activities, 5 involve collaboration between CRCL and OCHCO. One planned activity to address overreliance on the use of the Internet to recruit applicants calls for the development of an applicant flow tool to gather data on applicants, which would enable CRCL and OCHCO to analyze recruitment and hiring results. According to CRCL, its staff collaborate with OCHCO by evaluating and providing feedback on development of the tool. We have previously reported on the benefits of coordination and collaboration between the EEO and the human capital offices within agencies. During our previous work reviewing coordination of federal workplace EEO, an EEOC official commented that a review of barrier analyses in reports submitted under MD-715 showed that the highest-quality analyses had come from agencies where there was more coordination between staff of the human capital and EEO offices. Table 2 shows DHS’s planned activities, the identified barriers to which they relate, and the target completion dates. For the planned activities identified in its 2007 MD-715 report, DHS has modified the target date for all but one of them. As reported in the 2008 MD-715 report, the original target completion dates have been delayed anywhere from 12 to 21 months. In addition, since DHS filed its 2008 MD-715 report, DHS modified one of the target dates it had previously modified in its 2008 report. DHS has not completed any of the planned activities articulated in its 2007 and 2008 MD-715 reports. According to CRCL officials, although it has not completed any planned activities to address identified barriers, DHS has completed some planned activities identified in fiscal years 2007 and 2008 related to improving its EEO program. 
According to CRCL, DHS modified target dates primarily because of staffing shortages in both CRCL and OCHCO, including the retirement in 2008 of three senior CRCL officials (including the Deputy Officer for EEO Programs) and extended absences of the remaining two staff. In addition, according to senior officials, during fiscal year 2008, OCHCO experienced significant staff shortages and budgetary issues and lost its contract support. According to the Deputy Officer for EEO Programs, fiscal year 2009 is a rebuilding year. CRCL is adding five new positions, in addition to the existing three, to the CRCL unit responsible for preparing and submitting DHS’s MD-715 reports and implementing MD-715 planned activities. According to CRCL, once it is fully staffed, it will be able to expand services and operations. DHS has not established interim milestones for the completion of planned activities to address barriers. According to DHS officials, its MD-715 reports and Human Capital Strategic Plan represent the extent of DHS project plans and milestones for completing planned activities. These documents include only the anticipated outcome, not the essential activities needed to achieve the outcome. For example, in DHS’s 2007 and 2008 MD-715 reports, CRCL identifies an applicant flow tool to analyze recruitment and hiring results as a planned activity to address the barrier of overreliance on the use of the Internet to recruit applicants. DHS’s Human Capital Strategic Plan also identifies an applicant flow tool to analyze recruitment and hiring results as an action to achieve its departmentwide diversity goal. DHS does not articulate interim steps, with milestones, to achieve this outcome in either document. In order to help ensure that agency programs are effectively and efficiently implemented, it is important that agencies implement effective internal control activities. These activities help ensure that management directives are carried out. 
We have previously reported that it is essential to establish and track implementation goals and establish a timeline to pinpoint performance shortfalls and gaps and suggest midcourse corrections. Further, it is helpful to focus on critical phases and the essential activities that need to be completed by a given date. In addition, we recommended in our 2005 report on DHS’s management integration that DHS develop a management integration strategy. Such a strategy would include, among other things, clearly identifying the critical links that must occur among initiatives and setting implementation goals and a timeline to monitor the progress of these initiatives and to ensure that the necessary links occur. Identifying the critical phases of each planned activity necessary to achieve the intended outcome with interim milestones could help DHS ensure that its efforts are moving forward and manage any needed midcourse corrections, while minimizing modifications of target completion dates. According to CRCL and OCHCO officials, DHS is making progress on initiatives relating to (1) outreach and recruitment, (2) employee engagement, and (3) accountability. DHS’s Executive Director of Human Resources Management and Services told us that DHS is currently implementing a targeted recruitment strategy based on representation levels, which includes attending career fairs and entering into partnerships with organizations such as the Black Executive Exchange Program. CRCL officials also said that CRCL staff participate on the Corporate Recruitment Council, which meets each month and includes recruiters from each of the components. In addition, according to the Human Capital Strategic Plan diversity goal, DHS plans to establish a diversity advisory network of external stakeholders. 
According to CRCL, this effort includes specific outreach and partnership activities with such groups as the National Association for the Advancement of Colored People, Blacks in Government, League of United Latin American Citizens, Organization of Chinese Americans, Federal Asian Pacific American Council, Federally Employed Women, National Organization of Black Law Enforcement Executives, and Women in Federal Law Enforcement. DHS has also reported progress on employee engagement efforts. The Executive Director of Human Resources Management and Services also told us that DHS is in the planning stages of forming a department-level employee council comprising representatives from each diversity network at each of DHS’s components. In addition, according to DHS’s Human Capital Strategic Plan, DHS will incorporate questions into its internal employee survey specifically addressing leadership and diversity. The planned completion for this effort is the first quarter of fiscal year 2010. To address accountability, the Executive Director of Human Resources Management and Services said that DHS added a Diversity Advocate core competency as part of DHS’s fiscal year 2008 rating cycle for Senior Executive Service (SES) performance evaluations. Under DHS’s SES pay-for-performance appraisal system, ratings on this and other core competencies affect SES bonuses and pay increases. According to DHS’s Competency Illustrative Guidance, the standard provides for each senior executive to promote workforce diversity, provide fair and equitable recognition and equal opportunity, and promptly and appropriately address allegations of harassment or discrimination. According to the Executive Director of Human Resources Management and Services, OCHCO is currently developing plans, with the participation of CRCL, to implement a similar competency in 2010 for managers and supervisors, although the specific details on implementation are not yet finalized. 
According to MD-715 and its implementing guidance, a parent agency is to ensure that its components implement the provisions of MD-715 and make a good faith effort to identify and remove barriers to equality of opportunity in the workplace. Among other requirements, the parent agency is responsible for ensuring that its reporting components—those that are required to submit their own MD-715 reports—complete those reports. The parent agency is also responsible for integrating the components’ MD-715 reports into a departmentwide MD-715 report. According to officials from EEOC’s Office of Federal Operations, how a department oversees and manages this process is at the discretion of the department. In addition, to ensure management accountability, the agency, according to MD-715, should conduct regular internal audits, at least annually, to assess, among other issues, whether the agency has made a good faith effort to identify and remove barriers to equality of opportunity in the workplace. At DHS, according to the DHS Acting Officer for CRCL and the Deputy Officer for EEO Programs, component EEO directors do not report directly to CRCL but to their respective component heads. While this EEO organizational structure is similar to other cross-cutting lines of business (LOB), other cross-cutting LOBs have indirect reporting relationships, established through management directives, between the component LOB head and the DHS LOB chief for both daily work and annual evaluation. In contrast, the Deputy Officer for EEO Programs stated that he relies on a collaborative relationship with the EEO directors of the components to carry out his responsibilities. According to the Deputy Officer for EEO Programs, component EEO programs have supported department-wide initiatives when asked to join such efforts. 
On February 4, 2008, the Secretary of Homeland Security delegated authority to the Officer for CRCL to integrate and manage the DHS EEO Program, and currently a management directive interpreting the scope of this authority is awaiting approval. The Deputy Officer for EEO Programs stated that until the management directive is approved and implemented, the actual effect of the delegated authority is unclear. According to the Deputy Officer for EEO Programs, one means of collaboration with the components is through the EEO Council, which meets monthly and is chaired by the Deputy Officer for EEO Programs and is composed of the EEO directors from each component. The Deputy Officer for EEO Programs said that he uses the EEO Council to share best practices, enhance cooperation, and enforce accountability. To assist the components in their MD-715 analyses, according to CRCL officials, CRCL prepares the workforce data tables for each of the components required to submit its own MD-715 report. CRCL obtains the data from OCHCO and sends them to a contractor to create the workforce data tables. According to CRCL officials, DHS is pursuing an automated information management system that will allow CRCL to conduct in-house centralized workforce data analysis at the component level. To ensure timely submissions of component MD-715 reports, DHS’s CRCL sets internal deadlines by which reporting components are to submit their final MD-715 reports. CRCL instructs the components to follow EEOC guidance in completing their reports. CRCL also gives components the option of submitting a draft report for CRCL to review and provide technical guidance on before the final report is submitted. 
For those components that have submitted draft reports, CRCL has provided written comments that could be incorporated into the components’ final reports. A CRCL official told us that for fiscal year 2009 draft submissions, CRCL will continue this practice and encourage components to submit draft reports. Since DHS was formed in 2003, CRCL has completed a full EEO program evaluation of the Federal Law Enforcement Training Center (FLETC) in fiscal year 2007, which focused on FLETC’s EEO Office’s operations and activities. In fiscal year 2008, CRCL conducted the audit work on a full program evaluation of the Federal Emergency Management Agency’s Equal Rights Office’s operations and activities, but to date CRCL has not issued the audit report. In fiscal year 2006, CRCL conducted a partial evaluation of the Transportation Security Administration’s Office for Civil Rights, which focused on EEO counseling, complaint tracking, and alternative dispute resolution. In addition, in fiscal year 2009, a contractor issued a report describing the findings of a program review of the U.S. Coast Guard’s Office of Civil Rights. The Deputy Officer for EEO Programs told us that CRCL intends to conduct program reviews of the EEO programs at all operational components by 2010, although no schedule for completing these audits has been established. Input from employee groups reflects the perspective of the individuals directly affected by employment policies and procedures and could provide valuable insight into whether those policies and procedures may be barriers to EEO. Because CRCL does not regularly include employee input from available sources, such as the FHCS and DHS’s internal employee survey, it is missing opportunities to identify potential barriers to EEO. 
For barriers DHS has already identified, it is important for DHS to ensure the completion of planned activities through effective internal control activities, including the identification of critical schedules and milestones that need to be completed by a given date. Effective internal controls could help DHS ensure that its efforts are moving forward, manage any needed midcourse corrections, and minimize modifications of target completion dates. Additional staff, which DHS plans to add in 2009, could help DHS implement effective internal control activities. We recommend that the Secretary of Homeland Security take the following two actions: Direct the Officer for CRCL to develop a strategy to regularly include employee input from such sources as the FHCS and DHS’s internal survey in identifying potential barriers to EEO. Direct the Officer for CRCL and the CHCO to identify essential activities and establish interim milestones necessary for the completion of all planned activities to address identified barriers to EEO. We provided a draft of this report to the Secretary of Homeland Security for review and comment. In written comments, which are reprinted in appendix I, the Director of DHS’s Departmental GAO/OIG Liaison Office agreed with our recommendations. Regarding the first recommendation, the Director agreed that DHS should develop a departmentwide strategy to regularly include employee input from the FHCS and DHS internal employee survey to identify barriers, but noted that DHS component EEO programs already use employee survey data to develop annual action plans to address identified management issues. Regarding the second recommendation, the Director wrote that CRCL has already begun revising its plans to identify specific steps and interim milestones to accomplish the essential activities. DHS also provided technical comments, which we incorporated as appropriate. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security and other interested parties. The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Belva Martin, Acting Director; Amber Edwards; Karin Fangman; Melanie H. Papasian; Tamara F. Stenzel; and Greg Wilmoth made key contributions to this report. | Under MD-715, federal agencies are to identify and eliminate barriers that impede free and open competition in their workplaces. EEOC defines a barrier as an agency policy, principle, or practice that limits or tends to limit employment opportunities for members of a particular gender, race, ethnic background, or disability status. According to EEOC's instructions, many employment barriers are built into the organizational and operational structures of an agency and are embedded in the day-to-day procedures and practices of the agency. In its oversight role under MD-715, EEOC provides instructions to agencies on how to complete their barrier analyses and offers other informal assistance. Based on agency submissions of MD-715 reports, EEOC provides assessments of agency progress in its Annual Report on the Federal Workforce, feedback letters addressed to individual agencies, and the EEO Program Compliance Assessment (EPCA). 
DHS has generally relied on workforce data and has not regularly included employee input from available sources to identify "triggers," the term EEOC uses for indicators of potential barriers. GAO's analysis of DHS's MD-715 reports showed that DHS generally relied on workforce data to identify 13 of 15 triggers, such as promotion and separation rates. According to EEOC, in addition to workforce data, agencies are to regularly consult a variety of sources, such as exit interviews, employee groups, and employee surveys, to identify triggers. Involving employees helps to incorporate insights about operations from a frontline perspective in determining where potential barriers exist. DHS does not consider employee input from such sources as employee groups, exit interviews, and employee surveys in conducting its MD-715 analysis. Data from the governmentwide employee survey and DHS's internal employee survey are available, but DHS does not use these data to identify triggers. By not considering employee input on DHS personnel policies and practices, DHS is missing opportunities to identify potential barriers. Once a trigger is revealed, agencies are to investigate and pinpoint actual barriers and their causes. In 2007, through its departmentwide barrier analysis, DHS identified four barriers: (1) overreliance on the Internet to recruit applicants, (2) overreliance on noncompetitive hiring authorities, (3) lack of recruitment initiatives that were directed at Hispanics in several components, and (4) nondiverse interview panels. GAO's analysis of DHS's 2007 and 2008 MD-715 reports showed that DHS has articulated planned activities to address identified barriers, has modified nearly all of its original target completion dates by a range of 12 to 21 months, and has not completed any planned activities; although officials reported completing other activities in fiscal year 2007 and 2008 associated with its EEO program. 
Nearly half of the planned activities involve collaboration between the civil rights and human capital offices. DHS said that it modified the dates because of staffing shortages. In order to ensure that agency programs are effectively and efficiently implemented, it is important for agencies to implement internal control activities, such as establishing and tracking implementation goals with timelines. This allows agencies to pinpoint performance shortfalls and gaps and suggest midcourse corrections. DHS has not developed project plans with milestones beyond what is included in its MD-715 report and its Human Capital Strategic Plan. These documents include only the anticipated outcomes and target completion dates, not the essential activities needed to achieve the outcome. Identifying the critical phases of each planned activity necessary to achieve the intended outcome with interim milestones could help DHS ensure that its efforts are moving forward and manage any needed midcourse corrections, while minimizing modification of target dates. DHS uses a variety of means to oversee and support components, including providing written feedback on draft reports to components that are required to prepare their own MD-715 reports, conducting program audits, and convening a council of EEO directors from each of the components. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
In January 1999, INS issued its Interior Enforcement Strategy. This strategy focused resources on areas that would have the greatest impact on reducing the size and annual growth of the illegal resident population. Certain criteria were used to develop the priorities and activities of the strategy. The criteria focused on potential risks to U.S. communities and persons, costs, capacity to be effective, impact on communities, potential impact on reducing the size of the problem, and potential value for prevention and deterrence. The strategy established the following five areas in priority order:

1. Identify and remove criminal aliens and minimize recidivism. Under this strategic priority, INS was to identify and remove criminal aliens as they come out of the federal and state prison systems and those convicted of aggravated felonies currently in probation and parole status.

2. Deter, dismantle, and diminish smuggling or trafficking of aliens. This strategic priority called for INS to disrupt and dismantle the criminal infrastructure that encourages and benefits from illegal migration. INS efforts were to start in source and transit countries and continue inside the United States, focusing on smugglers, counterfeit document producers, transporters, and employers who exploit and benefit from illegal migration.

3. Respond to community reports and complaints about illegal immigration. In addition to responding to local law enforcement issues and needs, this strategic priority emphasizes working with local communities to identify and address problems that arise from the impact of illegal immigration, based on local threat assessments.

4. Minimize immigration benefit fraud and other document abuse. Under this strategic priority, INS was to aggressively investigate and prosecute benefit fraud and document abuse to promote integrity of the legal immigration system.

5. Block and remove employers' access to undocumented workers.
The strategy emphasizes denying employers access to unauthorized workers by checking their compliance with the employment verification requirements in the Immigration Reform and Control Act of 1986. Coupled with its efforts to control smuggling activity, this effort could have a multiplier effect on employers' access to illegal workers and on the overall number of illegal residents in the country. Figure 1 shows that INS had generally allocated its interior enforcement resources consistent with these priorities and that the workyears devoted to several of INS's interior enforcement efforts had either declined or stayed about the same between fiscal years 1998 and 2002. Our work has shown that INS faced numerous daunting enforcement issues, as will BICE as it assumes responsibility for the strategy. For example, the potential pool of removable criminal aliens and fugitives numbers in the hundreds of thousands. Many are incarcerated in hundreds of federal, state, and local facilities, while others are fugitives at large across the country. The number of individuals smuggled into the United States has increased dramatically, and alien smuggling has become more sophisticated, complex, organized, and flexible. Each year, thousands of aliens illegally seek immigration benefits, such as work authorization and change of status, and some of these aliens use these benefits to enable them to conduct criminal activities. Hundreds of thousands of aliens unauthorized to work in the United States have used fraudulent documents to circumvent the process designed to prevent employers from hiring them. In many instances, employers are complicit in this activity. Given the nature, scope, and magnitude of these activities, BICE needs to ensure that it is making the best use of its limited enforcement resources.
We found that fundamental management challenges exist in several of the interior enforcement programs and that addressing them will require the high-level attention and concerted efforts of BICE. In several reports we noted that INS did not believe it had sufficient staff to reach its program goals. Having data on how to effectively allocate staff and placing sufficient staff in the right locations is important if BICE is to achieve program goals. Staff shortages had contributed to INS's inability to promptly remove the majority of criminal aliens after they had completed their prison sentences. In 1997 INS did not place into removal proceedings 50 percent of potentially deportable criminal aliens who were released from federal prisons and from state prisons in 5 states. In 1999 we reported that, although the removal of criminal aliens was an INS management priority, INS faced the same staff shortage issues in 1997 as it had in 1995. In particular, agent attrition, which affected about one-third of the workforce, continued to impede INS's ability to meet its program goals. INS had told us that since 1997, the attrition rates of agents in this program had stabilized and that, in fiscal year 2003, the agents from this program would be reclassified as detention removal officers, which INS believed should further help reduce attrition. Even if INS had additional staff working in these program areas, it lacked good management information to determine how many staff it needed to meet its program goals and how best to allocate staff given the limited resources it did have. With respect to its program for removing incarcerated criminal aliens, INS told us that beginning in fiscal year 2002, the agency implemented our recommendation to use a workload analysis model. This was to help identify the resources the agency needed for its criminal alien program in order to achieve overall program goals and support its funding and staffing requests.
We have not reviewed this new model to ascertain its usefulness. With respect to alien smuggling, INS lacked field intelligence staff to collect and analyze information. Both the 1998 and 1999 INS Annual Performance Plan reports stated that the lack of intelligence personnel hampered the collection, reporting, and analysis of intelligence information. Although INS's Intelligence Program proposed that each district office have an intelligence unit, as of January 2000, 21 of INS's 33 districts did not have anyone assigned full-time to intelligence-related duties. Our ongoing work at land ports of entry shows this to be a continuing problem. The worksite enforcement program received a relatively small portion of INS's staffing and budget. In fiscal year 1998, INS completed a total of 6,500 worksite investigations, which equated to about 3 percent of the estimated number of employers of unauthorized aliens. Given limited enforcement resources, BICE needs to assure that it targets those industries where employment of illegal aliens poses the greatest potential risk to national security. The program now has several initiatives underway that target sensitive industries. INS had long-standing difficulty developing and fielding information systems to support its program operations, and effectively using information technology remained a challenge. For example, in 2002 we reported that benefit fraud investigations had been hampered by a lack of integrated information systems. The operations units at the four INS service centers that investigate benefit fraud operated different information systems that did not interface with each other or with the units that investigate benefit fraud at INS district offices. As a result, sharing information about benefit applicants was difficult. The INS staff who adjudicate applications did not have routine access to INS's National Automated Immigration Lookout System (NAILS).
Not having access to or not using NAILS essentially means that officers may be making decisions without access to or using significant information and that benefits may be granted to individuals not entitled to receive them. Thus, INS was not in the best position to review numerous applications and detect patterns, trends, and potential schemes for benefit fraud. Further, in 2002 we reported that another INS database, the Forensic Automated Case and Evidence Tracking System (FACETS), did not contain sufficient data for managers to know the exact size and status of the laboratory’s pending workload or how much time is spent on each forensic case by priority category. As a result, managers were not in the best position to make fact-based decisions about case priorities, staffing, and budgetary resource needs. With respect to the criminal alien program, in 1999 we reported that INS lacked a nationwide data system containing the universe of foreign-born inmates for tracking the hearing status of each inmate. In response to our recommendation, INS developed a nationwide automated tracking system for the Bureau of Prisons and deployed the system to all federal institutional hearing program sites. INS said that it was working with the Florida Department of Corrections to integrate that state’s system with INS’s automated tracking system. INS also said that it planned to begin working with New York, New Jersey, and Texas to integrate their systems and then work with California, Illinois, and Massachusetts. We have not examined these new systems to determine whether they were completed as planned or to ascertain their effectiveness. 
In 2000 we reported that INS lacked an agencywide automated case tracking and management system that prevented antismuggling program managers from being able to monitor their ongoing investigations, determine if other antismuggling units were investigating the same target, or know if previous investigations had been conducted on a particular target. In response to our recommendation, INS deployed an automated case tracking and management system for all of its criminal investigations, including alien smuggling investigations. Again, we have not examined the new system to ascertain its effectiveness. Our review of the various program components of the interior enforcement strategy found that working-level guidance was sometimes lacking or nonexistent. INS had not established guidance for opening benefit fraud investigations or for prioritizing investigative leads. Without such criteria, INS could not be ensured that the highest-priority cases were investigated and resources were used optimally. INS’s interior enforcement strategy did not define the criteria for opening investigations of employers suspected of criminal activities. In response to our recommendation, INS clarified the types of employer-related criminal activities that should be the focus of INS investigations. INS’s alien smuggling intelligence program had been impeded by a lack of understanding among field staff about how to report intelligence information. Staff were unclear about guidelines, procedures, and effective techniques for gathering, analyzing, and disseminating intelligence information. They said that training in this area was critically needed. INS had not established outcome-based performance measures that would have helped it assess the results of its interior enforcement strategy. 
For example, in 2000 we reported that while INS had met its numeric goals for the number of smuggling cases presented for prosecution in its antismuggling program, it had not yet developed outcome-based measures that would indicate progress toward the strategy's objective of identifying, deterring, disrupting, and dismantling alien smuggling. This was also the case for the INS intelligence program. INS had not developed outcome-based performance measures to gauge the success of the intelligence program to optimize the collection, analysis, and dissemination of intelligence information. In 2002 we reported that INS had not yet established outcome-based performance measures that would help it assess the results of its benefit fraud investigations. Additionally, INS had not established goals or measurement criteria for the service center operations units that conduct fraud investigation activities. INS's interior enforcement strategy did not clearly describe the specific measures INS would use to gauge its performance in worksite enforcement. For example, in 1999 we reported that the strategy stated that INS would evaluate its performance on the basis of such things as changes in the behavior or business practices of persons and organizations, but did not explain how it expected the behavior and practices to change. And although INS indicated that it would gauge effectiveness in the worksite area by measuring change in the wage scales of certain targeted industries, it left unclear a number of questions related to how it would do this. For example, INS did not specify how wage scales would be measured; what constituted a targeted industry; and how it would relate any changes found to its enforcement efforts or other immigration-related causes. The strategy stated that specific performance measurements would be developed in the annual performance plans required by the Government Performance and Results Act.
According to INS’s fiscal year 2003 budget submission, the events of September 11th required INS to reexamine strategies and approaches to ensure that INS efforts fully addressed threats to the United States by organizations engaging in national security crime. As a result, with regard to investigating employers who may be hiring undocumented workers, INS planned to target investigations of industries and businesses where there is a threat of harm to the public interest. However, INS had not set any performance measures for these types of worksite investigations. Since the attacks of September 11, 2001, and with the formation of DHS, a number of management challenges are evident. Some of the challenges discussed above carry over from the INS, such as the need for sound intelligence information, efficient use of resources and management of workloads, information systems that generate timely and reliable information, clear and current guidance, and appropriate performance measures. Other challenges are emerging. These include creating appropriate cooperation and collaboration mechanisms to assure effective program management, and reinforcing training and management controls to help assure compliance with DHS policies and procedures and the proper treatment of citizens and aliens. BICE will need to assure that appropriate cooperation and collaboration occurs between it and other DHS bureaus. For example, both the Border Patrol, now located in the Bureau of Customs and Border Protection (BCBP), and BICE’s immigration investigations program conducted alien smuggling investigations prior to the merger into DHS. These units operated through different chains of command with different reporting structures. As a result, INS’s antismuggling program lacked coordination, resulting in multiple antismuggling units overlapping in their jurisdictions, making inconsistent decisions about which cases to open, and functioning autonomously and without a single chain of command. 
It is unclear at this time how the antismuggling program will operate under DHS. Should both BCBP's Border Patrol and BICE's Investigations program continue to conduct alien smuggling investigations, Under Secretary Hutchinson will need to assure that coordination and collaboration exist to overcome previous program deficiencies. The Bureau of Citizenship and Immigration Services (BCIS) is responsible for administering services such as immigrant and nonimmigrant sponsorship, work authorization, naturalization of qualified applicants for U.S. citizenship, and asylum. Processing benefit applications is an important DHS function that should be done in a timely and consistent manner. Those who are eligible should receive benefits in a reasonable period of time. However, some try to obtain these benefits through fraud, and investigating fraud is the responsibility of BICE's Immigration Investigations program. INS's approach to addressing benefit fraud was fragmented and unfocused. INS's interior enforcement strategy did not address how the different INS components that conducted benefit fraud investigations were to coordinate their investigations. Also, INS had not established guidance to ensure that the highest-priority cases were investigated. Secretary Ridge will need to ensure the two bureaus work closely to assure timely adjudication for eligible applicants while identifying and investigating potential immigration benefit fraud cases. BICE's Intelligence Program is responsible for collecting, analyzing, and disseminating immigration-related intelligence. Immigration-related intelligence is needed by other DHS components such as Border Patrol agents and inspectors within BCBP and personnel within BCIS adjudicating immigration benefits. BICE will need to develop an intelligence program structure to ensure intelligence information is disseminated to the appropriate components within DHS's other bureaus.
Since the attacks of September 11, 2001, and with the formation of DHS, the linkages between immigration enforcement and national security have been brought to the fore. Immigration personnel have been tapped to perform many duties that previously were not part of their normal routine. For example, as part of a special registration program for visitors from selected foreign countries, immigration investigators have been fingerprinting, photographing, and interviewing aliens upon entry to the U.S. Immigration investigators have also participated in antiterrorism task forces across the country and helped interview thousands of nonimmigrant aliens to determine what knowledge they may have had about terrorists and terrorist activities. As part of its investigation of the attacks of September 11, the Justice Department detained aliens on immigration charges while investigating their potential connection with terrorism. An integrated Entry/Exit System, intended to enable the government to determine which aliens have entered and left the country, and which have overstayed their visas, is currently under development and will rely on BICE investigators to locate those who violate the terms of their entry visas. All of these efforts attest to the pivotal role of immigration interior enforcement in national security and the expanded roles of investigators in the fight against terrorism. It is important that BICE investigators receive training to perform these expanded duties and help assure that they effectively enforce immigration laws while recognizing the rights of citizens and aliens. It is also important that DHS reinforce its management controls to help assure compliance with DHS policies and procedures. | Department of Homeland Security's (DHS) Immigration Interior Enforcement Strategy's implementation is now the responsibility of the Bureau of Immigration and Customs Enforcement (BICE).
This strategy was originally created by the Immigration and Naturalization Service (INS). In the 1990s, INS developed a strategy to control illegal immigration across the U.S. border and a strategy to address enforcement priorities within the country's interior. In 1994, INS's Border Patrol issued a strategy to deter illegal entry. The strategy called for "prevention through deterrence"; that is, to raise the risk of apprehension for illegal aliens to a point where they would consider it futile to try to enter. The plan called for targeting resources in a phased approach, starting first with the areas of greatest illegal activity. In 1999, the INS issued its interior enforcement strategy designed to deter illegal immigration, prevent immigration-related crimes, and remove those illegally in the United States. Historically, Congress and INS have devoted, in terms of staff and budget, over five times more resources to border enforcement than to interior enforcement. INS's interior enforcement strategy was designed to address (1) the detention and removal of criminal aliens, (2) the dismantling and diminishing of alien smuggling operations, (3) community complaints about illegal immigration, (4) immigration benefit and document fraud, and (5) employers' access to undocumented workers. These components remain in the BICE strategy. INS faced numerous challenges in implementing the strategy. For example, INS lacked reliable data to determine staff needs, reliable information technology, clear and consistent guidelines and procedures for working-level staff, effective collaboration and coordination within INS and with other agencies, and appropriate performance measures to help assess program results. As BICE assumes responsibility for strategy implementation, it should consider how to address these challenges by improving resource allocation, information technology, program guidance, and performance measurement.
The creation of DHS has focused attention on other challenges to implementing the strategy. For example, BICE needs to coordinate and collaborate with the Bureau of Citizenship and Immigration Services (BCIS) for the timely and proper adjudication of benefit applications, and with the Bureau of Customs and Border Protection (BCBP) to assist in antismuggling investigations and sharing intelligence. In addition, BICE needs to assure that training and internal controls are sufficient to govern investigators' antiterrorism activities when dealing with citizens and aliens. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The electricity industry is based on four distinct functions: generation, transmission, distribution, and system operations. (See fig. 1.) Once electricity is generated—whether by burning fossil fuels; through nuclear fission; or by harnessing wind, solar, geothermal, or hydro energy—it is sent through high-voltage, high-capacity transmission lines to electricity distributors in local regions. Once there, electricity is transformed into a lower voltage and sent through local distribution wires for end-use by industrial plants, commercial businesses, and residential consumers. A unique feature of the electricity industry is that electricity is consumed at almost the very instant that it is produced. As electricity is produced, it leaves the generating plant and travels at the speed of light through transmission and distribution wires to the point of use, where it is immediately consumed. In addition, electricity cannot be easily or inexpensively stored and, as a result, must be produced in near-exact quantities to those being consumed. Because electric energy is generated and consumed almost instantaneously, the operation of an electric power system requires that a system operator balance the generation and consumption of power. The system operator monitors generation and consumption from a centralized location using computerized systems and sends minute-by-minute signals to generators reflecting changes in the demand for electricity. The generators then make the necessary changes in generation in order to maintain the transmission system safely and reliably. Absent such continuous balancing, electrical systems would be highly unreliable, with frequent and severe outages. Historically, the electric industry developed initially as a loosely connected structure of individual monopoly utility companies, each building power plants and transmission and distribution lines to serve the exclusive needs of all the consumers in their local areas. 
Such monopoly utility companies were typically owned by shareholders and were referred to as investor- owned utilities. In addition to these investor-owned utilities, several types of publicly owned utilities, including rural cooperatives, municipal authorities, state authorities, public power districts, and irrigation districts, also began to sell electricity. About one-third of these publicly owned utilities are owned collectively by their customers and generally operate as not-for-profit entities. Further, nine federally owned entities, including the Tennessee Valley Authority and the Bonneville Power Administration, also generate and sell electricity—primarily to cooperatives, municipalities, and other companies that resell it to retail consumers. Because the utilities operated as monopolies, wholesale and retail electricity pricing was regulated by the federal government and the states. The Public Utility Holding Company Act of 1935 (PUHCA) and the Federal Power Act of 1935 established the basic framework for electric utility regulation. PUHCA, which required federal regulation of these companies, was enacted to eliminate unfair practices by large holding companies that owned electricity and natural gas companies in several states. The Federal Power Act created the Federal Power Commission—a predecessor to FERC—and charged it with overseeing the rates, terms, and conditions of wholesale sales and transmission of electric energy in interstate commerce. FERC, established in 1977, approved interstate wholesale rates based on the utilities’ costs of production plus a fair rate of return on the utilities’ investment. States retained regulatory authority over retail sales of electricity, electricity generation, construction of transmission lines within their boundaries, and intrastate transmission and distribution. Generally, states set retail rates based on the utility’s cost of production plus a rate of return. 
The goal of federal efforts to restructure the electricity industry is to increase competition in order to provide benefits to consumers, such as lower prices and access to a wider range of services, while maintaining reliability. Over the past 13 years, the federal government has taken a series of steps to encourage this restructuring that generally fall into four key categories: (1) market structure, (2) supply, (3) demand, and (4) oversight. Regarding market structure, federal restructuring efforts have changed how electricity prices are determined, replacing cost-based regulated rates with market-based pricing in many wholesale electricity markets. In this regard, efforts undertaken predominantly by FERC have helped to encourage a shift from a market structure that is based on monopoly utilities providing electricity to all customers at regulated rates to one in which prices are determined largely by the interaction of supply and demand. In prior work, we reported that increasing competition required that at least three key steps be taken: increasing the number of buyers and sellers, providing adequate market information, and allowing potential market participants the freedom to enter and exit the industry. In terms of supply, federal restructuring efforts have generally focused on allowing new companies to sell electricity, requiring the owners of the transmission systems to allow these new companies to use their lines, and approving the creation of new entities to fairly administer these markets. The Energy Policy Act of 1992 made it easier for new companies, referred to as nonutilities, to enter the wholesale electricity market, which expanded the number of companies that can sell electricity. For example, we reported that from 1992 through 2002, FERC had authorized 850 companies to sell electricity at market-based rates. 
To allow these companies to buy and sell electricity, FERC also required that transmission owners under its jurisdiction, generally large utilities, allow all other entities to use their transmission lines under the same prices, terms, and conditions as those that they apply to themselves. To do this, FERC issued orders that required the regulated monopoly utilities—which had historically owned the power plants, transmission systems, and distribution lines—to separate their generation and transmission businesses. In addition, in response to concerns that some of these new companies received unfair access to transmission lines, which were mostly still owned and operated by the former utilities, FERC encouraged the utilities that it regulated to form new entities to impartially manage the regional network of transmission lines and provide equal access to all market participants, including nonutilities. These entities, including independent system operators (ISOs) and regional transmission organizations (RTOs), operate transmission systems covering significant parts of the country. One of these, the California ISO, currently oversees the electricity network spanning most of the state of California. Another important effort to facilitate the interaction of buyers and sellers was FERC’s approval of the creation of several wholesale markets for electricity. These markets created centralized venues for market participants to buy and sell electricity. Finally, FERC has undertaken efforts to improve the availability and accuracy of price information used by suppliers, such as daily market prices reported to news services, and has established guidelines for the conduct of sellers of wholesale electricity, requiring these entities to, among other things, accurately report prices and other data to news services. 
Federal efforts to affect demand at the wholesale level have focused on encouraging prices in wholesale markets to be established by the direct interaction between buyers and sellers in these markets. We previously reported that there were several centralized markets in which suppliers and buyers submitted bids to buy and sell electricity and that other types of market-based trading were also emerging, such as Internet-based trading systems. However, there have been few federal efforts to directly affect prices at the retail level, where most electricity that is consumed is purchased, because states, and not the federal government, have regulatory authority for overseeing retail electricity markets. As part of its efforts to have prices set by the direct interaction of supply and demand, FERC has approved proposals to incorporate so-called “demand-response” programs into the markets that it oversees. These programs, among other things, allow electricity buyers to see electricity prices as they change throughout the day and provide the choice to sell back electricity that they otherwise would have used. For example, we reported that FERC had approved one such program in New York State that allows consumers to offer to sell back specific amounts of electricity that they are willing to forgo at prices that they determine. More recently, the Energy Policy Act of 2005 requires FERC to study issues such as demand-response and report on its findings to the Congress. Finally, restructuring has fundamentally changed how electricity markets are overseen and regulated. Historically, FERC had ensured that prices in wholesale electricity markets were “just and reasonable” by approving rates that allowed for the recovery of justifiable costs and providing for a regulated rate of return, or profit. 
To ensure that prices are just and reasonable in today’s restructured electricity markets, FERC has shifted its regulatory role to approving rules and market designs, proactively monitoring electricity market performance to ensure that markets are working rather than waiting for problems to develop before acting, and enforcing market rules. As part of its decision to approve the creation of market designs that include ISOs and RTOs, FERC approved the creation of market monitoring units within these entities. These market monitors are designed to routinely collect information on the activities in these markets including prices; perform up-to-the-minute market monitoring activities, such as examining whether prices appear to be the result of fair competition or market manipulation; and can impose penalties, such as fines, when they identify that rules have been violated. More recently, the Energy Policy Act of 2005 granted FERC authority to impose greater civil penalties on companies that are found to have manipulated the market. Federal restructuring efforts, combined with efforts undertaken by states, have created a patchwork of electricity markets, broadened electricity supplies, disconnected wholesale and retail markets, and shifted how the electricity industry is overseen. Taken together, these developments have produced some positive and some negative outcomes for consumers. In terms of market structure, we previously reported that the combined effects of the federal efforts and those of some states have created a patchwork of wholesale and retail electricity markets. In the wholesale markets, there is a combination of restructured and traditional markets because FERC’s regulatory authority is limited. As a result, some entities— including municipal utilities and cooperatively owned utilities—have not been required to make the changes FERC has required others to make. 
As shown in figure 2, collectively the areas not generally subject to FERC jurisdiction span a significant portion of the country. In addition, even where FERC has clear jurisdiction, it has historically approved a variety of different rules that govern how each of the transmission networks is controlled and what types of wholesale markets may exist. In the retail electricity markets, state utility commissions or local entities historically have controlled how prices were set and have approved power plants, transmission lines, and other capital investments. Because each state performed these functions slightly differently, these rules vary. In addition, many states also have shifted the retail markets that they oversee toward competition. As we reported in 2002, 24 states and the District of Columbia had enacted legislation or issued regulations intended to open their retail markets to competition. As of 2004, 17 states had actually opened their retail markets to competition, according to the Energy Information Administration. One of these states, California, opened its retail markets to competition but has taken steps to limit the extent of competition. In terms of supply, efforts to restructure the electricity industry by the federal government and some states have broadened electricity markets overall—shifting the focus from state and/or local supply to multistate or regional supply. In particular, efforts at wholesale restructuring have led to a significant change in the way electricity is supplied in those markets. The introduction of ISOs and RTOs in many areas has provided open access to transmission lines, allowing more market participants to compete and sell electricity across wide geographic regions and multiple states. 
In addition, in some parts of the country, overall supply has grown as a result of the large increase in new generating capacity that has been built by nonutility companies, while other regions have witnessed smaller increases in supply. For example, we reported that, by 2002, Texas had added substantial amounts of generating capacity—more than double the forecasted amount needed through 2004. In contrast, in California only about 25 percent of the forecasted need had been built over the same period, and the region witnessed a historic market disruption costing consumers billions of dollars. Similarly, the opening of retail markets has also widened the scope of electricity markets by allowing new and different entities to sell electricity, which works to further broaden markets because these retail sellers must either build or buy a power plant or rely on wholesale markets. Finally, FERC has improved the transparency of wholesale markets, a key requirement of competitive markets, by increasing the availability and accuracy of price and other market information. In terms of demand, while federal efforts have encouraged price setting by the interaction of supply and demand, this approach has not been widely adopted in retail markets. Even though FERC and other electricity experts have determined that it is important for demand to be responsive to prices and other factors for competitive markets to operate efficiently, as we reported in 2004, the use of these programs remains limited. In many retail markets, including some states where retail markets have been opened to competition, prices are still set so that rates are either flat or have been frozen. In either case, prices are not reflective of the hourly costs of providing electricity. In some cases, demand-response programs are in place but are aimed at only certain types of customers, such as some commercial and industrial customers. Overall, these customers account for only a small share of total demand. 
As a result, in this hybrid system, wholesale and retail markets remain disconnected, with competition setting wholesale prices in many areas, and state regulation setting retail prices in many states. Regulatory oversight of the electricity industry remains divided among federal, regional, and state entities. As we have previously reported, FERC initially did not adequately revise its regulatory and oversight approach to respond to the transition to competitive energy markets. However, it has made progress in recent years in defining its role, developing a framework for overseeing the markets, and beginning to use an array of data and analytical tools to oversee the market. In particular, FERC established the Office of Market Oversight and Investigations in 2002, which oversees the markets by monitoring its enforcement hotline for tips on misconduct; conducting investigations and audits; and reviewing large amounts of data—including wholesale spot and futures prices, plant outage information, fuel storage level data, and supply and demand statistics—for anomalies that could lead to potential market problems. In addition to FERC’s own efforts, substantial oversight also now occurs at the regional level, through ISO and RTO market monitoring units. These units monitor their region’s market to identify design flaws, market power abuses, and opportunities for efficiency improvements and report back to FERC periodically. Finally, states’ oversight roles vary. Those states that have not restructured their markets retain key roles in overseeing and regulating electricity markets directly and indirectly through such activities as setting rates to recover costs and siting power plants, transmission lines, and other capital investments needed to supply electricity. The ability of states that have restructured their retail markets to oversee those markets is more limited, according to experts. The effects of restructuring on consumers have been mixed. 
While most studies evaluating wholesale electricity markets, including our own assessment, have determined that progress has been made in introducing competition in wholesale electricity markets, results at the retail level have been difficult to measure. For example, in 2002, we reported that prices generally fell after restructuring and fell in particular in many areas that had implemented retail restructuring. However, we were unable to attribute these price decreases solely to restructuring, since several other factors, such as lower prices for natural gas and other fuels used in the production of electricity, could have contributed to the price decreases. Furthermore, while some consumers had benefited by paying lower prices, others have experienced high prices and market manipulation. For example, in 2002, we reported that nationally, consumers benefited from price declines of as much as 15 percent since federal restructuring efforts began. However, as consumers in California and across other parts of the West will attest, there have been many negative effects, including higher prices and market manipulation. More recently, electricity prices have risen, potentially the result of higher prices for fuels such as natural gas and petroleum, and other factors. We have identified four key challenges that, if addressed, could benefit consumers and the restructured electricity markets that serve them. With several fundamentally different electricity market structures in place simultaneously in various parts of the country, it is important that these markets work together better in order to meet regional needs. As we previously reported, two aspects of the current electricity markets serve to limit the benefits expected from restructuring. 
First, FERC’s limited authority has meant that significant parts of the market and significant amounts of transmission lines have not been subject to FERC’s effort to restructure wholesale markets—creating “holes” in the national restructured wholesale market. These gaps, where efforts to open wholesale markets have not been undertaken, may limit the number of potential participants and the types of transactions that can occur, thereby limiting the benefits expected from competition. Second, where FERC has clear authority, it has historically approved a range of rules for how the different transmission systems and centralized wholesale markets operate—creating “seams” where these different jurisdictions meet and the rules change. We have previously noted that the lack of consistent rules among restructured wholesale markets limits the extent of competition across wholesale markets and, in turn, limits the benefits expected from competition. California experienced this firsthand, as it tried to “cap” wholesale electricity prices in its state market—establishing rules different from those in the markets surrounding California. The lower price cap in California, coupled with an exemption for electricity imports, created incentives to sell electricity to areas outside the state (where prices were higher) and later import it (because imports were exempt from the price cap). FERC has acknowledged that the lack of consistent rules can lead to discrimination in access, raise costs, and lead to reliability problems. As a result, FERC made an effort to standardize the various wholesale market designs under its jurisdiction. However, these efforts met with sharp criticism from some industry stakeholders. FERC ended its effort to require a single market design in all regions and has, instead, promoted voluntary participation in RTOs and having the RTOs work together to reconcile their differences. 
In the end, today’s patchwork of wholesale market structures, with holes and seams, is at odds with the physics of the interdependent electricity industry, where electrons travel at the speed of light and do not stop neatly at jurisdictional boundaries. Successfully developing markets will require the alignment of market structures and rules in order to reconcile them with these physical certainties. Broadening of restructured electricity markets has made the federal government, the states, and localities more dependent on each other in order to ensure a sufficient supply of electricity. We previously concluded that, as federal and state restructuring efforts broaden electricity markets to span multiple states, states will become increasingly dependent on one another for a reliable electricity supply. Consequently, one state’s problems acquiring and maintaining an adequate supply can now affect its neighbors. For example, in the lead-up to the western electricity crisis in 2000-2001, few power plants were built to meet the rising demand in California, which became dependent on power plants located outside the state. However, when prices began to rise, this affected consumers both inside and outside California. We previously reported that these higher prices meant higher electricity bills for California consumers, as well as for consumers outside the state, costing billions of additional dollars. Because of these negative outcomes, some have questioned whether restructuring will eventually benefit consumers. More broadly, rising interdependence has significant implications for many industry stakeholders, especially in light of the shift in how plants are financed and built. In the past, monopoly utilities proposed, and regulators approved, the construction of new power plants and other infrastructure. 
Today, policymakers at all levels of government must recognize that providing consumers with reliable electricity in competitive markets requires private investors to make reasoned investments. We have reported that these private investors make decisions on investing by balancing their perceptions of potential risk and profitability. Further, we concluded that the reliability of the electricity system and, more generally, the success of restructuring, now hinges on whether these developers choose to enter a market and how quickly they are able to respond to the need for new power plants. The implications of this broadening of electricity markets are important, since it has occurred while most of the primary authorities associated with building new power plants, such as state energy siting or local land use planning, still rest with states and localities. As we have reported, there is sometimes considerable variation across states and localities in how long these processes take and how much they cost, and building new power plants can take a year or more once all the approvals are obtained. Because of the broader electricity markets, one state’s or locality’s processes and decisions provide signals affecting private investors’ perceptions of the risk or profitability of making investments in local areas and can have long-lasting implications for the entire region. In this context of growing interdependence for adequate electricity supplies, our work shows that it is important for federal, state, and local entities to provide timely, clear, and consistent signals that allow private developers to make the kinds of reasonable and long-term investments that are needed. As we have previously reported, for competitive wholesale electricity markets to provide the full benefits expected of them, it is essential that they be connected to the retail markets, where most electricity is sold and consumed. 
Otherwise, hybrid electricity markets—wholesale prices set by competition and retail prices set by regulation—will be difficult to manage because consumers at the retail level can unknowingly drive up wholesale prices during periods when electricity supplies are limited. This occurs when consumers do not see prices at the retail level that accurately reflect the higher wholesale market prices. Seeing only these lower electricity prices, consumers use larger quantities of electricity than they would if they saw higher prices, which raises costs and can risk reliability. We have noted that, in this environment (consumers seeing low retail prices during periods of high wholesale prices), consumers have little incentive to reduce their consumption during periods when prices are high or reliability is at risk. Insulating retail consumers from wholesale market fluctuations may seem appealing, but most experts agree that the lack of significant demand response can actually lead to higher and more volatile prices. In 2004, we concluded that this system makes it difficult for FERC to ensure that prices in wholesale markets are just and reasonable. We further concluded that connecting wholesale and retail markets through demand-response programs such as real-time pricing or reliability-based programs would help competitive electricity markets function better, enhance the reliability of the electricity system, and provide important signals that consumers should consider investments in energy-efficient equipment. Such signals would work to reduce overall demand in a more permanent way. While FERC has been supportive of increasing the role of demand-response programs in the wholesale markets that it oversees, there have been limited efforts to do so in retail markets—these markets are outside FERC’s jurisdiction and overseen by the states. 
Some states, such as California, have a long history with demand-response programs and have recently experimented with using them in more widespread ways. Sharing and building upon these and other examples could help develop efficient ways to bring the consumers who flip the light switches into the markets responsible for ensuring that their lights go on. Since electricity travels at the speed of light, retail markets where electricity is consumed are tightly connected to the wholesale markets that supply them. As a result, much of the success of federal restructuring of the wholesale markets relies on actions taken at the state level to bring consumers into the market. Significant changes in how oversight is carried out in competitive markets, combined with the divided regulatory authority over the electricity industry, have made effective oversight difficult. We previously reported that FERC, the states, and other market monitors were neither fully monitoring the overall performance of all wholesale and retail markets nor collecting sufficient data to do so, thus limiting the opportunity to meaningfully compare performance. At the federal level, FERC protects customers primarily by ensuring that prices in the wholesale markets are just and reasonable. In prior work, we found that FERC did not initially revise its oversight approach adequately in response to restructured markets, resulting in markets that were not adequately overseen. However, more recently, we reported that FERC has made significant efforts to revise its oversight strategy to better align with its new role overseeing restructured markets, has taken a more proactive approach to monitoring the performance of markets, and has better aligned its workforce to fit its needs in these new markets. Recent actions will require further changes to FERC’s role. 
The Energy Policy Act of 2005 provided FERC additional authority to establish reliability rules for all “users, owners, and operators” of the transmission system. We had previously reported that this change would be desirable, but it is too early to judge its success. At the state level, oversight varies widely. States that have retained traditionally regulated retail markets continue to require substantial amounts of information to help them set the regulated prices that consumers see. The states that now feature restructured retail markets face a sharply different oversight role of policing their state-level retail markets for misbehavior and signs of market malfunction. The introduction of market monitoring units within ISOs and RTOs adds a new layer of regional oversight to the existing federal and state roles. While authority over the electricity industry is divided, restructuring has made the success of each of the oversight efforts more interdependent, and FERC and the states will have to rely on each other, as well as on new entities, to a greater degree than before to be successful. It is becoming increasingly clear that many of the challenges facing the electricity industry are rooted in the interdependence of actions taken by federal, state, local, and private entities, as well as consumers. Accordingly, the individual challenges we have discussed follow a central theme—the need to integrate the various ongoing activities and efforts and harmonize them in a way that improves the functioning of the marketplace while providing adequate oversight to protect electricity consumers. This will not be easy because it requires what is, at times, most difficult: collaboration and cooperation among entities with a history of independence. 
Successfully restructuring the electricity industry is an ongoing process that will require rethinking old issues, such as jurisdictional responsibilities, and applying new and creative ideas to help bridge the current gap between wholesale and retail markets. Only if interdependent parties work together will electricity restructuring succeed in delivering benefits to U.S. consumers by way of healthy, viable, and competitive markets. Not adequately addressing these issues could result in an electricity industry that does not provide consumers with sufficient quantities of the reliable, reasonably priced electricity that has been a mainstay of our nation’s economic and social progress. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 15 days after the report date. At that time, we will send copies of this report to appropriate congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in the appendix. In addition to the contact named above, Dan Haas, Jon Ludwigson, and Kris Massey made key contributions to this report. Barbara Timmerman, Susan Iott, and Nancy Crothers also made important contributions. Meeting Energy Demand in the 21st Century: Many Challenges and Key Questions. GAO-05-414T. Washington, D.C.: March 16, 2005. Electricity Markets: Consumers Could Benefit from Demand Programs, but Challenges Remain. GAO-04-844. Washington, D.C.: August 13, 2004. 
Energy Markets: Additional Actions Would Help Ensure That FERC’s Oversight and Enforcement Capability Is Comprehensive and Systematic. GAO-03-845. Washington, D.C.: August 15, 2003. Electricity Markets: FERC’s Role in Protecting Consumers. GAO-03-726R. Washington, D.C.: June 6, 2003. Energy Markets: Concerted Actions Needed by FERC to Confront Challenges That Impede Effective Oversight. GAO-02-656. Washington, D.C.: June 14, 2002. Electricity Restructuring: 2003 Blackout Identifies Crisis and Opportunity for the Electricity Sector. GAO-04-204. Washington, D.C.: November 18, 2003. Electricity Restructuring: Action Needed to Address Emerging Gaps in Federal Information Collection. GAO-03-586. Washington, D.C.: June 30, 2003. Lessons Learned from Electricity Restructuring: Transition to Competitive Markets Underway, but Full Benefits Will Take Time and Effort to Achieve. GAO-03-271. Washington, D.C.: December 17, 2002. Restructured Electricity Markets: California Market Design Enabled Exercise of Market Power. GAO-02-828. Washington, D.C.: June 21, 2002. Restructured Electricity Markets: Three States' Experiences in Adding Generating Capacity. GAO-02-427. Washington, D.C.: May 24, 2002. Electric Utility Restructuring: Implications for Electricity R&D. T-RCED-98-144. Washington, D.C.: March 31, 1998. California Electricity Market: Outlook for Summer 2001. GAO-01-870R. Washington, D.C.: June 29, 2001. California Electricity Market Options for 2001: Military Generation and Private Backup Possibilities. GAO-01-865R. Washington, D.C.: June 29, 2001. Energy Markets: Results of Studies Assessing High Electricity Prices in California. GAO-01-857. Washington, D.C.: June 29, 2001. Bonneville Power Administration: Better Management of BPA’s Obligation to Provide Power Is Needed to Control Future Costs. GAO-04-694. 
Washington, D.C.: July 9, 2004. Bonneville Power Administration: Long-Term Fiscal Challenges. GAO-03-918R. Washington, D.C.: June 27, 2003. Federal Power: The Evolution of Preference in Marketing Federal Power. GAO-01-373. Washington, D.C.: February 8, 2001. | The electricity industry is in the midst of many changes, collectively referred to as restructuring, evolving from a highly regulated environment to one that places greater reliance on competition. This restructuring is occurring against a backdrop of constraints and challenges, including a shared responsibility for implementing and enforcing local, state, and federal laws affecting the electricity industry and an expected substantial increase in electricity demanded by consumers by 2025, requiring significant investment in new power plants and transmission lines. Furthermore, several recent incidents, including the largest blackout in U.S. history along the East Coast in 2003 and the energy crisis in California and other parts of the West in 2000 and 2001, have drawn attention to the need to examine the operation and direction of the industry. At Congress's request, this report summarizes results of previous GAO work on electricity restructuring, which was conducted in accordance with generally accepted government auditing standards. In particular, this report provides information on (1) what the federal government has done to restructure the electricity industry and the wholesale markets that it oversees, (2) how electricity markets have changed since restructuring began, and (3) GAO's views on key challenges that remain in restructuring the electricity industry. Over the past 13 years, the federal government has taken a variety of steps to restructure the electricity industry with the goal of increasing competition in wholesale markets and thereby increasing benefits to consumers, including lower electricity prices and access to a wider array of retail services. 
In particular, the federal government has changed (1) how electricity is priced--shifting from prices set by regulators to prices determined by markets; (2) how electricity is supplied--including the addition of new entities that sell electricity; (3) the role of electricity demand--through programs that allow consumers to participate in markets; and (4) how the electricity industry is overseen--in order to ensure consumer protection. Federal restructuring efforts, combined with efforts undertaken by states, have created a patchwork of wholesale and retail electricity markets; broadened electricity supplies; disconnected wholesale markets from retail markets, where most demand occurs; and shifted how the electricity industry is overseen. Taken together, these developments have produced some positive outcomes, such as progress in introducing competition in wholesale electricity markets, as well as some negative outcomes, such as periods of higher prices. We have identified four key challenges to the effective operation of the restructured electricity industry: making wholesale markets work better together so that restructuring can deliver the benefits to consumers that were expected; providing clear and consistent signals to private investors when new plants are needed so that there are adequate supplies to meet regional needs; connecting wholesale markets to retail markets through consumer demand programs to keep prices lower and less volatile; and, resolving divided regulatory authority to ensure that these markets are adequately overseen. The theme cutting across each of these challenges is the need to better integrate the various market structures, factors affecting supply and demand, and various efforts at market oversight. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Federal employees are routinely surveyed through OPM’s administration of the FEVS, which collects data on federal employees’ perceptions about how effectively agencies are managing their workforces. The FEVS is a tool that measures employees’ perceptions of whether, and to what extent, conditions that characterize successful organizations are present in their agencies, according to OPM. The survey was administered for the first time in 2002 and then repeated in 2004, 2006, 2008, 2010, 2011, and April through June 2012. The survey provides general indicators of how well the federal government is managing its human resources management systems. It also serves as a tool for OPM to assess individual agencies and their progress on strategic management of human capital, and gives senior managers employee perspectives on agency management. Specifically, the survey includes categories of questions asking employees for their perspectives on their work experience, work unit, agency, supervisor, leadership, and satisfaction. OPM intends for agency managers to use the findings to develop policies and action plans for improving agency performance. In 2011, OPM provided a summary of FEVS findings to DHS. In that report, OPM summarized DHS’s survey results relative to governmentwide averages and provided positive and negative response levels for each survey question. Also included in the report was action planning guidance for using FEVS results to improve human capital management. One of the systems in OPM’s Human Capital Assessment and Accountability Framework (HCAAF) is talent management, which is focused on agencies having quality people with the appropriate competencies in mission-critical activities. See Pub. L. No. 107-295, § 1304, 116 Stat. 2315, 2289 (2002) (codified at 5 U.S.C. § 1103(c)). The FEVS job satisfaction index is one of the metrics used by OPM to assess whether agencies are effectively managing the talent management system.
The FEVS provides one source of information for evaluating success on other HCAAF standards as well by measuring responses to groups of FEVS questions for four indices. The four index measures are: Leadership and Knowledge Management; Results-Oriented Performance Culture; Talent Management; and Job Satisfaction. In addition, in 2011, OPM added an index to measure employee engagement, which OPM defines as the extent to which an employee is immersed in the content of the job and energized to spend extra effort in job performance. DHS’s OCHCO is responsible for implementing policies and programs to recruit, hire, train, and retain DHS’s workforce. As the department-wide unit responsible for human capital issues within DHS, OCHCO provides OPM with a DHS-wide action plan every other year, with the next plan due in January 2013. OCHCO also provides guidance and oversight to the DHS components related to morale issues. For example, OCHCO provides a survey analysis and action planning tool that the components must use in response to FEVS results to develop action plans for improving employees’ positive scores. These plans are to state objectives and identify actions to be taken in response to survey results. OCHCO also has provided oversight by reviewing and providing feedback on component action plans. Data from the 2011 FEVS show that DHS employees have lower average levels of job satisfaction and engagement overall and across most demographic groups available for comparison, such as pay grade, when compared with the average for the rest of the federal government. Levels of satisfaction and engagement vary across components, with some components reporting satisfaction or engagement above the average for the rest of the government. Similarly, these measures of morale vary within components as well, with some employee groups reporting higher morale than other groups within the same component.
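The index construction described above (an average "percent positive" across a group of survey questions) can be illustrated with a minimal sketch. The question names, responses, and grouping below are invented for illustration and are not actual FEVS items.

```python
# Illustrative sketch of a FEVS-style index: each index is the average
# "percent positive" across a group of survey questions, where a response
# of 4 ("Agree") or 5 ("Strongly Agree") on a 5-point scale counts as
# positive. Question names and responses are hypothetical, not FEVS items.

def percent_positive(responses):
    """Share (0-100) of responses that are 4 or 5 on a 5-point scale."""
    return 100.0 * sum(1 for r in responses if r >= 4) / len(responses)

def index_score(responses_by_question, question_group):
    """Average percent-positive across the questions making up an index."""
    scores = [percent_positive(responses_by_question[q]) for q in question_group]
    return sum(scores) / len(scores)

responses_by_question = {
    "Q_like_my_work": [5, 4, 3, 2, 4],      # 3 of 5 positive -> 60.0
    "Q_pay_satisfaction": [4, 4, 5, 1, 2],  # 3 of 5 positive -> 60.0
    "Q_org_satisfaction": [5, 5, 4, 4, 3],  # 4 of 5 positive -> 80.0
}
job_satisfaction_index = index_score(
    responses_by_question,
    ["Q_like_my_work", "Q_pay_satisfaction", "Q_org_satisfaction"],
)
print(round(job_satisfaction_index, 1))  # -> 66.7
```

In practice OPM applies survey weights so results are representative of the workforce; an unweighted average like this is only a sketch of the index logic.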
As shown in figure 1, DHS employees generally reported improvements in job satisfaction index levels since 2006 that narrowed the gap between DHS and the governmentwide average. However, employees continue to indicate less satisfaction than the governmentwide average. Partnership analysis of FEVS data also indicates consistent levels of low employee satisfaction relative to other federal agencies. Similar to its 2011 ranking (31st of 33 federal agencies), the Partnership ranked DHS 28th of 32 in 2010, 28th of 30 in 2009, and 29th of 30 in 2007 in the Best Places to Work ranking on overall scores for employee satisfaction and commitment. Our analyses of 2011 FEVS results also indicate that average DHS-wide employee satisfaction and engagement scores were consistently lower when compared with average non-DHS employee scores in the same demographic groups. As shown in figure 2, comparisons of DHS with non-DHS employees by supervisory status, pay group, and tenure indicate that satisfaction and engagement are lower across many of the DHS groups where statistically significant differences are evident. For example, across pay categories DHS satisfaction and engagement were lower than the scores for the same non-DHS employee pay groups, with the exception of senior executives, senior leaders, employees with less than 1 year of tenure, and General Schedule pay grades 1-6. Similarly, job satisfaction and engagement scores for DHS management and non-management employees were lower than for the same non-DHS employee groups. DHS and the selected components have taken steps to understand morale problems, such as holding focus groups, implementing an exit survey, and routinely analyzing FEVS results. On the basis of FEVS results, DHS and the selected components planned actions to improve FEVS scores. However, we found that DHS could enhance its survey analysis and monitoring of action plan results.
In addition, according to DHS’s Integrated Strategy for addressing the implementing and transforming high risk area, DHS has begun implementing activities to address morale but has not yet improved DHS’s scores on OPM’s job satisfaction index or its ranking on the Partnership’s Best Places to Work in the Federal Government. DHS’s OCHCO has taken several steps to understand morale problems DHS-wide. Specifically, since 2007, OCHCO: Conducted focus groups DHS-wide in 2007 to determine employee concerns related to morale, which identified employee concerns in areas of leadership, communication, empowerment, and resources. Performed statistical analysis in 2008 to identify workplace factors that drove employee job satisfaction, finding that the DHS mission and supervisor support, among other things, drove employee job satisfaction. Initiated an exit survey, first administered DHS-wide in 2011, to understand why employees chose to leave their position. The survey found lack of quality supervision and advancement opportunities were the top reasons for leaving. Analyzed 2011 FEVS results, among other things, showing where lower scores on HCAAF indices were concentrated among several components—Intelligence and Analysis, TSA, ICE, National Protection and Programs Directorate, and the Federal Emergency Management Agency (FEMA). Launched an Employee Engagement Executive Steering Committee (EEESC) in January 2012 that will identify action items for improving employee engagement by September 2012, according to OCHCO officials. The selected components also evaluated FEVS results to identify morale problems and considered additional information sources. 
For example: TSA convened a corporate action planning team in March 2011, as part of its response to FEVS results, which relied on data sources such as the TSA-administered exit survey, employee advisory groups, and an online employee suggestion tool, to gain perspectives on systemic challenge areas and to develop plans to address morale, according to TSA officials. TSA’s action plan for improving morale, based on these sources, was completed in July 2012. ICE considered results of a Federal Organizational Climate Survey (FOCS), last completed in March 2012, and held focus groups to gauge the extent to which employees view ICE as having an organizational culture that promotes diversity. CBP launched a quarterly online employee survey in 2009 to solicit opinions on one specific topic per quarter, such as use of career development resources and how the resources contributed to employees’ professional growth at CBP. The Coast Guard relied on an Organizational Assessment Survey (OAS), last administered by OPM in 2010, to understand employee morale. The OAS solicits opinions on a range of topics, including job satisfaction, leadership, training, innovation, and use of resources. It included civilian and military Coast Guard personnel, but is not administered governmentwide so comparisons between the Coast Guard and other federal employees are limited to organizations that may use the OAS, according to Coast Guard officials. Appendix III provides more detailed descriptions of DHS’s steps to address morale problems and selected components’ 2011 FEVS analysis methods and findings. Appendix IV provides additional information on the selected components’ data sources beyond FEVS for evaluating root causes of morale, including a summary of results and how the information was used by the components. For the 2011 FEVS, DHS and the selected components completed varying levels of analyses to determine the root causes of low morale. 
However, DHS and the selected components conducted limited analysis in several areas that is not consistent with OPM and Partnership guidance that lays out useful factors for evaluating root causes of morale problems through FEVS analysis, as shown in figure 4. Usage of the three factors described in figure 4 varied across DHS-wide and component-level 2011 FEVS analyses we reviewed. In some instances, the factors were partially or not used. For example: Demographic group comparisons. According to our reviews of OCHCO’s analyses, OCHCO’s DHS-wide analyses did not include evaluations of demographic group differences on morale-related issues for the 2011 FEVS. According to OCHCO officials, DHS’s Office of Civil Rights and Civil Liberties reviews survey results to identify diversity issues that may be reflected in the survey, and OCHCO officials considered these results when developing one of the current (as of August 2012) DHS action plans to create policies that identify barriers to diversity. In 2007 and 2009, years in which DHS administered the Annual Employee Survey (AES), demographic comparisons were made. For example, on the basis of 2009 AES data, DHS found no significant demographic differences other than supervisors’ positive responses to questions were generally higher than those of non-supervisors and differences among pay grade levels. Because OPM now administers the survey each year, DHS is not able to make significant demographic group comparisons because of the format of the data provided by OPM, according to OCHCO officials. However, we obtained FEVS data from OPM that allowed us to make demographic group comparisons. For example, we compared DHS and non-DHS employee satisfaction and engagement scores across available demographic groups and found that both satisfaction and engagement were generally lower for DHS employees, which is summarized in appendix I, table 5. 
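A demographic group comparison of the kind described above can be sketched as follows. The agency flags, pay groups, and scores are invented for illustration and are not the FEVS microdata.

```python
# Illustrative sketch (invented scores, not FEVS microdata) of comparing
# mean satisfaction by demographic group between DHS and non-DHS
# respondents, as in the DHS vs. non-DHS comparisons described above.

from collections import defaultdict

# Each record: (agency flag, demographic group, satisfaction score 0-100).
records = [
    ("DHS", "GS 7-12", 55), ("DHS", "GS 7-12", 60),
    ("DHS", "Senior Executive", 80),
    ("non-DHS", "GS 7-12", 65), ("non-DHS", "GS 7-12", 70),
    ("non-DHS", "Senior Executive", 78),
]

def group_means(records):
    """Mean score for each (agency, demographic group) pair."""
    sums = defaultdict(lambda: [0.0, 0])
    for agency, group, score in records:
        sums[(agency, group)][0] += score
        sums[(agency, group)][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

means = group_means(records)
for group in ["GS 7-12", "Senior Executive"]:
    gap = means[("DHS", group)] - means[("non-DHS", group)]
    print(f"{group}: DHS minus non-DHS = {gap:+.1f} points")
```

With real survey data, each group's gap would also be tested for statistical significance before drawing conclusions, as GAO's analysis did.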
For the DHS component analyses we reviewed, TSA and CBP conducted some demographic analysis. For example, TSA compared screeners, Federal Security Director staff, Federal Air Marshals, and headquarters staff on each FEVS dimension (e.g., work experiences, supervisor/leader, satisfaction, and work/life). As a result, TSA was able to identify screeners as having survey scores below those of other TSA employee groups. CBP also compared race, ethnicity, gender, and program office scores. CBP found that no significant differences were present in the positive responses to the 2011 FEVS core questions when comparing race, ethnicity, and gender, and found that Border Patrol employees reported higher job satisfaction than field operations employees (74 versus 66 percent on the job satisfaction index). In contrast, the Coast Guard did not conduct analysis beyond the data provided by DHS OCHCO. Because OCHCO’s data did not include demographic information for the 2011 FEVS, the Coast Guard did not make demographic group comparisons. ICE and CBP officials stated that they did not have access to 2011 FEVS data files necessary to conduct more detailed demographic comparisons. However, as shown in appendix I, we were able to make various demographic comparisons based on a more detailed data file provided by OPM, which is similar to a file that OPM makes available to agencies and the public. Benchmarking against similar organizations. TSA benchmarked its FEVS results against results from similar organizations, by comparing results with CBP, and OCHCO’s DHS-wide analysis highlighted Partnership rankings data, showing DHS’s position relative to the positions of other federal agencies as a Best Place to Work. Similarly, ICE benchmarked its FEVS results overall and for program offices, such as homeland security investigators, against other DHS components, including the U.S. Secret Service and CBP.
For the 2011 FEVS, CBP performed more limited benchmarking, by comparing FEVS results with governmentwide averages. According to CBP officials, when analyzing annual employee surveys prior to 2011, CBP benchmarked its results against agencies with high positive FEVS scores, such as the Social Security Administration, the Federal Bureau of Investigation, the Internal Revenue Service, and the Nuclear Regulatory Commission. CBP is in the initial planning phase of a larger benchmarking project that would benchmark CBP against foreign immigration, customs, and agriculture inspection agencies, such as the Canadian Border Services Agency and the Australian Customs and Border Protection Service. If approved, this benchmarking project is expected to occur in fiscal year 2013, according to CBP officials. The Coast Guard did not perform FEVS benchmarking analysis, according to the documentation we reviewed, but did make OAS-based comparisons between the Coast Guard and other organizations that use the OAS, according to Coast Guard officials. Linkage of root causes with action plans. For both DHS-wide and selected component action plans, FEVS questions with low scores were linked with action plan areas. For example, in the DHS-wide action plan, low scores on employee satisfaction with opportunities to get a better job in the organization were linked to action plan items for enhancing employee retention. However, the extent to which DHS and the components used root causes found through other analyses to inform their action plans, such as quarterly exit survey results or additional internal component surveys, was not evident in action plan documentation (see appendix IV for a description of these additional root cause analyses). For example, OCHCO’s DHS-wide action plan was last updated based on 2010 FEVS data and therefore did not rely on data from the DHS 2011 exit survey, since those results were not published until January 2012. 
Similarly, the EEESC was launched in January 2012 and therefore its efforts are not yet documented in DHS-wide action planning documents. According to OCHCO officials, the 2010 DHS-wide action plan includes consideration of results from OCHCO’s 2008 statistical analysis identifying key drivers of job satisfaction and results from the 2007 focus groups. However, the linkage of items in the DHS-wide action plan to these results is not clearly identified because a new action plan template OPM introduced in 2010 did not provide an area to identify the linkage between each action and the driver, according to OCHCO officials. In addition, DHS’s September 2009 action plan indicates consideration of the 2008 key driver analysis and 2007 focus group effort that led to a focus on leadership effectiveness initiatives. According to CBP and TSA officials, data from other root cause analysis efforts are not explicitly documented in action plans developed in response to FEVS results because DHS has not included linkage of other root cause analysis efforts to action items in the FEVS action planning templates used by the components. TSA officials also stated that other root cause efforts (see appendix IV) were used to develop TSA’s July 2012 action plan update. However, the July 2012 plan did not include linkage of root cause findings other than FEVS results, such as exit survey results, to action plan items. ICE officials stated that results from other root cause efforts, such as its FOCS, have not yet been considered in FEVS-based action planning but that ICE plans to do so in future efforts to address morale. The Coast Guard uses information from its OAS as part of a process separate from FEVS-based action planning for addressing morale, so OAS results are not linked to FEVS-based action plans. OCHCO and component human capital officials described several reasons for the variation in root cause analysis of FEVS results.
OCHCO officials described resource constraints and leadership changes within the OCHCO position as resulting in a lack of continuity in root cause analysis efforts. For example, one OCHCO official stated that because of resource constraints, OCHCO has focused more efforts on workforce planning than on morale problem analysis since 2009. ICE human capital officials stated that ICE’s human capital services were provided via a contract with CBP until 2010, when the human capital function became an independently funded part of the ICE organization. Only since moving to its current position within ICE has the human capital office been able to devote more resources to addressing morale issues, according to the officials. CBP human capital officials stated that for assessing morale issues, CBP uses both quantitative and qualitative information. However, according to the officials, qualitative evidence is preferable over quantitative survey analysis because focus groups and open-ended surveys, such as the Most Valuable Perspective online survey, allow CBP to better understand the issues affecting employees. Because of CBP human capital officials’ preference for qualitative information, CBP has not emphasized extensive quantitative analysis of survey results, such as statistical analysis that may determine underlying causes of morale problems. Without a complete understanding of which issues are driving low employee morale, DHS risks not being able to effectively address the underlying concerns of its varied employee population. Emphasis on survey analysis that includes demographic group comparisons, benchmarking against similar organizations, and linkage of other analysis efforts outside of FEVS within action plan documentation could assist DHS in better addressing its employee morale problems. 
DHS and the selected components routinely update their action plans to address employee survey results in accordance with the Office of Management and Budget’s budget guidance; the DHS-wide plan is updated every two years, and components update their plans at least annually. According to OPM’s guide for using FEVS results, action planning involves, among other things, identifying goals and actions for improving low-scoring FEVS satisfaction topics, such as reviewing survey results to determine steps to be taken to improve how the agency manages its workforce. DHS-wide and component action plan goals and examples of low-scoring FEVS satisfaction topics are listed in table 2. As part of DHS’s efforts to address our high-risk designation of implementing and transforming DHS, DHS described a plan for improving employee morale in its Integrated Strategy for High Risk Management (Integrated Strategy). In June 2012, DHS provided us with its updated Integrated Strategy, which summarized the status of the department’s activities for addressing its implementation and transformation high-risk designation. In the Integrated Strategy, DHS identified activities to improve employee job satisfaction scores, among other things. The status of the activities included ongoing analysis of the 2011 FEVS results, launch of the EEESC to address DHS scores on the HCAAF indexes, ongoing coordination between the OCHCO and components to develop action plans in response to the 2011 FEVS results, and launch of an online employee survey in the first quarter of fiscal year 2013. Within the Integrated Strategy action plan for improving job satisfaction scores, DHS reported that three of six efforts were hindered by a lack of resources. For example, resources are a constraining factor for DHS’s Office of the Chief Human Capital Officer to consult with components in developing action plans in response to 2011 FEVS results.
Similarly, resources are a constraining factor in deploying online focus discussions on job satisfaction-related issues. According to our review of the action plans created in response to the FEVS and interviews with agency officials, DHS and the selected components generally incorporated the six action planning steps suggested by OPM, but the agency does not have effective metrics to support its efforts related to monitoring. (See figure 5.) We found that, in general, DHS and its components are implementing the six steps for action planning as demonstrated in table 3 below. Three attributes are relevant to successful performance measures: linkage—determines whether there is a relationship between the performance measure and the goals; clarity—determines whether the performance measures are clearly stated; and measurable target—determines whether performance measures have quantifiable, numerical targets or other measurable values, where appropriate. In general, DHS and component measures satisfied the linkage attribute but did not address the clarity and measurable target attributes. We compared DHS and the four components’ measures of success to the three attributes and found that all 54 measures of success incorporated the linkage attribute, 12 of the 54 measures of success did not address the clarity attribute, and 29 of the 54 measures of success did not address the measurable target attribute. As shown in table 4 below, we found that these measures demonstrate linkage because they align with the action plan goals. However, we determined that the measures demonstrate neither clarity nor a measurable target. Specifically, the measures do not demonstrate clarity because they do not provide enough detail to clearly state the metric used to measure success.
They also do not demonstrate a measurable target because they do not list quantitative goals or provide a qualitative predictor of a desired outcome, which would allow the agency to better determine the extent to which they were making progress toward achieving their goals. Officials provided several reasons why their measures of success may fall short of the attributes for successful metrics. According to OCHCO officials, OCHCO considers accomplishment of an action item step as a success and relies on the measures of success listed in its action plan as a metric for whether the action plan items were implemented. OCHCO considers whether positive responses to survey questions noted in the action plan improve over time as the outcome measure for whether action plans are effective. However, as part of its oversight and feedback on component action plans, OCHCO does not monitor or evaluate measures of success for action planning and therefore is not in a position to determine whether the measures reflect improvement. CBP officials stated that they monitor the change in FEVS results overall, as the intent of the action planning is to improve their scores on the HCAAF indexes. Coast Guard officials stated that they rely on qualitative feedback from employees on action plan items, such as improved training and website updates, to measure action plan performance. TSA officials stated they assess action plan results by tracking completion dates for action items and updating OCHCO on results at least semi-annually, and ICE officials stated they have not yet fully developed monitoring efforts to evaluate job satisfaction action planning because the human capital office received funding in the summer of 2011 to implement human capital programs. We acknowledge that positive responses in survey results and positive employee feedback are good indicators that action planning is working.
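The measurable target attribute lends itself to a simple mechanical screen. The sketch below is hypothetical (not an actual DHS or GAO tool) and uses a crude proxy: a measure with a quantifiable target usually states a number.

```python
# Hypothetical screen for the "measurable target" attribute: flag action
# plan measures of success that state no quantifiable target. The numeric
# check is a crude proxy, not an actual DHS or GAO evaluation rule.

import re

def has_measurable_target(measure: str) -> bool:
    """True if the measure text contains a number (a rough proxy for a
    quantifiable, numerical target)."""
    return bool(re.search(r"\d", measure))

measures = [
    "Improved employee communication",                 # vague, no target
    "Raise the job satisfaction index to 65 percent",  # numerical target
]
flags = [has_measurable_target(m) for m in measures]
print(flags)  # -> [False, True]
```

A screen like this only flags candidates; judging the clarity attribute still requires reading each measure, since a number alone does not make a metric well defined.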
However, until DHS and its components begin to see positive results, it is important for them to (1) understand whether they are successfully implementing the individual steps of their action plans and (2) make any necessary changes to improve on them. By not having specific metrics within the action plans that are clear and measurable, it will be more difficult for DHS to assess its efforts to address employee morale problems, as well as determine if changes should be made to ensure progress toward achieving its goals. Furthermore, effective measures are key to DHS’s action plan as it is part of a process that informs the Office of Management and Budget and OPM of DHS efforts to address survey results. According to an OPM official responsible for federal action planning to improve morale, DHS should carefully consider, for each action step, what success means to the agency, such as increased employee engagement targets. The official said that when success is defined, it should not only be clear and measurable, but should also take into account as many of the different demographic groups evaluated as possible. DHS and the selected components have initiated efforts to determine how other entities approach employee morale issues. DHS officials stated they have started to review and implement what they consider to be best practices for improving employee morale, such as the following: DHS working group—OCHCO leads a survey engagement team that holds monthly meetings during which action planning efforts from across the different components are shared and discussed. Representatives from other federal agencies such as the National Aeronautics and Space Administration and the Federal Aviation Administration have also attended these meetings and presented their action plans for addressing survey results. Idea Factory—a TSA web-based tool adopted by DHS that empowers employees to develop, rate, and improve innovative ideas for programs, processes, and technologies. 
According to a DHS assessment, the Under Secretary for Management plans to use this tool for internal DHS employee communication so as to promote greater job satisfaction and enhance organization effectiveness. Component officials we interviewed also stated they have started to review, implement, and share what they consider to be best practices for improving morale. For example: ICE officials stated they consult with other agencies and DHS components, such as the U.S. Marshal’s Service, when addressing morale challenges and developing policies and programs. For example, the U.S. Marshal’s Service has a critical incident response program for employees encountering a traumatic event and ICE is exploring adopting a similar program. TSA officials stated that they reached out to Marriott Corporation, CBP, and the National Aeronautics and Space Administration to identify actions for increasing employee rewards and employee confidence in leadership. CBP officials stated they have established several ongoing working groups that routinely meet and share human capital best practices within the agency. One of these working groups has conducted benchmarking work with high-FEVS-scoring federal agencies such as the Social Security Administration, the U.S. Secret Service, the Federal Bureau of Investigation, the Internal Revenue Service and the Nuclear Regulatory Commission. Coast Guard officials stated they share human capital best practices that may improve job satisfaction with other DHS components such as (1) their performance appraisal system which was adopted, in part, DHS-wide; (2) their automated cash award process with FEMA; and (3) Coast Guard training to supervisors with both DHS headquarters officials and FEMA. Given the critical nature of DHS’s mission to protect the security and economy of our nation, it is important that DHS employees are satisfied with their jobs so that DHS can retain and attract the talent required to complete its work. 
Employee survey data indicate that when compared to other federal employees, many DHS employees report being dissatisfied and not engaged with their jobs. It is imperative that DHS understand what is driving employee morale problems and address those problems through targeted actions that address employees’ underlying concerns. DHS has made efforts to understand morale issues across the department, but those efforts could be improved. Specifically, given the annual employee survey data available through the FEVS, DHS and its components could improve their efforts to determine root causes of morale problems by comparing demographic groups, benchmarking against similar organizations, and linking root cause findings to action plans. Uncovering root causes of morale problems could help identify appropriate actions to take in efforts to improve morale. In addition, DHS has established performance measures for its action plans to improve morale, but incorporating attributes such as improved clarity and measurable targets could better position DHS to determine whether its action plans are effective. Without doing so, DHS will have a more difficult time determining whether it is achieving its goals. To strengthen DHS’s evaluation and planning process for addressing employee morale, we recommend that the Secretary of Homeland Security direct OCHCO and component human capital officials to take the following two actions: examine their root cause analysis efforts and, where absent, add the following: comparisons of demographic groups, benchmarking against similar organizations, and linkage of root cause findings to action plans; and establish metrics of success within the action plans that are clear and measurable. We requested comments on a draft of this report from DHS. On September 25, 2012, DHS provided written comments, which are reprinted in appendix V, and provided technical comments, which we incorporated as appropriate. 
DHS concurred with our two recommendations and described actions planned to address them. Specifically:

- DHS stated that it will ensure that department-wide and component action plans are tied to root causes and that the department will conduct benchmarking against other organizations. DHS also stated that its ability to conduct demographic analysis is limited due to the data set OPM makes available to federal agencies. However, according to OPM, DHS has access to the data necessary for conducting analysis similar to our comparison of demographic groups.
- DHS stated it will review action plans to ensure that each action is clear and measurable.

We also requested comments on a draft of this report from OPM. On September 18, 2012, OPM provided a written response, which is reprinted in appendix VI. OPM's letter indicated that it reviewed the draft report and had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, the U.S. Office of Personnel Management, and interested congressional committees. The report also will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. We conducted a statistical analysis of the 2011 Federal Employee Viewpoint Survey (FEVS) to assess employee morale at the Department of Homeland Security (DHS). Our analysis addressed two specific questions. First, how does morale at DHS and its components compare with morale at other agencies, holding constant demographic differences among employees?
Second, to what extent is the morale gap between DHS and other agencies explained by differences in the demographic composition of the DHS workforce versus other unique characteristics of the agency or unmeasured demographic factors? This appendix explains the value of statistical analysis for understanding the employee morale gap, describes the data and methods we used, and provides additional details about our findings, which are summarized in the body of the report. In sum:

- DHS employees with the same demographic profiles (as measured by FEVS) were about 7 percentage points less engaged and 6 points less satisfied than non-DHS employees.
- Demographic differences (as measured by FEVS) between DHS and other agencies are unlikely to explain the overall morale gap. Unique features of DHS (or unmeasured demographics) are more likely to be responsible.
- DHS middle managers and employees with 1 to 10 years of tenure at their components (those hired after the department's creation) have lower morale than similar employees at other departments.
- Morale varies widely across DHS components, and some have morale similar to that at non-DHS agencies. Individual offices can strongly influence the morale gap at the component level.
- The morale gap is smaller for DHS components that existed before the department was created.

The morale gap between DHS and other agencies may be due to unique issues within DHS or common issues faced by all agencies in similar circumstances. Unique issues might include developing an agency-wide culture, the decisions and composition of senior leaders, and the inherent uniqueness of homeland security programs. Common characteristics might include having many law enforcement and front-line customer service occupations, and having employees dispersed among many headquarters and field offices. Determining whether unique or shared issues account for the overall morale gap is important for understanding the cause of the problem.
If morale at DHS was not uniquely low, compared with morale at agencies with similar demographics and programs, the agency might learn from peer agencies facing similar challenges. Alternatively, if morale was lower at DHS for reasons unique to the agency, DHS might put more emphasis on understanding its own particular challenges. Distinguishing among these possible explanations can help develop a solution that is narrowly tailored to the problem. Our analysis focused on one group of shared circumstances that might explain the morale gap: employee demographics. If DHS were more likely to employ the types of workers who tend to have lower morale across all agencies of the government, the composition of the workforce might account for the gap to a greater extent than factors specific to DHS. In other words, morale at DHS may be no worse than at other agencies among demographically equivalent employees. Our analysis focused on a limited number of demographic differences, such as location and age, but attitudinal differences about pay, benefits, supervision, training, mentoring, and other human capital issues could be assessed in a similar way. We also considered how large a morale gap there was between employees in various DHS components and work groups and non-DHS employees. The gap at the department level can mask groups of employees with higher or lower morale. Disaggregating morale into small work groups identifies areas of DHS in which morale may be high or low, and thus provides sufficiently detailed data for focused solutions to the problem. Any analysis of morale in employee surveys is limited by the fact that associations among the variables of interest may not represent cause-and-effect relationships. Nevertheless, a limited observational analysis remains useful for evaluating human capital programs.
Since federal agencies cannot easily conduct high-quality randomized controlled trials of various approaches to managing their employees, the use of observational methods is common, often in the form of quantitative survey analyses or qualitative interviews and focus groups. We have previously found that a pragmatic approach to answering necessary policy questions, using the best methods and data that are feasible, is widely supported by academic experts and practitioners in policy analysis. Moreover, statistical theory has shown that observational methods can estimate cause-and-effect relationships under certain conditions. Associations between morale and demographic characteristics are useful for understanding the operation of human capital programs, when interpreted cautiously and in the context of all the available evidence. Our analysis here describes patterns across the demographic groups identified in the 2011 FEVS and determines whether the aggregate differences between DHS and other agencies persist among demographically similar employees. We make no causal interpretations of these relationships, and our approach is only one of several that might be valid and useful. The Office of Personnel Management (OPM) provided us with a version of the 2011 FEVS that included more detailed demographic and organizational data than the file it released to the public. Specifically, our file contained the same variables as the public file but identified more detailed groups of employees. The 2011 survey included responses from 266,376 full-time, permanent federal employees, working for agencies that, according to OPM, constituted 97 percent of the executive branch workforce.
OPM sampled employees within strata formed by supervisory status and organizational subgroup (e.g., component and work group). This produced generally large sample sizes even for many small work groups within components, which allowed us to analyze morale among small groups of employees with an acceptable degree of precision. We focused on two types of variables in the FEVS: (1) employee demographics and (2) OPM's Employee Engagement and Job Satisfaction indexes. A series of questions at the end of the survey collected the demographic data, rather than preexisting administrative records. OPM reported independently developing and validating the engagement indexes using factor-analytic procedures, which are common psychometric statistical methods. The survey items that made up each index used five-point, Likert-type scales, with "agree/disagree," "satisfied/dissatisfied," or "good/poor" response options. We used weights provided by OPM to calculate estimates and sampling variances for all analyses. The weights were the product of the unequal sampling probabilities across strata and non-response and post-stratification adjustments. Because some strata had relatively small population sizes—one-quarter with 18 employees or fewer—we corrected for finite populations. One explanation for lower morale at DHS is that its employees could be members of demographic groups that typically have lower morale across all agencies. If this is true, the cause of morale problems and their solutions might focus less on factors that are unique to DHS and more on approaches that apply to any agency with a similar workforce. Table 5 provides basic evidence to help assess the demographic explanation. The table presents the average OPM Engagement Index for several demographic groups in the 2011 FEVS. If engagement problems at DHS were isolated to particular subgroups of employees, the morale gap should vary widely across those subgroups.
In fact, engagement at DHS is lower than at other agencies (or the difference is statistically indistinguishable from zero) in each demographic subgroup we analyzed, and the gap relative to DHS does not vary by large amounts across most subgroups. However, the gap is somewhat larger among employees in certain subgroups, such as those who had 4 to 10 years of experience with their components and those who worked outside of headquarters. We developed several statistical models to further assess the demographic explanation. These models held constant the demographic profiles of DHS and non-DHS employees, in order to isolate the portion of the morale gap that was specifically due to non-demographic factors. The models allowed us to compare morale at DHS and other agencies among employees who were in the same demographic groups, as measured by the FEVS. To avoid methodological complications with modeling latent variables, we created a binary measure that identified whether a respondent was engaged or satisfied on each item in the respective scales. Our measure equaled 1 if the respondent gave positive answers (4 or 5) to each item in the index and 0 if the respondent gave neutral or negative responses (1, 2, or 3) to at least one item. Collapsing the scale loses some information, since morale and satisfaction are continuous, latent variables. However, a collapsed measure provides some degree of comparability between OPM's aggregate indexes and our individual-level analysis, since OPM's indexes also collapse the scale. The differences among agencies and subgroups of employees are generally similar using either our measure or OPM's. We focused on the associations between broad measures of morale and fixed demographic characteristics available in the 2011 FEVS. Fixed demographics and broad measures of satisfaction are not subject to the artificially high correlations that a survey's design can produce among attitudinal measures.
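The collapsing rule for the binary measure can be expressed in a few lines of Python. This is an illustrative sketch of the rule described above, not OPM's or DHS's actual code, and the item names are hypothetical placeholders rather than real FEVS question identifiers.

```python
def morale_flag(responses):
    """Collapse one respondent's 5-point Likert answers into the binary
    measure: 1 only if every item in the index received a positive answer
    (4 or 5); 0 if any item received a neutral or negative answer (1, 2, or 3).
    """
    return int(all(answer >= 4 for answer in responses.values()))

# Example respondents (item names are hypothetical):
engaged = morale_flag({"item1": 5, "item2": 4, "item3": 4})      # 1
not_engaged = morale_flag({"item1": 5, "item2": 3, "item3": 5})  # 0
```

Note that a single neutral answer is enough to set the flag to 0, which is what makes the collapsed measure conservative relative to an averaged scale.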
Our two models took the following forms:

Pr(Morale_ij = 1) = Λ(α + δ·DHS_j + Demog_ij·β)   (1)

Pr(Morale_ij = 1) = Λ(α + δ·DHS_j + DHS_j·Demog_ij·β_D + (1 − DHS_j)·Demog_ij·β_G)   (2)

Morale_ij indicates whether employee i at agency j was engaged or satisfied, using the binary measure we calculated from the survey items that make up the OPM indexes (see above). DHS_j indicates whether the employee worked for DHS, Demog_ij is a vector of demographic indicators (listed in table 6), Λ is the logistic function, and α, δ, and the β terms are coefficients that estimate how morale varied among employees in different demographic groups. We included all demographic factors measured by the FEVS that plausibly could have predicted morale and were clearly causally prior to morale. We excluded pay group, however, because of its high correlation with supervisory status. Model 2 allows DHS and non-DHS employees in the same demographic groups to have different levels of morale, as described by β_D and β_G. We estimated each model using cluster-robust maximum likelihood methods, with 365 agency clusters (e.g., the Transportation Security Administration). Our multivariate analysis found that DHS employees remained an average of 6.4 percentage points less engaged (+/- 3.2) (see table 6) and 5.5 points less satisfied (+/- 2.2) (not shown) on our scales than employees at other agencies who had the same age, office location, race, sex, supervisory status, and tenure. This suggests that measured demographic differences between employees at DHS and other agencies do not fully explain the morale gap. Instead, factors that are intrinsic to DHS, such as culture or management practices, or demographic factors not measured by FEVS, such as education or occupation, are likely to be responsible. We can further explore the roles of demographics and unique DHS characteristics by performing an Oaxaca decomposition of the results of model 2, in order to compare DHS with other agencies.
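To make the logistic form concrete, the following Python sketch computes the predicted probability of engagement for a single employee under the first model. The function and parameter names are our own, the coefficient values in any example call would be invented for illustration, and the estimation step itself (cluster-robust maximum likelihood) is not shown.

```python
import math

def logistic(x):
    """Λ, the logistic function used in the models above."""
    return 1.0 / (1.0 + math.exp(-x))

def predicted_morale(alpha, beta_dhs, dhs, demog, beta):
    """Predicted Pr(Morale = 1) for one employee.

    `alpha` is the intercept, `beta_dhs` is the log-odds shift for DHS
    employees, `dhs` is 1 for a DHS employee and 0 otherwise, and `demog`
    and `beta` are parallel sequences of demographic indicators and their
    coefficients.
    """
    linear_index = alpha + beta_dhs * dhs + sum(d * b for d, b in zip(demog, beta))
    return logistic(linear_index)
```

A negative `beta_dhs` pushes the predicted probability below that of an otherwise identical non-DHS employee, which is the pattern the estimates in table 6 describe.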
Oaxaca decomposition can assess whether the overall morale gap is explained by the demographic characteristics of DHS employees, or whether it is explained by lower morale among DHS employees in the same demographic groups. In other words, does DHS employ an unusually large number of workers who tend to have low morale across all agencies, or do workers with the same backgrounds have uniquely lower morale at DHS? As shown in table 6, the model suggests that the demographic profile of DHS employees (measured by FEVS) tends to slightly increase their engagement and reduce the gap compared with employees at other agencies. The demographic characteristics we can observe in FEVS reduce the overall gaps in the proportion engaged and satisfied on our scales by 0.1 and 1.0 percentage points, respectively. Instead, the morale gap is better explained by unique differences in morale between DHS and other agencies among demographically similar employees. Such intrinsic differences increase the gaps in the proportion engaged and satisfied by 6.4 and 5.5 percentage points, respectively. If the demographic profile of the DHS workforce did not change, but DHS could achieve the same levels of morale as other agencies from the same types of employees, our model predicts that DHS employees would not have lower morale than employees at other agencies. DHS employees with lower-level positions and component tenure were among those with lower morale, relative to employees in other agencies. As shown in figures 6 and 7, our measures of engagement and satisfaction generally increased with seniority and decreased with tenure, among employees at DHS and other agencies. At DHS, however, morale increased more slowly as employees gained more seniority, and it declined more quickly as they spent more time at the agency. For example, the average newly hired employee at DHS and similar employees at other agencies had statistically indistinguishable levels of engagement. 
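The logic of a two-fold Oaxaca decomposition can be sketched for the simpler linear case as follows. The report's decomposition is applied to a nonlinear (logit) model, so this is a simplified illustration with invented function and argument names, not the actual computation.

```python
def oaxaca_twofold(xbar_a, xbar_b, beta_a, beta_b):
    """Split the gap in mean outcomes between group A and group B into an
    'explained' part (differences in characteristics, valued at group B's
    coefficients) and an 'unexplained' part (differences in coefficients,
    valued at group A's characteristics).

    Each argument is a sequence; include a leading 1 in the x-vectors so
    the intercept difference lands in the unexplained part.
    """
    explained = sum((xa - xb) * bb for xa, xb, bb in zip(xbar_a, xbar_b, beta_b))
    unexplained = sum(xa * (ba - bb) for xa, ba, bb in zip(xbar_a, beta_a, beta_b))
    return explained, unexplained
```

By construction, the two parts sum to the overall gap, which is what lets the analysis say how much of the DHS morale gap is attributable to workforce composition versus uniquely lower morale among similar employees.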
By their sixth years, however, satisfaction for the average DHS employee had declined to 18 percentage points, whereas satisfaction for similar non-DHS employees had declined only to 26 percentage points. A similar pattern exists with respect to supervisory status (see figures 6 and 7). These patterns are particularly important for explaining the overall morale gap, because DHS had about 30 percent more supervisors and about twice as many people with 6 to 10 years of component tenure (as a share of all employees), compared with other agencies (according to FEVS). Low employee morale is not a uniform problem throughout DHS. As shown in table 7, engagement varies widely across components within the department, with employees in some components not being significantly different from the average employee at non-DHS agencies. These components include the U.S. Coast Guard (Coast Guard), Federal Law Enforcement Training Center (FLETC), Management Directorate (MGMT), and U.S. Secret Service (USSS). Job satisfaction at these components also matches or exceeds that found at other agencies (not shown in table 7). DHS has a number of components whose employees have substantially lower morale than employees at other agencies and elsewhere in the department. The large share of DHS employees working in these components accounts for the overall morale gap between DHS and other agencies. Components with lower morale include the Federal Emergency Management Agency (FEMA), Immigration and Customs Enforcement (ICE), Intelligence and Analysis (IA), National Protection and Programs Directorate (NPPD), Science and Technology (ST), and TSA. The engagement scores of these components range from 9.1 to 13.9 percentage points lower than the average score for non-DHS agencies (see table 7). As a group, these components make up 46 percent of the employees interviewed for the FEVS.
Consequently, the components with substantially lower morale have a large influence on the gap relative to the rest of the government, despite the fact that morale at many smaller DHS components is no worse. Morale at some of the less engaged and satisfied components is, in turn, strongly influenced by particular employee workgroups (see table 7). For example, the average engagement at TSA is 12.8 percentage points (apart from rounding) lower than at non-DHS agencies. Within TSA, however, the collectively large groups of air marshal, law enforcement, and screening workers account for much of the overall difference. A similar pattern applies to the enforcement, removal, and homeland security investigation staffs at ICE, the field operations staff at CBP, and the Federal Protective Service. Such variation within components further suggests that the morale gap is isolated to particular areas within DHS that account for a large proportion of its workforce. At other components, morale is more uniformly lower across most offices. Average engagement at all work groups within FEMA is 5.8 to 17.7 percentage points lower than the non-DHS average, with the exception of two regional offices and the offices of the Administrator and Chief of Staff. The components of ST and IA also have more consistently low morale across work groups. One explanation for why morale varies across components focuses on the length of time each organization has existed. Components that existed prior to the creation of DHS may have had more time to develop successful cultures and management practices than components that policymakers created with the department in 2003. As a result, the preexisting components may have better morale today than components with less mature cultures and practices. To assess this explanation, we analyzed morale among two groups of components, divided according to whether the component was established with the creation of DHS or existed previously (see table 8). 
We considered three components to be preexisting—FLETC, USSS, and the Coast Guard—and the rest to be newly created. Because TSA was created about 2 years before DHS, we included it with components that were created with DHS. Our analysis shows that employees at the more recently created components were less engaged and satisfied on average than employees at the preexisting components and at non-DHS agencies. For the preexisting components, engagement was about 2.2 percentage points higher than at the rest of the government, and the difference in satisfaction was small (less than 1.4 percentage points). In contrast, engagement and satisfaction at the more recently created components were about 8 and 5.1 percentage points lower than at the rest of the government, respectively. We developed a statistical model to confirm whether the differences among components persist, holding constant demographic differences among their employees. In an alternative version of model 1 above, we replaced DHS with a vector of variables indicating whether the employee worked for DHS components or at an agency other than DHS. All other parts of the model were identical. The model estimates generally confirmed the differences in engagement between non-DHS and DHS component employees in the raw data (see table 9), with two exceptions. The model estimated that, holding constant demographic differences, employees in the Management Directorate and Office of the Secretary were 6.9 and 7.7 percentage points less engaged on average than employees in non-DHS agencies. This suggests that the engagement gap for employees in these offices is more similar to the gap at other offices, holding constant the demographic differences among offices measured by FEVS. The model estimated that differences in satisfaction between the components and non-DHS agencies were generally similar to such differences in engagement (see table 9). 
The fact that differences among components remained, even among demographically equivalent employees, suggests that either unmeasured demographic variables or intrinsic characteristics of the components are responsible for the differences in morale. Our analysis discussed in this appendix has a narrow scope: assessing whether demographic differences among employees explain the morale differences across DHS and non-DHS employees. Consequently, DHS or others could expand and improve upon our findings. Future work could examine whether attitudinal differences among employees at DHS and other agencies explain the overall morale gap, in addition to demographic differences. The 2011 FEVS measures employee attitudes about pay, benefits, health and safety hazards, training, supervisors, and other issues that could vary meaningfully between employees at DHS and other agencies and, therefore, explain why DHS has lower morale. One might include these factors in a decomposition similar to the one we performed in this appendix. This could further assess how factors unique to DHS and factors that are common across all agencies explain the overall morale gap. A broader attitudinal analysis likely would require the use of more sophisticated statistical methods for estimating the values of and relationships among latent variables. The broad measures of morale we analyze in this appendix, such as the OPM Employee Engagement index, are made up of responses to questions on smaller dimensions, such as leadership and supervision. To avoid simply replicating the correlations that were used to create the indexes, latent variable models could be useful to examine the relationships among these concepts and compare morale on latent scales between DHS and non-DHS agencies. This was beyond the scope of our work. 
The objectives for this report were to evaluate (1) how DHS employee morale compares with that of other federal government employees and (2) to what extent DHS and its selected components determined the root causes of employee morale and developed action plans to improve morale. To address our objectives, we evaluated both DHS-wide efforts and efforts at four selected components to address employee morale—CBP, ICE, TSA, and the Coast Guard. We selected the four DHS components based on their workforce size and how their 2011 job satisfaction and engagement index scores compare with the non-DHS average. The components selected had scores above, below, and similar to the average:

- TSA: below average on both indexes, constituting 25 percent of the DHS workforce;
- ICE: below average on both indexes, accounting for 9 percent of the DHS workforce;
- CBP: at the non-DHS average for satisfaction and below on engagement, representing 27 percent of the DHS workforce; and
- the civilian portion of the Coast Guard: at the non-DHS average for satisfaction and above on engagement, composing 4 percent of the DHS workforce.

Together these components represent 65 percent of DHS's workforce. To evaluate how DHS's employee morale compares with that of other federal government employees, we analyzed employee responses to the 2011 FEVS. We determined that the 2011 FEVS data were reliable for the purposes of our report, based on interviews with OPM staff, review and analysis of technical documentation of its design and administration, and electronic testing. We used two measures created by OPM—the employee job satisfaction and engagement indexes—to describe morale across the federal government and within DHS. We calculated these measures for various demographic groups, DHS components, and work groups, in order to compare morale at DHS and other agencies among employees who were demographically similar, in part using statistical models.
Appendix I describes our methods and findings in more detail. In addition, we interviewed employee groups about morale to identify examples of what issues may drive high and low morale within DHS. We selected the employee groups based on the size of each group within its component, ensuring we met with employees from groups that composed significant proportions of FEVS respondents, such as screeners from TSA (61 percent of TSA respondents) and homeland security investigators from ICE (33 percent of ICE respondents). The comments received from these interviews are not generalizable to entire groups of component employees, but they provide insights into the differing issues that can drive morale. To determine the extent to which DHS and the selected components identified the root causes of employee morale and developed action plans for improvements, we reviewed analysis results, interviewed agency human capital officials and representatives of employee groups, and evaluated action plans for improving morale. To identify criteria for determining effective root cause analysis using survey data, we reviewed both OPM and Partnership for Public Service guidance for action planning based on annual employee survey results. On the basis of these guidance documents, we identified factors that should be considered in employee survey analysis that attempts to understand morale problems, such as use of demographic group comparisons, benchmarking results against results at similar organizations, and linking the results of root cause analyses to action planning efforts. We evaluated documents summarizing DHS-wide and selected component root cause analyses of the 2011 FEVS to determine whether the factors we identified were included in the analyses. In addition, we interviewed DHS officials who conducted the analyses in order to fully understand root cause analysis efforts.
To identify criteria for evaluating agency action plans, we reviewed OPM guidance for using FEVS results and previous GAO work on agencies' success in measuring performance. On the basis of these guidance documents, we identified OPM's six steps that should be considered in developing action plans and identified three attributes that were relevant for measuring action plan performance: linkage, clarity, and measurable target. We compared the action plans with these criteria to determine whether these items were included in the action plans. In addition, we interviewed DHS and component officials to identify efforts to leverage best practices for improving morale. We conducted this performance audit from October 2011 through September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since 2007 DHS's Office of the Chief Human Capital Officer (OCHCO) has completed several efforts to determine root causes of morale DHS-wide.

Focus groups. In 2007 OCHCO conducted focus groups to determine employee concerns related to employee morale. DHS's focus group effort probed for insights into four areas—(1) leadership, (2) communication, (3) empowerment, and (4) resources—and highlighted concerns raised by focus group participants in each of those areas. For example, within the leadership area, OCHCO's focus group analysis found that the Customs and Immigration reorganization was a topic discussed by many of the U.S. Customs and Border Protection (CBP), U.S.
Immigration and Customs Enforcement (ICE), and Citizenship and Immigration Services (CIS) personnel, especially what they felt was a lack of mission understanding on the part of their managers. According to the analysis, non-supervisory participants expressed dissatisfaction with the combination of three types of inspection functions to present "one face at the border."

One Face at the Border. For operations at ports of entry, in September 2003 CBP issued its plan for consolidating the inspection functions formerly performed by separate inspectors from the three legacy agencies: customs inspectors from U.S. Customs, immigration inspectors and Border Patrol from the former Immigration and Naturalization Service, and the agriculture border inspectors from the Department of Agriculture's Animal and Plant Health Inspection Service. The plan, referred to as "One Face at the Border," called for unifying and integrating the legacy inspectors into two new positions: a CBP officer and a CBP agricultural specialist. The new CBP officer would serve as the frontline officer responsible for carrying out the priority anti-terrorism mission as well as the traditional customs and immigration inspection functions, while also identifying and referring goods in need of a more extensive agricultural inspection to the agricultural specialist. CBP anticipated that having a well-trained and well-integrated workforce that could carry out the complete range of inspection functions involving the processing of individuals and goods would allow it to utilize its inspection resources more effectively and enable it to better target potentially high-risk travelers. Together, CBP envisioned the result to be more effective inspections and enhanced security at ports of entry while also accelerating the processing of legitimate trade and travel.

Focus group results were distributed to DHS components for consideration in action planning efforts, according to OCHCO officials.
CBP, CIS, TSA, the Federal Emergency Management Agency (FEMA), and the Federal Law Enforcement Training Center each addressed at least one of the focus group results relating to leadership, communication, empowerment, or resources in subsequent action plans, according to OCHCO officials.

Statistical analysis. In 2008 OCHCO performed statistical analysis of Federal Employee Viewpoint Survey (FEVS) data, beyond examining high- and low-scoring questions, in an effort to determine what workplace factors drove employee job satisfaction. Specifically, the analysis involved isolating which sets of FEVS questions most affect employee job satisfaction. The analysis found that five work areas identified in FEVS questions drive employee job satisfaction: (1) performance and rewards, (2) supervisor support, (3) physical conditions and safety, (4) senior leadership effectiveness, and (5) the DHS mission. According to OCHCO officials, DHS components were encouraged to conduct follow-up discussions at the lowest possible organizational level based on component survey scores in each of the five work areas. However, OCHCO officials stated that they are not aware of any results of this effort because OCHCO did not track or follow up with the components on the effect of key driver discussions that may have occurred. In addition, increased emphasis on supervisor performance management training was implemented as a result of the analysis, according to OCHCO officials.

Exit survey. In 2011, DHS began administering an exit survey to understand why employees choose to leave their DHS positions. Specifically, according to OCHCO officials, the DHS exit survey was designed to determine where departing employees were moving both inside and outside of DHS, to identify barriers related to diversity, to identify reasons that veterans may be leaving DHS, and to capture feedback from interns.
The 2011 exit survey found, among other things, that 27 percent of departing employees who responded to the exit survey were staying within DHS or moving to a different position, and an additional 12 percent of respondents were retiring. Lack of quality supervision and advancement opportunities were the top reasons responding employees indicated for leaving their positions. Exit survey results are shared with DHS components on a quarterly and annual basis.

2011 FEVS analysis. For the 2011 FEVS, DHS’s OCHCO evaluated the results by comparing Human Capital Assessment and Accountability Framework (HCAAF) index results by component. The analysis showed where the lowest index scores were concentrated. As shown in figure 8, lower scores across the indexes were concentrated among several components, including Intelligence and Analysis, the Transportation Security Administration (TSA), ICE, the National Protection and Programs Directorate, and FEMA. The analysis also determined how DHS’s scores on the four indexes trended over time and compared with governmentwide averages. As shown in figure 9, DHS-wide scores have generally trended upward over time, but continue to lag behind governmentwide averages for each index.

Employee Engagement Executive Steering Committee (EEESC). In January 2012 the DHS Secretary directed all component heads to take steps to improve employee engagement through launch of the EEESC. According to OCHCO officials, the EEESC was launched in response to congressional concerns about DHS employee morale and the Partnership for Public Service results showing DHS’s low placement on the list of Best Places to Work.
The EEESC is charged with serving as the DHS corporate body responsible for identifying DHS-wide initiatives to improve employee engagement, overseeing the efforts of each DHS component to address employee engagement, and providing periodic reports to the Under Secretary for Management, Deputy Secretary, and Secretary on DHS-wide efforts to improve employee morale and engagement. Specifically, the Secretary made the following directives to component heads: develop and assume responsibility for employee engagement improvement plans, identify and assign specific responsibilities for improved employee engagement to component senior executive performance objectives, identify and assign a senior accountable official to serve on the EEESC, conduct town hall meetings with employees, attend a Labor-Management Forum meeting, and provide monthly reports on actions planned and progress made to the Office of the Chief Human Capital Officer. As of August 2012, each of the Secretary’s directives had been completed, with the exception of assigning responsibilities for improved employee engagement to senior executive performance objectives, which DHS plans to implement in October 2012 as part of the next senior executive performance period. The EEESC met in February 2012, and component representatives shared their latest action plans and discussed issues of joint concern. In preparation for the 2012 FEVS, the EEESC released a memorandum from the Secretary describing the responsibilities of the EEESC, highlighting department actions, and encouraging employee participation in the FEVS, which began in April 2012. The EEESC also agreed that a corresponding message should be released from component heads outlining specific component actions taken in response to past survey results and encouraging participation in the next survey.
In an April 2012 EEESC meeting, the Partnership for Public Service provided a briefing describing the Best Places to Work in the Federal Government rankings and best practices across the government for improving morale scores. The EEESC members also discussed methods for improving the response rates for the upcoming survey and engaged in an action planning exercise designed to help identify actions for department-wide deployment, according to OCHCO officials. As of August 2012, EEESC action items were in development and had not been finalized. According to OCHCO officials, the EEESC plans to decide on action items by September 2012, but a projected date for full implementation has yet to be established because the actions have not been decided upon.

In addition to the DHS-wide efforts, the components we selected for review—ICE, TSA, the U.S. Coast Guard (Coast Guard), and CBP—conducted varying levels of analyses regarding the root causes of morale issues to inform agency action planning efforts. The selected components each analyzed FEVS data to understand leading issues that may relate to morale, but the results indicated where job satisfaction problem areas may exist and do not identify the causes of dissatisfaction within employee groups. The four selected components’ 2011 FEVS analyses and results are discussed below.

TSA. In its analysis of the 2011 FEVS, TSA focused on areas of concern across groups, such as pay and performance appraisal concerns, and also looked for insight on which employee groups within TSA may be more dissatisfied with their jobs than others by comparing employee group scores on satisfaction-related questions. TSA compared its results with CBP results, as well as against DHS and governmentwide results. When comparing CBP and TSA scores, TSA found that the greatest differences in scores were on questions related to satisfaction with pay and whether performance appraisals were a fair reflection of performance.
TSA scored 40 percentage points lower on pay satisfaction and 25 percentage points lower on performance appraisal satisfaction. In comparing TSA results with DHS and governmentwide results, TSA found that TSA was below the averages for all FEVS dimensions. TSA also evaluated FEVS results across employee groups by comparing dimension scores for headquarters staff, the Federal Air Marshals, Federal Security Director staff, and the screening workforce. TSA found that the screening workforce scored at or below scores for all other groups across all of the dimensions.

ICE. In its analysis of the 2011 FEVS, ICE analyzed the results by identifying ICE’s FEVS questions with the top positive and negative responses. ICE found that its top strength was employees’ willingness to put in the extra effort to get a job done. ICE’s top negative result was employees’ perceptions that pay raises did not depend on how well employees perform their jobs. ICE also sorted the primary low-scoring results into action planning themes, such as leadership, empowerment, and work-life balance. ICE found, among other things, that employee views on the fairness of its performance appraisals were above DHS’s average but that views on employee preparation for potential security threats were lower. When comparing ICE’s results with average governmentwide figures, ICE found, among other things, that ICE was lower on all of the HCAAF indexes, including job satisfaction. According to ICE human capital officials, future root cause analysis plans for the 2012 FEVS are to benchmark FEVS scores with those of similar law enforcement agencies, such as the Drug Enforcement Administration; the Federal Bureau of Investigation; the Federal Law Enforcement Training Center; the U.S. Secret Service; the Bureau of Alcohol, Tobacco, Firearms and Explosives; and the U.S. Marshals Service.

CBP. In its analysis of the 2011 FEVS, CBP focused its analysis on trends since 2006.
For example, the analysis showed that CBP increased its scores by 5 or more percentage points for 36 of the 39 core FEVS questions. CBP highlighted its greatest increases in HCAAF areas, such as results-oriented performance, which showed a 21 percent improvement over 2006 responses to the question “My performance appraisal is a fair reflection of my performance.” The analysis also identified areas in greatest need of improvement, which showed progress since 2006 but continued low scores, such as questions on dealing with poor performers who cannot or will not improve (28 percent positive), promotions based on merit (28 percent positive), and differences in performance are recognized (34 percent positive).

Coast Guard. In its review of high and low 2011 FEVS responses, the Coast Guard identified employee responses to two questions that warranted action planning items: (1) how satisfied are you with the information you receive from management on what’s going on in your organization (53 percent positive) and (2) my training needs are assessed (51 percent positive). No additional FEVS analyses that were used to inform action planning were identified.

Appendix IV: Selected Components’ Data Sources for Evaluating Morale, Other than the Federal Employee Viewpoint Survey

Exit survey (ICE). Purpose: Identify why employees leave the agency and where they are going. Summary of results and how used: The number of exit survey respondents from ICE was too low to identify any results, and the results have not been used to address morale as of June 2012, according to ICE officials.

FOCS (ICE). Purpose: Last conducted in March 2012, the FOCS is a data-gathering tool for addressing the extent to which employees perceive their organizational culture as one that incorporates mutual respect, acceptance, teamwork, and productivity among individuals who are diverse in the dimensions of human differences.
Additionally, ICE conducts focus groups and individual one-on-one interview sessions to obtain clarifying information pertaining to the FOCS results and written comments. Summary of results and how used: The survey showed low employee perceptions of ICE as an organization where people trust and care for each other, relative to the federal average, according to ICE officials. The results from the FOCS and feedback from the focus groups and individual one-on-one interview sessions are provided to ICE program offices with recommended strategies to improve the program office’s organizational climate.

Focus groups (CBP). Purpose: Conducted in 2007, focus groups were launched in response to the 2006 annual employee survey results, which showed CBP below DHS and governmentwide averages. Summary of results and how used: The focus groups identified employees’ perceived problems in specific work environment areas, such as leaders lacking supervisory or communication skills. Among other things, the issues identified by focus group participants allowed CBP to develop action plans that addressed these issues, according to CBP officials.

Most Valuable Perspective online survey (MVP) (CBP). Purpose: Launched in 2009, this survey was implemented to solicit employee opinions on one topic per quarter as a mechanism for gathering further insights on FEVS results. The MVP was implemented as a continuation of the CBP focus groups completed in 2007. Summary of results and how used: In the July 2012 MVP, which solicited employee preferences for future CBP webcasts to employees, employees suggested retirement planning and financial management as their top two preferences. CBP’s action plan planning process in response to FEVS results includes consideration of MVP results, according to CBP officials.

U.S. Office of Personnel Management Organizational Assessment Survey (OAS) (Coast Guard). Purpose: Beginning in 2002, in order to provide the granularity, detail, and reliability needed to ensure the best organizational value, the Coast Guard adopted the OAS as its primary personnel attitude survey, according to Coast Guard officials.
The OAS is administered to military (active and reserve) and civilian personnel biennially. Summary of results and how used: OPM’s report to the Coast Guard on the 2010 OAS results identified seven strong organizational areas (diversity, teamwork, work environment, leadership and quality, communication, employee involvement, and supervision) and three areas for improvement (innovation, use of resources, and rewards/recognition). Coast Guard unit commanders and headquarters program managers use the OAS to support overall Coast Guard improvement. This improvement is achieved by feeding results of the OAS to Coast Guard unit commanders and program managers, who then use OAS results in conjunction with other information as part of routine unit and program leadership and management.

Exit survey (TSA). Purpose: Identify why employees leave the agency; launched in 2005. Summary of results and how used: Top reasons for leaving overall were personal reasons, career advancement, management, schedule, and pay. Each quarterly report includes actions managers should take to reduce turnover. A real-time reporting system is also available for each airport and office within TSA so managers can gain access to their results and use them to reduce turnover and make improvements, according to DHS officials. Results from the exit survey were also used by TSA officials in updating TSA’s action plan, according to TSA officials. However, the July 2012 action plan did not link exit survey findings to action items.

Idea Factory (TSA). Purpose: An online tool for gathering employee suggestions for agency improvement. Each week, approximately 4,000 TSA employees log on to rate, comment, or search, or to submit ideas of their own. The Idea Factory team reviews all submissions and uses Idea Factory challenges to implement solutions to issues. Summary of results and how used: Results were not available for our evaluation.
Ombudsman (TSA). Purpose: Provides informal problem resolution services with the mission of promoting fair and equitable treatment in matters involving TSA, according to TSA officials. The Ombudsman assists customers by identifying options, making referrals, explaining policies and procedures, coaching individuals on how to constructively deal with problems, facilitating dialogue, and mediating disputes. Summary of results and how used: Results were not available for our evaluation.

Employee advisory councils (TSA). Purpose: Each airport and TSA headquarters has an employee advisory council made up of elected members who work on understanding and addressing a variety of workplace issues. Summary of results and how used: Results were not available for our evaluation.

In addition to the contact named above, Dawn Locke (Assistant Director), Sandra Burrell (Assistant Director), Lydia Araya, Ben Atwater, Tracey King, Kirsten Lauber, Jean Orland, Jessica Orr, and Jeff Tessin made key contributions to this report.

DHS is the third largest cabinet-level department in the federal government, employing more than 200,000 staff in a broad range of jobs. Since it began operations in 2003, DHS employees have reported having low job satisfaction. DHS employee concerns about job satisfaction are one example of the challenges the department faces implementing its missions. GAO has designated the implementation and transformation of DHS as a high risk area, including its management of human capital, because it represents an enormous and complex undertaking that will require time to achieve in an effective and efficient manner. GAO was asked to examine: (1) how DHS's employee morale compared with that of other federal employees, and (2) the extent to which DHS and selected components have determined the root causes of employee morale, and developed action plans to improve morale.
To address these objectives, GAO analyzed survey evaluations, focus group reports, and DHS and component action planning documents, and interviewed officials from DHS and four components, selected based on workforce size, among other things. Department of Homeland Security (DHS) employees reported having lower average morale than the average for the rest of the federal government, but morale varied across components and employee groups within the department. Data from the 2011 Office of Personnel Management (OPM) Federal Employee Viewpoint Survey (FEVS)--a tool that measures employees' perceptions of whether and to what extent conditions characterizing successful organizations are present in their agencies--showed that DHS employees' job satisfaction was 4.5 percentage points lower, and their engagement 7.0 percentage points lower, than the averages for the rest of the federal government. Engagement is the extent to which employees are immersed in their work and spending extra effort on job performance. Moreover, within most demographic groups available for comparison, DHS employees scored lower on average satisfaction and engagement than the average for the rest of the federal government. For example, within most pay categories DHS employees reported lower satisfaction and engagement than non-DHS employees in the same pay groups. Levels of satisfaction and engagement varied across components, with some components reporting scores above the non-DHS averages. Several components with lower morale, such as the Transportation Security Administration (TSA) and Immigration and Customs Enforcement (ICE), made up a substantial share of FEVS respondents at DHS and accounted for a significant portion of the overall difference between the department and other agencies. In addition, components that were created with the department or shortly thereafter tended to have lower morale than components that previously existed. Job satisfaction and engagement varied within components as well.
For example, employees in TSA's Federal Security Director staff reported higher satisfaction (by 13 percentage points) and engagement (by 14 percentage points) than TSA's airport security screeners. DHS has taken steps to determine the root causes of employee morale problems and implemented corrective actions, but it could strengthen its survey analyses and metrics for action plan success. To understand morale problems, DHS and selected components took steps such as implementing an exit survey and routinely analyzing FEVS results. Components GAO selected for review--ICE, TSA, the Coast Guard, and Customs and Border Protection--conducted varying levels of analyses regarding the root causes of morale to understand leading issues that may relate to morale. DHS and the selected components planned actions to improve FEVS scores based on analyses of survey results, but GAO found that these efforts could be enhanced. Specifically, 2011 DHS-wide survey analyses did not include evaluations of demographic group differences on morale-related issues, the Coast Guard did not perform benchmarking analyses, and the extent to which DHS and its components used root cause analyses in their action planning was not evident from documentation. Without these elements, DHS risks not being able to address the underlying concerns of its varied employee population. In addition, GAO found that despite having broad performance metrics in place to track and assess DHS employee morale on an agency-wide level, DHS does not have specific metrics within the action plans that are consistently clear and measurable. As a result, DHS's ability to assess its efforts to address employee morale problems and determine if changes should be made to ensure progress toward achieving its goals is limited.
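The point that large, lower-scoring components account for much of the department-wide gap can be illustrated with a respondent-weighted average. The component names below are real, but every number is hypothetical, chosen only to show the mechanics of the decomposition:

```python
# Hypothetical illustration of how large, lower-scoring components pull a
# department-wide average down. Component names are real; all numbers are
# invented for the sketch.
components = {
    # name: (FEVS respondents, job satisfaction in percent positive)
    "TSA": (12000, 59.0),
    "ICE": (5000, 61.0),
    "Coast Guard": (4000, 70.0),
    "CBP": (9000, 66.0),
}

total_respondents = sum(n for n, _ in components.values())
dhs_average = sum(n * score for n, score in components.values()) / total_respondents
print(f"Department-wide average: {dhs_average:.1f} percent positive")

# Each component's contribution to the gap below a hypothetical 68-percent
# non-DHS benchmark, weighted by its share of respondents:
benchmark = 68.0
for name, (n, score) in components.items():
    contribution = (score - benchmark) * n / total_respondents
    print(f"{name}: {contribution:+.1f} percentage points of the overall gap")
```

Because the contributions are weighted by respondent share, a large component a few points below the benchmark can move the department-wide average more than a small component far below it.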
GAO recommends that DHS examine its root cause analysis efforts and add the following, where absent: comparisons of demographic groups, benchmarking, and linkage of root cause findings to action plans; and establish clear and measurable metrics of action plan success. DHS concurred with our recommendations.
DOD has been trying to successfully implement the working capital fund concept for over 50 years. However, Congress has repeatedly noted weaknesses in DOD’s ability to use this mechanism to effectively control costs and operate in a business-like fashion. The Secretary of Defense is authorized by 10 U.S.C. 2208 to establish working capital funds. The funds are to recover the full costs of goods and services provided, including applicable administrative expenses. The funds generally rely on sales revenue rather than direct appropriations or other funding sources to finance their operations. This revenue is then used to procure new inventory or provide services to customers. Therefore, in order to continue operations, the fund should (1) generate sufficient revenue to cover the full costs of its operations and (2) operate on a break-even basis over time, that is, not have a gain or incur a loss. In fiscal year 2001, the Defense Working Capital Fund—which consisted of the Army, Navy, Air Force, Defense-wide, and Defense Commissary Agency working capital funds—was the financial vehicle used to buy about $70 billion in defense commodities, including fuel. The Defense Energy Support Center, as a subordinate command of DLA, buys fuel from oil companies for its customers. Military customers primarily use operation and maintenance appropriations to finance these purchases. In fiscal year 2001, reported fuel sales totaled about $4.7 billion, with the Air Force being the largest customer, purchasing about $2.7 billion. Each year the Office of the Under Secretary of Defense (Comptroller) faces the challenge of estimating and establishing a per barrel price for its fuel and other fuel-related commodities that will closely approximate the actual per barrel price during budget execution, almost a year later.
The Office of the Under Secretary of Defense (Comptroller) establishes the stabilized annual price based largely upon the market price of crude oil as estimated by the Office of Management and Budget, plus a calculated estimate of the cost to refine. To this price are added other adjustments directed by Congress or DOD and a surcharge for DLA overhead and the operational costs of the Defense Energy Support Center. The services annually use these stabilized prices and their estimated fuel requirements based on activity levels (such as flying hours, steaming days, tank miles, and base operations) in developing their fuel budget requests. Figure 2 generally illustrates the process and the main organizations involved in budgeting for fuels. The stabilized annual fuel prices computed by DOD have varied over the years, largely due to volatility in the price of crude oil. For example, the stabilized annual fuel price and the Office of Management and Budget’s estimated crude oil price, on which the stabilized price was based, for fiscal years 1993 through 2003 are shown in figure 3. The stabilized fuel price for each budget year remains unchanged until the next budget year, to provide price stability during budget execution. According to DOD’s Financial Management Regulation, differences between the budget year price and actual prices occurring during the execution year should increase or decrease the next budget year’s price. However, the regulation also allows fund losses occasionally to be covered by obtaining an appropriation from Congress or by transferring funds from another DOD account. DOD is also authorized to move money out of the fund by annual appropriation acts. These acts limit the amount of funds that can be moved and the purposes for which the funds can be used.
Specifically, money can only be removed from the fund for higher priority items, based on unforeseen military requirements, than those for which originally appropriated and cannot be used for items previously denied by Congress. These acts also require the Secretary of Defense to notify Congress of transfers made under this authority. The stabilized annual fuel prices used in the services’ budget requests to Congress do not reflect the full cost of fuel because of cash movements (adjustments) and inaccurate surcharges. Therefore, the services’ budgets for fuel may be greater or less than needed and funds for other readiness needs may be adversely affected. Based on our review of Office of Management and Budget and Defense Energy Support Center methodologies, the crude and refined oil price components appeared reasonable (see app. I for details). However, in fiscal years 1993-2002, cash movements into and out of the fund (adjustments) amounting to over $4 billion, while disclosed to Congress in DOD budget documents, were used for other purposes rather than to lower or raise prices. Some of the cash was moved at the direction of Congress and some at the direction of DOD. Congress makes such decisions as part of its budget deliberations. While authorized to move funds, DOD did not provide Congress with any rationale for the movements based on the limitations in the applicable appropriations acts. Identifying the rationale for moving these funds would be helpful to DOD and congressional decisionmakers as part of the budget review process. Removing money from the fund, which could be used to reduce future fuel prices, causes future service appropriations to be higher than they otherwise would be. In addition, the estimated surcharge component of the price used in budgeting was consistently higher than actual; it did not contain all costs; and in some cases, the costs were not adequately supported. 
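The stabilized-price build-up described earlier (the OMB crude oil estimate, a calculated refining cost, congressionally or DOD-directed adjustments, and the surcharge) amounts to a simple sum. A minimal sketch, with all dollar figures illustrative rather than actual component values:

```python
# Minimal sketch of the stabilized annual fuel price as the sum of the
# components named in the text. All dollar amounts are hypothetical.

def stabilized_price_per_barrel(crude_estimate, refining_cost,
                                adjustments, surcharge):
    """Sum of the price components set by the Office of the Under
    Secretary of Defense (Comptroller)."""
    return crude_estimate + refining_cost + adjustments + surcharge

price = stabilized_price_per_barrel(
    crude_estimate=24.00,  # OMB's estimated market price of crude oil
    refining_cost=8.50,    # calculated estimate of the cost to refine
    adjustments=1.25,      # adjustments directed by Congress or DOD
    surcharge=3.40,        # DLA overhead and Defense Energy Support Center costs
)
print(f"Stabilized annual price: ${price:.2f} per barrel")
```

Because the price is fixed for the whole execution year, any error in these estimates flows through to the fund as a gain or loss rather than to customers.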
Substantial cash movements (adjustments) into and out of the fund, while disclosed to Congress in budget documents, have kept prices from reflecting the full cost of fuel and affected the development of future years’ stabilized annual fuel prices. As a result, the fuel-related portion of the services’ operation and maintenance budgets was about $2.5 billion too high across 5 fiscal years and about $1.5 billion too low in another. The cash taken out of the fund went for the services’ operation and maintenance and other nonfuel-related expenses. Further, Congress provided a $1.56 billion emergency supplemental appropriation in fiscal year 2000 to help offset a loss due to a worldwide increase in crude oil prices. This was necessary because DOD had established a stabilized price of $26.04 per barrel, but the actual cost that year was $48.58 per barrel. This appropriation allowed DOD to avoid recovering the loss through a price increase. Figure 4 shows the various fuel-related cash movements during fiscal years 1993 through 2002. Table 1 shows the various cash movements out of the working capital fund from fiscal years 1993 through 2002. In total, about $2.5 billion of fuel-generated funds was removed from the fund. Of this amount, $0.5 billion was used to pay for specific nonfuel-related expenses such as the Counter Drug Effort. The remaining $2.0 billion was used to meet the services’ other operation and maintenance needs. In reviewing these cash movements, we noted that DOD had notified Congress. However, when doing so, DOD did not provide rationale for the cash movements based on the law, which stipulates that the authority for such movements may not be used unless for higher priority items, based on unforeseen military requirements, and where the item for which the funds are requested has not been previously denied by Congress.
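The dollar figures above, and the per-barrel effects shown in table 2, follow from two simple relationships. The sketch below uses prices and cash amounts from the report, but the annual sales volumes are inferred from those figures and should be treated as assumptions:

```python
# Back-of-the-envelope sketches of the cash-movement arithmetic. Prices and
# cash amounts come from the report; the barrel volumes are inferred and
# should be treated as assumptions.

def price_effect_per_barrel(cash_moved, barrels_sold):
    """Per-barrel change in a future stabilized price needed to return
    (or recover) a cash movement through fuel prices."""
    return cash_moved / barrels_sold

def fund_loss(stabilized_price, actual_price, barrels_sold):
    """Loss absorbed by the fund when the actual cost per barrel exceeds
    the stabilized price charged to customers."""
    return (actual_price - stabilized_price) * barrels_sold

# $800 million removed in fiscal year 2001, spread over an assumed
# 110 million barrels of annual sales, gives roughly the $7.27-per-barrel
# effect on the fiscal year 2003 price.
print(f"${price_effect_per_barrel(800e6, 110e6):.2f} per barrel")

# Fiscal year 2000: stabilized price $26.04, actual cost $48.58 per barrel.
# An assumed 69 million barrels yields a loss near the $1.56 billion
# emergency supplemental appropriation.
print(f"${fund_loss(26.04, 48.58, 69.2e6) / 1e9:.2f} billion")
```

The same arithmetic works in both directions: cash left in the fund could lower a future year's price by the cash amount divided by expected sales, and cash removed forces the price, and hence the services' appropriations, that much higher.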
As a good management practice, such rationale, along with other information, such as the impact on future prices, would serve to provide more visibility to cash movements. In fact, in one instance, the Senate Appropriations Committee disallowed the $125-million request created when DOD moved these funds from the Defense-wide Working Capital Fund to cover Air Force Working Capital Fund losses. The Senate Appropriations Committee Report on the Department of Defense Appropriation Bill, 2002 and Supplemental Appropriations, 2002, stated that it could not support such a cash movement because it was inconsistent with DOD’s existing policies for recovering working capital fund losses. As a result, the committee reduced the appropriation to DOD’s working capital fund by that amount. Table 2 shows the effect of these cash movements on the stabilized annual fuel price if they had been used to lower or raise future year prices. Cash removed in 5 years caused the services’ fuel budgets to be about $2.5 billion higher than necessary because the prices could have been lowered. For example, $800 million removed in fiscal year 2001 caused the stabilized price in fiscal year 2003 to be $7.27 per barrel higher than necessary. As a result, the services’ fiscal year 2003 fuel budgets were overstated by $800 million. However, in fiscal year 2000, a $1.43 billion net cash movement into the fund caused the fiscal year 2002 stabilized price to be $12.99 per barrel lower than necessary to recover the full cost. As a result, the services’ fiscal year 2002 budgets were understated by $1.43 billion. While military service comptroller officials responsible for managing fuel costs for each service stated that they were aware that DOD sets the stabilized annual fuel price that they must use in the budget process, they believed any gains in 1 year were being used to lower future fuel prices. 
These officials were not aware that funds generated from fuel sales in 1 year were being used to pay for nonfuel-related DOD needs. In their view, lower prices would have allowed them to use more of their operation and maintenance funds for other priorities. The estimated surcharge portion of the price supporting budget requests has not accurately accounted for fuel-related costs consistent with DOD’s Financial Management Regulation. The surcharges were consistently higher than actual costs but did not include all costs. Furthermore, some costs were not adequately supported. These problems were due to deficient methodologies and record-keeping. As a result, the stabilized annual prices and resulting services’ budgets were inaccurate. Consistent surcharge overstatements caused the stabilized annual price of fuel to be higher than necessary and cost customers on average about $99 million annually from fiscal years 1993 through 2001. Our analysis of the surcharge costs shows that the estimated obligations exceeded actual obligations for every year from fiscal years 1993 through 2001 except for fiscal year 1999, as shown in table 3 below. We recognize that variances will occur between estimated and actual surcharge obligations. Differences, however, should be assessed annually and appropriate adjustments made to the next year’s surcharge. We found that no adjustments for these overcharges, as required by DOD’s Financial Management Regulation, were made in fiscal years 1994 through 2001. After we brought this to DOD’s attention, adjustments were made when computing the fuel price for fiscal years 2002 and 2003. The surcharges, however, did not include all required costs. Inventory losses were not included in the surcharge as required by DOD’s Financial Management Regulation. For fiscal years 1993 through 2000, these losses ranged from $12.0 million to $27.5 million a year. Adding these losses would have increased surcharges by about 9 to 23 cents per barrel.
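The annual surcharge recomputation that DOD's Financial Management Regulation calls for can be sketched as follows. All figures are hypothetical: the inventory-loss and over-recovery amounts are drawn from the ranges cited above, while the estimated costs and sales volume are assumptions:

```python
# Sketch of the surcharge recomputation described in the text: recover
# estimated costs plus inventory losses, adjusted for the prior year's
# over- or under-recovery. All figures are hypothetical.

def next_year_surcharge(estimated_costs, inventory_losses,
                        prior_year_overrecovery, barrels_sold):
    """Per-barrel surcharge for the next budget year. A positive
    prior_year_overrecovery means last year's surcharge collected more
    than actual obligations, so that amount is credited back to customers."""
    recoverable = estimated_costs + inventory_losses - prior_year_overrecovery
    return recoverable / barrels_sold

surcharge = next_year_surcharge(
    estimated_costs=600e6,         # Defense Energy Support Center operations
                                   # and DLA overhead (assumed)
    inventory_losses=20e6,         # within the $12.0M-$27.5M range reported
    prior_year_overrecovery=99e6,  # average annual overcharge cited
    barrels_sold=120e6,            # assumed annual sales volume
)
print(f"Surcharge: ${surcharge:.2f} per barrel")
```

Omitting either adjustment biases the surcharge: skipping the over-recovery credit keeps prices too high, while skipping inventory losses keeps them too low, which is the pattern the report describes.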
While officials stated that inventory losses were a factor in determining the number of barrels to be purchased, this practice does not comply with DOD’s regulation, which stipulates that inventory losses should be included in the surcharge. Our analysis of the estimated surcharge components disclosed that support for some costs was inadequate. We found that DLA had inadequate support for its $40-million annual headquarters overhead charge that is passed on to the Defense Energy Support Center. This amount equated to over 5 percent of the fiscal year 2002 and 7 percent of the fiscal year 2003 surcharges. While DLA has a methodology for allocating its overhead costs to the affected business activities, we could not verify or validate the portion that was assessed to the center. As a result, we could not determine whether the Defense Energy Support Center was charged the appropriate amount. This is of particular concern because, in the most recent budget submission for fiscal year 2003, DLA requested a $16.9 million increase in its overhead charges to the center. The Office of the Under Secretary of Defense (Comptroller) refused to grant the increase because it did not believe the increase was merited. Furthermore, the Defense Energy Support Center could not provide support for the $342 million terminal operations component cost for fiscal years 1997 and 1998. There was also about a $2 million difference between supporting documentation and the budgeted amount for depreciation in fiscal year 2001. The Defense Energy Support Center could not support any of the component costs prior to fiscal year 1997. According to officials, this documentation was not maintained during the move to their current location.

Fuel prices have not reflected full costs. Fund cash balances have been used by Congress, and to a lesser extent DOD, to meet other budget priorities.
Given the volatility in crude oil prices, these cash balances are DOD’s primary means of annually dealing with drastic increases and decreases in fuel costs. Furthermore, DOD has removed cash from the fund without providing Congress with a rationale based on appropriation act language. In one recent instance, Congress reversed one of DOD’s cash movement decisions. DOD also has not calculated surcharges consistent with the governing financial management regulation. To improve the overall accuracy of DOD’s fuel pricing practices, we recommend that the Secretary of Defense direct DOD’s comptroller to: Provide a rationale to Congress, consistent with language in the applicable appropriations act, to support the movement of funds from the working capital fund and to identify the effect on future prices. Require DLA and the Defense Energy Support Center to develop and maintain sound methodologies that fully account for the surcharge costs consistent with DOD’s Financial Management Regulation and maintain adequate records to support the basis for all surcharge costs included in the stabilized annual fuel price. DOD generally concurred with the recommendations, but provided explanatory comments on each one. With regard to our recommendation that it provide Congress the rationale for cash movements, DOD stated that information is already being provided through formal and informal means that it believes are sufficient to report why cash was moved. We recognize this may be occurring; however, we believe that to improve visibility of fund operations, it is reasonable to provide a formal record of the rationale to fully disclose and account for each cash movement. Such a formal record does not exist; therefore, we continue to believe our recommendation is appropriate. In concurring with the recommendation to maintain adequate records, DOD expressed concern about how long to retain them and proposed 5 years. 
We believe DOD’s proposal represents a reasonable timeframe consistent with our recommendation. In its cover letter conveying the recommendations, DOD stated that our report overlooks the fact that while covering gains or losses to the fund by either decreasing or increasing fuel prices the next year is a basic principle, it is not often practical to rely exclusively on this principle when establishing such prices because of transfers into and out of the fund. We disagree. While our report points out that under the working capital fund concept fuel prices should cover gains and losses, it also acknowledges that there have been numerous transfers. Our point is that to ensure fund accountability when such transfers occur, DOD’s fuel pricing practices should include providing Congress a full disclosure of the rationale for the transfer and its impact on the price. Otherwise, the ability of the working capital fund to effectively control and account for costs of goods and services is compromised. DOD’s comments are printed in appendix II. DOD also provided technical comments, which we have incorporated as appropriate. We performed our review in accordance with generally accepted government auditing standards. Further details on our scope and methodology can be found in appendix I. We are sending copies of this report to the Senate Committee on Governmental Affairs; House Committee on Government Reform; Senate and House Committees on the Budget; and other interested congressional committees; the Secretary of Defense; and the Director, Defense Logistics Agency. Copies will also be made available to others upon request. In addition, the report will be available at no cost on the GAO Web site at http://www.gao.gov. If you or your staff have questions concerning this report, please contact us at (202) 512-8412. Staff acknowledgements are listed in appendix III.
In assessing the accuracy of DOD’s stabilized annual fuel prices from fiscal years 1993-2003, we reviewed each of the four components—crude oil cost estimates, cost to refine, adjustments, and surcharges—and identified the major offices, DOD organizations, and other components involved in pricing. For the crude oil cost estimate component, we reviewed the Office of Management and Budget’s methodology for estimating crude oil prices. We discussed the Office of Management and Budget’s methodology with the analyst that prepares the forecasted crude oil prices. We also reviewed the Office of Management and Budget’s use of West Texas Intermediate crude oil futures prices and the historical relationships between those prices and domestic, imported, and composite crude oil prices in making crude oil price forecasts. We concluded that this approach was reasonable. For the cost to refine component, we reviewed the Defense Energy Support Center’s methodology for calculating refined costs. In assessing the Defense Energy Support Center’s methodology, we relied on our previous analysis of its regression equation and a suggested change that was adopted. This same methodology was being used as of May 2002 and remains reasonable. For the third component of fuel pricing—adjustments—we discussed and examined Office of the Under Secretary of Defense (Comptroller) documents related to stabilized annual fuel prices and applicable Program Budget Decisions to determine what costs were included in the component. To determine criteria, we reviewed the applicable portions of DOD’s Financial Management Regulation and the legislative history pertaining to the creation of revolving funds since 1949. To identify any fuel-related cash movements into or out of the working capital fund that occurred and might have affected adjustments, we interviewed various DOD officials and obtained and reviewed the applicable appropriations acts and the committee and conference reports on those acts. 
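The "historical relationships" approach described in this methodology can be sketched as an ordinary least-squares fit of composite crude acquisition cost against WTI futures prices. This is only a minimal illustration of the technique, not OMB's or the Defense Energy Support Center's actual regression model, and the price points below are invented for the example.

```python
import numpy as np

# Hedged sketch: fit composite crude cost as a linear function of WTI
# futures prices. All data points are illustrative, not report figures.

wti_futures = np.array([18.5, 20.1, 22.3, 25.7, 28.0])  # $/barrel (illustrative)
composite   = np.array([16.9, 18.2, 20.5, 23.6, 25.8])  # $/barrel (illustrative)

# Least-squares fit: composite ~ slope * wti + intercept
slope, intercept = np.polyfit(wti_futures, composite, 1)

def forecast_composite(wti_price):
    """Forecast composite crude acquisition cost from a WTI futures price."""
    return slope * wti_price + intercept
```

An OLS line always passes through the sample means, so the forecast at the mean WTI price equals the mean composite cost, which gives a quick sanity check on the fit.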
We analyzed the results, developed a methodology for determining the effect, and discussed our conclusions with various DOD program and budget officials. Finally, for the fourth component of fuel pricing—surcharges—we obtained, reviewed and discussed DLA and Defense Energy Support Center methodologies and documentation used in computing the estimated and actual surcharge costs. To identify criteria for what surcharge costs should include, we obtained and reviewed DOD’s Financial Management Regulation and any other policies and procedures governing or affecting fuel pricing. To determine whether the support for the surcharge costs was adequate, we requested, reviewed, and analyzed pertinent documentation and records supporting budgeted and actual obligations for each surcharge element for fiscal years 1993-2003. However, officials were unable to provide support for estimated surcharge costs from fiscal years 1993-1996 and were unable to provide support for several actual costs for fiscal years 1993 and 1994. We met with and/or contacted various program and budget officials within the Office of the Secretary of Defense; Office of Management and Budget; DLA Headquarters; Defense Energy Support Center; and the various military services. We performed our work from June 2001 to April 2002 in accordance with generally accepted government auditing standards. As part of our review, we examined DOD’s Financial Management Regulation to ensure that it incorporated the Statement of Federal Financial Accounting Standards (SFFAS) No. 4 “Managerial Cost Accounting Standards” (Feb. 28, 1997). We did not independently verify DOD’s financial information used in this report. Prior GAO and Department of Defense Inspector General audit reports and Federal Manager’s Financial Integrity Act reports have identified inadequacies in the fund’s accounting and reporting. As discussed in our report on the results of our review of the fiscal year 2001 Financial Report of the U.S. 
Government, DOD’s financial management deficiencies, taken together, continue to represent the single largest obstacle to achieving an unqualified opinion on the U.S. government’s consolidated financial statements. In addition to those named above, Bob Coleman, Jane Hunt, Patricia Lentini, Charles Perdue, Greg Pugnetti, Chris Rice, Gina Ruidera, Malvern Saavedra, and John Van Schaik made key contributions to this report.

Summary: The Department of Defense (DOD) Defense Working Capital Fund was used to buy $70 billion in commodities in fiscal year 2001. This amount is estimated to grow to $75 billion for fiscal year 2003. The department's financial management regulation states that fund activities will operate in a business-like fashion and incorporate full costs in determining the pricing of their products. The National Defense Authorization Act for Fiscal Year 2001 requires that GAO review the working capital fund activities to identify any potential changes in current management processes or policies that would result in a more efficient and economical operation. The act also requires that GAO review the Defense Logistics Agency's (DLA) efficiency, effectiveness, and flexibility of operational practices and identify ways to improve services. One such DLA activity, the Defense Energy Support Center, sold $4.7 billion of various petroleum-related products to the military services in fiscal year 2001. DOD's fuel prices have not reflected the full cost of fuel as envisioned in the working capital fund concept because cash movements to the fund balance and surcharge inaccuracies have affected the stabilized annual fuel prices. Over $4 billion was moved into and out of the working capital fund from fiscal year 1993 to 2002. These adjustments affected the extent to which subsequent years' prices reflected the full cost of fuel. In addition, the surcharges did not accurately account for fuel-related costs as required by DOD's Financial Management Regulation.
From May 2003 through June 2004, the CPA, led by the United States and the United Kingdom, was the UN-recognized coalition authority responsible for the temporary governance of Iraq and for overseeing, directing, and coordinating the reconstruction effort. In May 2003, the CPA dissolved the military organizations of the former regime and began the process of creating or reestablishing new Iraqi security forces, including the police and a new Iraqi army. Over time, multinational force commanders assumed responsibility for recruiting and training some Iraqi defense and police forces in their areas of responsibility. In May 2004, the President issued a National Security Presidential Directive, which stated that, after the transition of power to the Iraqi government, the Department of State (State), through its ambassador to Iraq, would be responsible for all U.S. activities in Iraq except for security and military operations. U.S. activities relating to security and military operations would be the responsibility of the Department of Defense (DOD). The Presidential Directive required the U.S. Central Command (CENTCOM) to direct all U.S. government efforts to organize, equip, and train Iraqi security forces. The Multi-National Security Transition Command-Iraq, which operates under Multi-National Force-Iraq (MNF-I), now leads coalition efforts to train, equip, and organize Iraqi security forces. Other U.S. government agencies also play significant roles in the reconstruction effort. The U.S. Agency for International Development (USAID) is responsible for projects to restore Iraq’s infrastructure, support healthcare and education initiatives, expand economic opportunities for Iraqis, and foster improved governance. The U.S. Army Corps of Engineers provides engineering and technical services to USAID, State, and military forces in Iraq. 
In December 2005, the responsibilities of the Project Contracting Office (PCO), a temporary organization responsible for program, project, asset, and financial management of construction and nonconstruction activities, were merged with those of the U.S. Army Corps of Engineers Gulf Region Division. On June 28, 2004, the CPA transferred power to an interim sovereign Iraqi government, the CPA was officially dissolved, and Iraq’s transitional period began. Under Iraq’s transitional law, the transitional period included the completion of a draft constitution in October 2005 and two subsequent elections—a referendum on the constitution and an election for a permanent government. The Iraqi people approved the constitution on October 15, 2005, and voted for representatives to the Iraq Council of Representatives on December 15, 2005. As of February 3, 2006, the Independent Electoral Commission of Iraq had not certified the election results for representatives. Once certified, the representatives are to form a permanent government. According to U.S. officials and Iraqi constitutional experts, the new Iraqi government is likely to confront the same issues it confronted prior to the referendum—the power of the central government, control of Iraq’s natural resources, and the application of Islamic law. According to U.S. officials, once the Iraqi legislature commences work, it will form a committee that has 4 months to recommend amendments to the constitution. To take effect, these proposed amendments must be approved by the Iraqi legislature and then Iraqi citizens must vote on them in a referendum within 2 months. The United States faces three key challenges in stabilizing and rebuilding Iraq. First, the unstable security environment and the continuing strength of the insurgency have made it difficult for the United States to transfer security responsibilities to Iraqi forces and to engage in rebuilding efforts. 
Second, inadequate performance data and measures make it difficult to determine the overall progress and impact of U.S. reconstruction efforts. Third, the U.S. reconstruction program has encountered difficulties with Iraq’s inability to sustain new and rehabilitated infrastructure projects and to address maintenance needs in the water, sanitation, and electricity sectors. U.S. agencies are working to develop better performance data and plans for sustaining rehabilitated infrastructure. Over the past 2½ years, significant increases in attacks against the coalition and coalition partners have made it difficult to transfer security responsibilities to Iraqi forces and to engage in rebuilding efforts in Iraq. The insurgency in Iraq intensified through October 2005 and has remained strong since then. Poor security conditions have delayed the transfer of security responsibilities to Iraqi forces and the drawdown of U.S. forces in Iraq. The unstable security environment has also affected the cost and schedule of rebuilding efforts and has led, in part, to project delays and increased costs for security services. Recently, the administration has taken actions to integrate military and civilian rebuilding and stabilization efforts. The insurgency intensified through October 2005 and has remained strong since then. As we reported in March 2005, the insurgency in Iraq—particularly the Sunni insurgency—grew in complexity, intensity, and lethality from June 2003 through early 2005. According to a February 2006 testimony by the Director of National Intelligence, insurgents are using increasingly lethal improvised explosive devices and continue to adapt to coalition countermeasures. As shown in figure 1, enemy-initiated attacks against the coalition, its Iraqi partners, and infrastructure increased in number over time. The highest peak occurred during October 2005, around the time of Ramadan and the October referendum on Iraq’s constitution.
This followed earlier peaks in August and November 2004 and January 2005. According to a senior U.S. military officer, attack levels ebb and flow as the various insurgent groups—almost all of which are an intrinsic part of Iraq’s population— rearm and attack again. As the administration has reported, insurgents share the goal of expelling the coalition from Iraq and destabilizing the Iraqi government to pursue their individual and, at times, conflicting goals. Iraqi Sunnis make up the largest portion of the insurgency and present the most significant threat to stability in Iraq. In February 2006, the Director of National Intelligence reported that the Iraqi Sunnis’ disaffection is likely to remain high in 2006, even if a broad, inclusive national government emerges. These insurgents continue to demonstrate the ability to recruit, supply, and attack coalition and Iraqi security forces. Their leaders continue to exploit Islamic themes, nationalism, and personal grievances to fuel opposition to the government and recruit more fighters. According to the Director, the most extreme Sunni jihadists, such as al-Qaeda in Iraq, will remain unreconciled and continue to attack Iraqi and coalition forces. The remainder of the insurgency consists of radical Shia groups, some of whom are supported by Iran, violent extremists, criminals, and, to a lesser degree, foreign fighters. According to the Director of National Intelligence, Iran provides guidance and training to select Iraqi Shia political groups and weapons and training to Shia militant groups to enable anticoalition attacks. Iran also has contributed to the increasing lethality of anticoalition attacks by enabling Shia militants to build improvised explosive devices with explosively formed projectiles, similar to those developed by Iran and Lebanese Hizballah. 
The continuing strength of the insurgency has made it difficult for the multinational force to develop effective and loyal Iraqi security forces, transfer security responsibilities to them, and progressively draw down U.S. forces in Iraq. The Secretary of Defense and MNF-I recently reported progress in developing Iraqi security forces, saying that these forces continue to grow in number, take on more responsibilities, and increase their lead in counterinsurgency operations in some parts of Iraq. For example, in December 2005 and January 2006, MNF-I reported that Iraqi army battalions and brigades had assumed control of battle space in parts of Ninewa, Qadisiyah, Babil, and Wasit provinces. According to the Director for National Intelligence, Iraqi security forces are taking on more- demanding missions, making incremental progress toward operational independence, and becoming more capable of providing security. In the meantime, coalition forces continue to support and assist the majority of Iraqi security forces as they develop the capability to operate independently. However, recent reports have recognized limitations in the effectiveness of Iraqi security forces. For example, DOD’s October 2005 report notes that Iraqi forces will not be able to operate independently for some time because they need logistical capabilities, ministry capacity, and command and control and intelligence structures. In the November 2005 National Strategy for Victory in Iraq, the administration cited a number of challenges to developing effective Iraqi security forces, including the need to guard against infiltration by elements whose first loyalties are to institutions other than the Iraqi government and to address the militias and armed groups that are outside the formal security sector and government control. 
Moreover, according to the Director of National Intelligence’s February 2006 report, Iraqi security forces are experiencing difficulty in managing ethnic and sectarian divisions among their units and personnel. GAO’s classified report on Iraq’s security situation provided further information and analysis on the challenges to developing Iraqi security forces and the conditions for the phased drawdown of U.S. and other coalition forces. The security situation in Iraq has affected the cost and schedule of reconstruction efforts. Security conditions have, in part, led to project delays and increased costs for security services. Although it is difficult to quantify the costs and delays resulting from poor security conditions, both agency and contractor officials acknowledged that security costs have diverted a considerable amount of reconstruction resources and have led to canceling or reducing the scope of some reconstruction projects. For example, in March 2005, USAID cancelled two task orders related to power generation that totaled nearly $15 million to help pay for the increased security costs incurred at another power generation project in southern Baghdad. In another example, work was suspended at a sewer repair project in central Iraq for 4 months in 2004 due to security concerns. In January 2006, State reported that direct and indirect security costs represent 16 to 22 percent of the overall cost of major infrastructure reconstruction projects. In addition, the security environment in Iraq has led to severe restrictions on the movement of civilian staff around the country and reductions of a U.S. presence at reconstruction sites, according to U.S. agency officials and contractors. For example, the Project Contracting Office reported in February 2006 that the number of attacks on convoys and casualties had increased from 20 convoys attacked and 11 casualties in October 2005 to 33 convoys attacked and 34 casualties in January 2006.
In another example, work at a wastewater plant in central Iraq was halted for approximately 2 months in early 2005 because insurgent threats drove away subcontractors and made the work too hazardous to perform. In the assistance provided to support the electoral process, U.S.-funded grantees and contractors also faced security restrictions that hampered their movements and limited the scope of their work. For example, IFES was not able to send its advisors to most of the governorate-level elections administration offices, which hampered training and operations at those facilities leading up to Iraq’s Election Day on January 30, 2005. While poor security conditions have slowed reconstruction and increased costs, a variety of management challenges also have adversely affected the implementation of the U.S. reconstruction program. In September 2005, we reported that management challenges such as low initial cost estimates and delays in funding and awarding task orders have led to the reduced scope of the water and sanitation program and delays in starting projects. In addition, U.S. agency and contractor officials have cited difficulties in initially defining project scope, schedule, and cost, as well as concerns with project execution, as further impeding progress and increasing program costs. These difficulties include lack of agreement among U.S. agencies, contractors, and Iraqi authorities; high staff turnover; an inflationary environment that makes it difficult to submit accurate pricing; unanticipated project site conditions; and uncertain ownership of project sites. Our ongoing work on Iraq’s energy sectors and the management of design- build contracts will provide additional information on the issues that have affected the pace and costs of reconstruction. The Administration has taken steps to develop a more comprehensive, integrated approach to combating the insurgency and stabilizing Iraq. 
The National Strategy for Victory in Iraq lays out an integrated political, military, and economic strategy that goes beyond offensive military operations and the development of Iraqi security forces in combating the insurgency. Specifically, it calls for cooperation with and support for local governmental institutions, the prompt dispersal of aid for quick and visible reconstruction, and central government authorities who pay attention to local needs. Toward that end, U.S. agencies are developing tools for integrating political, economic, and security activities in the field. For example, USAID is developing the Focused Stabilization Strategic City Initiative that will fund social and economic stabilization activities in communities within 10 strategic cities. The program is intended to jump-start the development of effective local government service delivery by directing local energies from insurgency activities toward productive economic and social opportunities. The U.S. embassy in Baghdad and MNF-I are also developing provincial assistance teams as a component of an integrated counterinsurgency strategy. These teams would consist of coalition military and civilian personnel who would assist Iraq’s provincial governments with (1) developing a transparent and sustained capability to govern; (2) promoting increased security, rule of law, and political and economic development; and (3) providing the provincial administration necessary to meet the basic needs of the population. It is unclear whether these two efforts will become fully operational, as program documents have noted problems in providing funding and security for them. State has set broad goals for providing essential services, and the U.S. program has undertaken many rebuilding activities in Iraq. The U.S. 
program has made some progress in accomplishing rebuilding activities, such as rehabilitating some oil facilities to restart Iraq’s oil production, increasing electrical generation capacity, restoring some water treatment plants, and building Iraqi health clinics. However, limited performance data and measures make it difficult to determine and report on the progress and impact of U.S. reconstruction. Although information is difficult to obtain in an unstable security environment, State reported that it is currently finalizing a set of metrics to track the impact of reconstruction efforts. In the water and sanitation sector, the Department of State has primarily reported on the numbers of projects completed and the expected capacity of reconstructed treatment plants. However, we found that the data are incomplete and do not provide information on the scope and cost of individual projects nor do they indicate how much clean water is reaching intended users as a result of these projects. Moreover, reporting only the number of projects completed or under way provides little information on how U.S. efforts are improving the amount and quality of water reaching Iraqi households or their access to sanitation services. Information on access to water and its quality is difficult to obtain without adequate security or water-metering facilities. Limitations in health sector measurements also make it difficult to relate the progress of U.S. activities to its overall effort to improve the quality and access of health care in Iraq. Department of State measurements of progress in the health sector primarily track the number of completed facilities, an indicator of increased access to health care. However, the data available do not indicate the adequacy of equipment levels, staffing levels, or quality of care provided to the Iraqi population. Monitoring the staffing, training, and equipment levels at health facilities may help gauge the effectiveness of the U.S. 
reconstruction program and its impact on the Iraqi people. In the electricity sector, U.S. agencies have primarily reported on generation measures such as levels of added or restored generation capacity and daily power generation of electricity; numbers of projects completed; and average daily hours of power. However, these data do not show (1) whether the power generated is uninterrupted for the period specified (e.g., average number of hours per day); (2) whether there are regional or geographic differences in the quantity of power generated; or (3) how much power is reaching intended users. Information on the distribution and access of electricity is difficult to obtain without adequate security or accurate metering capabilities. Opinion surveys and additional outcome measures have the potential to gauge the impact of the U.S. reconstruction efforts on the lives of Iraqi people and their satisfaction with these sectors. A USAID survey in 2005 found that the Iraqi people were generally unhappy with the quality of their water supply, waste disposal, and electricity services but approved of the primary health care services they received. In September 2005, we recommended that the Secretary of State address this issue of measuring progress and impact in the water and sanitation sector. State agreed with our recommendation and stated in January 2006 that it is currently finalizing a set of standard methodologies and metrics for water and other sectors that could be used to track the impact of U.S. reconstruction efforts. The U.S. reconstruction program has encountered difficulties with Iraq’s ability to sustain the new and rehabilitated infrastructure and address maintenance needs. In the water, sanitation, and electricity sectors, in particular, some projects have been completed but have sustained damage or become inoperable due to Iraq’s problems in maintaining or properly operating them.
State reported in January 2006 that several efforts were under way to improve Iraq’s ability to sustain the infrastructure rebuilt by the United States. In the water and sanitation sector, U.S. agencies have identified limitations in Iraq’s capacity to maintain and operate reconstructed facilities, including problems with staffing, unreliable power to run treatment plants, insufficient spare parts, and poor operations and maintenance procedures. The U.S. embassy in Baghdad stated that it was moving from the previous model of building and turning over projects to Iraqi management toward a “build-train-turnover” system to protect the U.S. investment. However, these efforts are just beginning, and it is unclear whether the Iraqis will be able to maintain and operate completed projects and the more than $1 billion in additional large-scale water and sanitation projects expected to be completed through 2008. In September 2005, we recommended that the Secretary of State address the issue of sustainability in the water and sanitation sector. State agreed with our recommendation and stated that it is currently working with the Iraqi government to assess the additional resources needed to operate and maintain water and sanitation facilities that have been constructed or repaired by the United States. In the electricity sector, the Iraqis’ capacity to operate and maintain the power plant infrastructure and equipment provided by the United States remains a challenge at both the plant and ministry levels. As a result, the infrastructure and equipment remain at risk of damage following their transfer to the Iraqis. In our interviews with Iraqi power plant officials from 13 locations throughout Iraq, the officials stated that their training did not adequately prepare them to operate and maintain the new U.S.- provided gas turbine engines. Due to limited access to natural gas, some Iraqi power plants are using low-grade oil to fuel their natural gas combustion engines. 
The use of oil-based fuels, without adequate equipment modification and fuel treatment, decreases the power output of the turbines by up to 50 percent, requires three times more maintenance, and could result in equipment failure and damage that significantly reduces the life of the equipment, according to U.S. and Iraqi power plant officials. U.S. officials have acknowledged that more needs to be done to train plant operators and ensure that advisory services are provided after the turnover date. In January 2006, State reported that it has developed a strategy with the Ministry of Electricity to focus on rehabilitation and sustainment of electricity assets. Although agencies have incorporated some training programs and the development of operations and maintenance capacity into individual projects, problems with the turnover of completed projects, such as those in the water and sanitation and electricity sectors, have led to a greater interagency focus on improving project sustainability and building ministry capacity. In May 2005, an interagency working group including State, USAID, PCO, and the Army Corps of Engineers was formed to identify ways to address Iraq’s capacity-development needs. The working group reported that a number of critical infrastructure facilities constructed or rehabilitated under U.S. funding have failed, will fail, or will operate in suboptimized conditions following handover to the Iraqis. To mitigate the potential for project failures, the working group recommended increasing the period of operational support for constructed facilities from 90 days to up to 1 year. In January 2006, State reported that it has several efforts under way focused on improving Iraq’s ability to operate and maintain facilities over time. As part of our ongoing review of Iraq’s energy sector, we will be assessing the extent to which the administration is providing funds to sustain the infrastructure facilities constructed or rehabilitated by the United States. 
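The output and maintenance penalties described above for burning untreated oil-based fuel can be expressed as a simple derating calculation. In this hedged sketch, the 50 percent output reduction and threefold maintenance increase come from the figures cited by U.S. and Iraqi power plant officials, while the 100 MW unit rating and $2 million baseline maintenance cost are purely illustrative assumptions.

```python
# Hedged sketch of the derating effect of running a gas turbine on
# low-grade oil without equipment modification and fuel treatment.
# Derate and maintenance factors are from the report; the nameplate
# rating and baseline maintenance cost are assumptions.

NAMEPLATE_MW = 100.0     # assumed nameplate capacity of one unit
BASE_MAINT_COST = 2.0e6  # assumed annual maintenance cost on natural gas

def derated_output(nameplate_mw, derate_fraction=0.50):
    """Effective output (MW) when burning untreated oil-based fuel."""
    return nameplate_mw * (1.0 - derate_fraction)

def maintenance_cost(base_cost, multiplier=3.0):
    """Annual maintenance cost under oil-based fuel operation."""
    return base_cost * multiplier

print(derated_output(NAMEPLATE_MW))       # -> 50.0 (MW effective)
print(maintenance_cost(BASE_MAINT_COST))  # -> 6000000.0 (dollars)
```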
As the new Iraqi government forms, it must plan to secure the financial resources it will need to continue the reconstruction and stabilization efforts begun by the United States and international community. Initial assessments in 2003 identified $56 billion in reconstruction needs across a variety of sectors in Iraq. However, Iraq’s needs are greater than originally anticipated due to severely degraded infrastructure, post-conflict looting and sabotage, and additional security costs. The United States has borne the primary financial responsibility for rebuilding and stabilizing Iraq; however, its commitments are largely obligated and remaining commitments and future contributions are not finalized. Further, U.S. appropriations were never intended to meet all Iraqi needs. International donors have provided a lesser amount of funding for reconstruction and development activities; however, most of the pledged amount is in the form of loans that Iraq has just begun to access. Finally, Iraq’s ability to contribute financially to its additional rebuilding and stabilization needs is dependent upon the new government’s efforts to increase revenues obtained from crude oil exports, reduce energy and food subsidies, control government operating expenses, provide for a growing security force, and repay external debt and war reparations. Initial assessments of Iraq’s needs through 2007 by the U.N., World Bank, and the CPA estimated that the reconstruction of Iraq would require about $56 billion. The October 2003 joint UN/World Bank assessment identified $36 billion, from 2004 through 2007, in immediate and medium-term needs in 14 priority sectors, including education, health, electricity, transportation, agriculture, and cross-cutting areas such as human rights and the environment. For example, the assessment estimated that Iraq would need about $12 billion for rehabilitation and reconstruction, new investment, technical assistance, and security in the electricity sector. 
In addition, the assessment noted that the CPA estimated an additional $20 billion would be needed from 2004 through 2007 to rebuild other critical sectors such as security and oil. Iraq may need more funding than currently available to meet the demands of the country. The state of some Iraqi infrastructure was more severely degraded than U.S. officials originally anticipated or initial assessments indicated. The condition of the infrastructure was further exacerbated by post-2003 conflict looting and sabotage. For example, some electrical facilities and transmission lines were damaged, and equipment and materials needed to operate treatment and sewerage facilities were destroyed by the looting that followed the 2003 conflict. In addition, insurgents continue to target electrical transmission lines and towers as well as oil pipelines that provide needed fuel for electrical generation. In the oil sector, a June 2003 U.S. government assessment found that more than $900 million would be needed to replace looted equipment at Iraqi oil facilities. These initial assessments assumed reconstruction would take place in a peace-time environment and did not include additional security costs. Further, these initial assessments assumed that Iraqi government revenues and private sector financing would increasingly cover long-term reconstruction requirements. This was based on the assumption that the rate of growth in oil production and total Iraqi revenues would increase over the next several years. However, private sector financing and government revenues may not yet meet these needs. According to a January 2006 International Monetary Fund (IMF) report, private sector investment will account for 8 percent of total projected investment for 2006, down from 12 percent in 2005. 
In the oil sector alone, Iraq will likely need an estimated $30 billion over the next several years to reach and sustain an oil production capacity of 5 million barrels per day, according to industry experts and U.S. officials. For the electricity sector, Iraq projects that it will need $20 billion through 2010 to boost electrical capacity, according to the Department of Energy’s Energy Information Administration. The United States is the primary contributor to rebuilding and stabilization efforts in Iraq. Since 2003, the United States has made available about $30 billion for activities that have largely focused on infrastructure repair and training of Iraqi security forces. As priorities changed, the United States reallocated about $5 billion of the $18.4 billion fiscal year 2004 emergency supplemental among the various sectors, over time increasing security and justice funds while decreasing resources for the water and electricity sectors. As of January 2006, of the $30 billion appropriated, about $23 billion had been obligated and about $16 billion had been disbursed for activities that included infrastructure repair, training, and equipping of the security and law enforcement sector; infrastructure repair of the electricity, oil, and water and sanitation sectors; and CPA and U.S. administrative expenses. These appropriations were not intended to meet all of Iraq’s needs. The United States has obligated nearly 80 percent of its available funds. Although remaining commitments and future contributions have not been finalized, they are likely to target activities for building ministerial capacity, sustaining existing infrastructure investments, and training and equipping the Iraqi security forces, based on agency reporting. For example, in January 2006, State reported a new initiative to address Iraqi ministerial capacity development at 12 national ministries. 
According to State, Embassy Baghdad plans to undertake a comprehensive approach to provide training in modern techniques of civil service policies, requirements-based budget processes, information technology standards, and logistics management systems to Iraqi officials in key ministries. International donors have provided a lesser amount of funding for reconstruction and development activities. According to State, donors have provided about $2.7 billion in multilateral and bilateral grants—of the pledged $13.6 billion—as of December 2005. About $1.3 billion has been deposited by donors into the two trust funds of the International Reconstruction Fund Facility for Iraq (IRFFI), of which about $900 million had been obligated and about $400 million disbursed to individual projects, as of December 2005. Donors also have provided bilateral assistance for Iraq reconstruction activities; however, complete information on this assistance is not readily available. Most of the pledged amount is in the form of loans that the Iraqis have recently begun to access. About $10 billion, or 70 percent, of the $13.6 billion pledged in support of Iraq reconstruction is in the form of loans, primarily from the World Bank, the IMF, and Japan. In September 2004, the IMF provided a $436 million emergency post-conflict assistance loan to facilitate Iraqi debt relief, and in December 2005, Iraq secured a $685 million Stand-By Arrangement (SBA) with the IMF. On November 29, 2005, the World Bank approved a $100 million loan within a $500 million program for concessional international development assistance. Iraq’s fiscal ability to contribute to its own rebuilding is constrained by the amount of revenues obtained from crude oil exports, continuing subsidies for food and energy, growing costs for government salaries and pensions, increased demands for an expanding security force, and war reparations and external debt. 
Crude oil exports account for nearly 90 percent of Iraqi government revenues in 2006, according to the IMF. Largely supporting Iraq’s government operations and subsidies, crude oil export revenues are dependent upon export levels and market price. The Iraqi 2006 budget projects that Iraq’s crude oil export revenues will grow at an annual rate of 17 percent (based on average production rising from 2 million bpd in 2005 to 3.6 million bpd in 2010) and an estimated average market price of about $46 per barrel. Oil exports are projected to increase from 1.4 million bpd in 2005 to 1.7 million bpd in 2006, according to the IMF. Iraq’s current crude oil export capacity is theoretically as high as 2.5 million bpd, according to the Energy Information Administration at the Department of Energy. However, Iraq’s crude oil export levels have averaged 1.4 million bpd as of December 2005, in part due to attacks on the energy infrastructure and pipelines. In January 2006, crude oil export levels fell to an average of about 1.1 million bpd. Further, a combination of insurgent attacks on crude oil and product pipelines, dilapidated infrastructure, and poor operations and maintenance have hindered domestic refining and have required Iraq to import significant portions of liquefied petroleum gas, gasoline, kerosene, and diesel. According to State, the Iraqi Oil Ministry estimates that the current average import cost of fuels is roughly $500 million each month. Current government subsidies constrain opportunities for growth and investment and have kept prices for food, oil, and electricity low. Before the war, at least 60 percent of Iraqis depended on monthly rations—known as the public distribution system (PDS)—provided by the UN Oil for Food program to meet household needs. The PDS continues to provide food subsidies to Iraqis. In addition, Iraqis pay below-market prices for refined fuels and, in the absence of effective meters, for electricity and water.
Low prices have encouraged over-consumption and have fueled smuggling to neighboring countries. Food and energy subsidies account for about 18 percent of Iraq’s projected gross domestic product (GDP) for 2006. As part of its Stand-By Arrangement with the IMF, Iraq plans to reduce the government subsidy of petroleum products, which would free up oil revenues to fund additional needs and reduce smuggling. According to the IMF, by the end of 2006, the Iraqi government plans to complete a series of adjustments to bring fuel prices closer to those of other Gulf countries. However, it is unclear whether the Iraqi government will have the political commitment to continue to raise fuel prices. Generous wage and pension benefits have added to budgetary pressures. Partly due to increases in these benefits, the Iraqi government’s operating expenditures are projected to increase by over 24 percent from 2005 to 2006, according to the IMF. As a result, wages and pensions constitute about 21 percent of projected GDP for 2006. The IMF noted that it is important for the government to keep non-defense wages and pensions under firm control to contain the growth of civil service wages. As a first step, the Iraqi government plans to complete a census of all public service employees by June 2006. Iraq plans to spend more resources on its own defense. Iraq’s security-related spending is currently projected to be about $5.3 billion in 2006, growing from 7 to about 13 percent of projected GDP. The amount reflects rising costs of security and the transfer of security responsibilities from the United States to Iraq. The Iraqi government also owes over $84 billion to victims of its invasion of Kuwait and international creditors. As of December 2005, Iraq owed about $33 billion in unpaid awards resulting from its invasion and occupation of Kuwait. As directed by the UN, Iraq currently deposits 5 percent of its oil proceeds into a UN compensation fund.
Final payment of these awards could extend through 2020 depending on the growth of Iraq’s oil proceeds. In addition, the IMF estimated that Iraq’s external debt was about $51 billion at the end of 2005. For the past 2½ years, the United States has provided $30 billion with the intent of developing capable Iraqi security forces, rebuilding a looted and worn infrastructure, and supporting democratic elections. However, the United States has confronted a lethal insurgency that has taken many lives and made rebuilding Iraq a costly and challenging endeavor. It is unclear when Iraqi security forces will be able to operate independently, thereby enabling the United States to reduce its military presence. Similarly, it is unclear how U.S. efforts are helping Iraq obtain clean water, reliable electricity, or competent health care. Measuring the outcomes of U.S. efforts is important to ensure that the U.S. dollars spent are making a difference in the daily lives of the Iraqi people. In addition, the United States must ensure that the billions of dollars it has already invested in Iraq’s infrastructure are not wasted. The Iraqis need additional training and preparation to operate and maintain the power plants, water and sewage treatment facilities, and health care centers the United States has rebuilt or restored. In response to our reports, State has begun to develop metrics for measuring progress and plans for sustaining the U.S.-built infrastructure. The administration’s next budget will reveal its level of commitment to these challenges. But the challenges are not exclusively those of the United States. The Iraqis face the challenge of forming a government that has the support of all ethnic and religious groups. They also face the challenge of addressing those constitutional issues left unresolved from the October referendum—power of the central government, control of Iraq’s natural resources, and the application of Islamic law.
The new government also faces the equally difficult challenges of reducing subsidies, controlling public salaries and pensions, and sustaining the growing number of security forces. This will not be easy, but it is necessary for the Iraqi government to begin to contribute to its own rebuilding and stabilization efforts and to encourage investment by the international community and private sector. We continue to review U.S. efforts to train and equip Iraqi security forces, develop the oil and electricity sectors, reduce corruption, and enhance the capacity of Iraqi ministries. Specifically, we will examine efforts to stabilize Iraq and develop its security forces, including the challenge of ensuring that Iraq can independently fund, sustain, and support its new security forces; assess issues related to the development of Iraq’s energy sector, including the sector’s needs as well as challenges such as corruption; and examine capacity-building efforts in the Iraqi ministries. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or the other Committee members may have. For further information, please contact Joseph A. Christoff on (202) 512-8979. Individuals who made key contributions to this testimony were Monica Brym, Lynn Cothern, Bruce Kutnick, Steve Lord, Sarah Lynch, Judy McCloskey, Micah McMillan, Tet Miyabara, Jose Pena III, Audrey Solis, and Alper Tunca. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The United States, along with coalition partners and various international organizations, has undertaken a challenging and costly effort to stabilize and rebuild Iraq following multiple wars and decades of neglect by the former regime. This enormous effort is taking place in an unstable security environment, concurrent with Iraqi efforts to transition to its first permanent government. The United States' goal is to help the Iraqi government develop a democratic, stable, and prosperous country, at peace with itself and its neighbors, a partner in the war against terrorism, enjoying the benefits of a free society and a market economy. In this testimony, GAO discusses the challenges (1) that the United States faces in its rebuilding and stabilization efforts and (2) that the Iraqi government faces in financing future requirements. This statement is based on four reports GAO has issued to the Congress since July 2005 and recent trips to Iraq. Since July 2005, we have issued reports on (1) the status of funding and reconstruction efforts in Iraq, focusing on the progress achieved and challenges faced in rebuilding Iraq's infrastructure; (2) U.S. reconstruction efforts in the water and sanitation sector; (3) U.S. assistance for the January 2005 Iraqi elections; and (4) U.S. efforts to stabilize the security situation in Iraq (a classified report). The United States faces three key challenges in rebuilding and stabilizing Iraq. First, the security environment and the continuing strength of the insurgency have made it difficult for the United States to transfer security responsibilities to Iraqi forces and progressively draw down U.S. forces. The security situation in Iraq has deteriorated since June 2003, with significant increases in attacks against Iraqi and coalition forces. In addition, the security situation has affected the cost and schedule of rebuilding efforts.
The State Department has reported that security costs represent 16 to 22 percent of the overall costs of major infrastructure projects. Second, inadequate performance data and measures make it difficult to determine the overall progress and impact of U.S. reconstruction efforts. The United States has set broad goals for providing essential services in Iraq, but limited performance measures present challenges in determining the overall impact of U.S. projects. Third, the U.S. reconstruction program has encountered difficulties with Iraq's inability to sustain new and rehabilitated infrastructure projects and to address basic maintenance needs in the water, sanitation, and electricity sectors. U.S. agencies are working to develop better performance data and plans for sustaining rehabilitated infrastructure. As the new Iraqi government forms, it must plan to secure the financial resources it will need to continue the reconstruction and stabilization efforts begun by the United States and international community. Iraq will likely need more than the $56 billion that the World Bank, United Nations, and CPA estimated it would require for reconstruction and stabilization efforts from 2004 to 2007. More severely degraded infrastructure, post-2003 conflict looting and sabotage, and additional security costs have added to the country's basic reconstruction needs. However, it is unclear how Iraq will finance these additional requirements. While the United States has borne the primary financial responsibility for rebuilding and stabilizing Iraq, its commitments are largely obligated and future commitments are not finalized. Further, U.S. appropriations were never intended to meet all Iraqi needs. In addition, international donors have mostly committed loans that the government of Iraq is just beginning to tap. 
Iraq's ability to financially contribute to its own rebuilding and stabilization efforts will depend on the new government's efforts to increase revenues obtained from crude oil exports, reduce energy and food subsidies, control government operating expenses, provide for a growing security force, and repay $84 billion in external debt and war reparations.
Mr. Chairman and Members of the Caucus: I am pleased to be here today to discuss the serious and continuing threat of corruption to Immigration and Naturalization Service (INS) and U.S. Customs Service employees along the Southwest Border by persons involved in the illegal drug trade. The enormous sums of money being generated by drug trafficking have increased the threat of bribery. It is a challenge that INS, Customs, and other law enforcement agencies must overcome at the border. My testimony focuses on (1) the extent to which INS and Customs have and comply with policies and procedures for ensuring employee integrity; (2) an identification and comparison of the Departments of Justice’s and the Treasury’s organizational structures, policies, and procedures for handling allegations of drug-related employee misconduct and whether the policies and procedures are followed; (3) an identification of the types of illegal drug-related activities for which INS and Customs employees on the Southwest Border have been convicted; and (4) the extent to which lessons learned from corruption cases closed in fiscal years 1992 through 1997 have led to changes in policies and procedures for preventing the drug-related corruption of INS and Customs employees. This statement is based on our March 30, 1999, report on drug-related employee corruption. Our statement makes the following points: INS’ and Customs’ compliance with their integrity procedures varied. Justice’s Office of the Inspector General (OIG) and INS generally complied with investigative procedures, but Customs’ compliance was uncertain. Opportunities to learn lessons from closed corruption cases have been missed. INS and Customs employees are stationed at and between the ports of entry across the Southwest Border. At the ports of entry, about 1,300 INS and 2,000 Customs inspectors are to check incoming traffic to identify both persons and contraband that are not allowed to enter the country.
Between the ports of entry and along thoroughfares in border areas, about 6,300 INS Border Patrol agents are to detect and prevent the illegal entry of persons and contraband. The corruption of INS or Customs employees is not a new phenomenon, and the 1990s have seen congressional emphasis on ensuring employee integrity and preventing corruption. A corrupt INS or Customs employee at or between the ports of entry can help facilitate the safe passage of illegal drug shipments. The integrity policies and procedures adopted by INS and Customs are designed to ensure that their employees, especially those in positions that could affect the smuggling of illegal drugs into the United States, are of acceptable integrity and, failing that, to detect any corruption as quickly as possible. INS and Customs follow Office of Personnel Management (OPM) regulations, which require background investigations to be completed for new hires by the end of their first year on the job. Generally, the background investigations included a credit check, criminal record check, contact with prior employers and personal references, and an interview with the employee. Our review found that background investigations for over 99 percent of the immigration inspectors, Border Patrol agents, and Customs inspectors hired during the first half of fiscal year 1997 were completed by the end of their first year on the job. However, both agencies had backlogs of required periodic reinvestigations during fiscal years 1995 through 1997. In some instances, reinvestigations were as many as 3 years overdue. To the extent that a reinvestigation constitutes an important periodic check on an employee’s continuing suitability for employment in a position where he or she may be exposed to bribery or other types of corruption, the continuing reinvestigation backlogs at both agencies leave them more vulnerable to potential employee corruption. As of March 1998, INS had not yet completed 513 overdue reinvestigations of immigration inspectors and Border Patrol agents.
Customs had a backlog of 421 overdue reinvestigations. Newly hired immigration inspectors, Border Patrol agents, and Customs inspectors are required to attend basic training. As part of their basic training, new employees are to receive training courses on integrity concepts and expected behavior, including ethical concepts and values, ethical dilemmas and decisionmaking, and employee conduct expectations. This integrity training provides the only required integrity training for all immigration inspectors, Border Patrol agents, and Customs inspectors. For Border Patrol agents, 7 of 744 basic training hours are to be devoted to integrity training. For Customs inspectors, 8 of 440 basic training hours are to be devoted to integrity training. INS immigration inspectors are to receive integrity training as part of their basic training, but it is interspersed with other training rather than provided as a separate course. Therefore, we could not determine how many hours are to be devoted specifically to integrity training. We selected random samples of 100 immigration inspectors, 101 Border Patrol agents, and 100 Customs inspectors to determine whether they received integrity training as part of their basic training. Agency records we reviewed showed that 95 of 100 immigration inspectors, all 101 Border Patrol agents, and 88 of 100 Customs inspectors had received basic training. According to INS and Customs officials, the remaining employees likely received basic training, but it was not documented in their records. Justice OIG, INS, and Customs officials advocated advanced integrity training for their employees to reinforce the integrity concepts presented during basic training. The Justice OIG, INS’ Office of Internal Audit, and Customs provide advanced integrity training for INS and Customs employees. 
While this advanced training has been available to immigration inspectors, Border Patrol agents, and Customs inspectors, they were not required to take it or any additional integrity training beyond what they received in basic training. Consequently, some immigration inspectors, Border Patrol agents, and Customs inspectors assigned to the Southwest Border had not received any advanced integrity training in over 2 years. Based on a survey of random samples of immigration inspectors, Border Patrol agents, and Customs inspectors assigned to the Southwest Border, we found that during fiscal years 1995 through 1997, 60 of 100 immigration inspectors received no advanced integrity training. In addition, 60 of 76 Border Patrol agents received no advanced integrity training during the almost 2½-year period we examined. The Customs survey indicated that 24 of 100 Customs inspectors received no advanced integrity training during this period. The Departments of Justice and the Treasury have established procedures for handling allegations of employee misconduct. Misconduct allegations arise from numerous sources, including confidential informants, cooperating witnesses, anonymous tipsters, and whistle-blowers. For example, whistle-blowers can report alleged misconduct through the agencies’ procedures for reporting any suspected wrongdoing. INS and Customs have policies that require employees to report suspected wrongdoing. We selected five Justice OIG procedures to evaluate compliance with the processing of employee misconduct allegations. In a majority of the cases we reviewed, the Justice OIG complied with its procedures for receiving, investigating, and resolving drug-related employee misconduct allegations. For example, monthly interim reports were prepared as required in 28 of 39 opened cases we reviewed. In the remaining 11 cases, either some information was missing in interim reports or there were no interim reports in the case file.
INS’ Office of Internal Audit complied with its procedures for receiving and resolving employee misconduct allegations in all of its cases. Because Customs’ Office of Internal Affairs’ automated case management system did not track adherence to Customs’ processing requirements, we could not readily determine if the Office of Internal Affairs staff complied with their investigative procedures. Customs’ automated system is the official investigative record. It tracks and categorizes misconduct allegations and resulting investigations and disciplinary action. The investigative case files are to support the automated system in tracking criminal investigative activity and contain such information as printed records from the automated system, copies of subpoenas and arrest warrants, and a chronology of investigative events. Based on these content criteria and our file reviews, the investigative case files are not intended to and generally do not document the adherence to processing procedures. Our analysis of the 28 closed cases revealed that drug-related corruption in these cases was not restricted to any one type, location, agency, or job. Corruption occurred in many locations and under various circumstances and times, underscoring the need for comprehensive integrity procedures that are effective. The cases also represented an opportunity to identify internal control weaknesses. The 28 INS and Customs employees engaged in one or more drug-related criminal activities, including waving drug-laden vehicles through ports of entry, coordinating the movement of drugs across the Southwest Border, transporting drugs past Border Patrol checkpoints, selling drugs, and disclosing drug intelligence information. The 28 convicted employees (19 INS employees and 9 Customs employees) were stationed at various locations on the Southwest Border. 
Six each were stationed in El Paso, TX, and Calexico, CA; four were stationed in Douglas, AZ; three were stationed in San Ysidro, CA; two each were stationed in Hidalgo, TX, and Los Fresnos, TX; and one each was stationed in Naco, AZ, Chula Vista, CA, Bayview, TX, Harlingen, TX, and Falfurrias, TX. The 28 INS and Customs employees who were convicted for drug-related crimes included 10 immigration inspectors, 7 Customs inspectors, 6 Border Patrol agents, 3 INS Detention Enforcement Officers (DEO), 1 Customs canine enforcement officer, and 1 Customs operational analysis specialist. All but three had anti-drug smuggling responsibilities. Twenty-six of the convicted employees were men; two were women. The employment histories of the convicted employees varied substantially. In 19 cases, the employees acted alone; that is, no other INS or Customs employees were involved in the drug-related criminal activity. In the remaining nine cases, two or more INS and/or Customs employees acted together. Of the 28 cases, 23 originated from information provided by confidential informants or cooperating witnesses, and 5 cases originated from information provided by agency whistle-blowers. Prison sentences for the convicted employees ranged from 30 days, for disclosure of confidential information, to life imprisonment for drug conspiracy, money laundering, and bribery. The average sentence was about 10 years. Both the Justice OIG and Customs procedures require them to formally report internal control weaknesses identified during investigations, including drug-related corruption investigations involving INS and Customs employees. Generally, the Justice OIG and Customs’ Office of Internal Affairs, respectively, have lead responsibility for investigating criminal allegations involving INS and Customs employees. Reports of internal control weaknesses are to identify any lessons to be learned that can be used to prevent further employee corruption.
The reports are to be forwarded to agency officials who are responsible for taking corrective action. Reports are not required if no internal control weaknesses are identified. In the 28 cases involving INS or Customs employees who were convicted for drug-related crimes in fiscal years 1992 through 1997, no reports were prepared. We concluded from this that either (1) there were no internal control weaknesses revealed by, or lessons to be learned from, these corruption cases or (2) opportunities to identify and correct internal control weaknesses have been missed, and thus INS’ and Customs’ vulnerability to employee corruption has not been reduced. Justice’s OIG investigated 13 of the 28 cases. The investigative files did not document whether procedures were reviewed to identify internal control weaknesses. Further, there were no reports identifying internal control weaknesses. According to a Justice OIG official, no reports are required if no weaknesses are identified, and he could not determine why reports were not prepared in these cases. Customs’ Office of Internal Affairs’ Internal Affairs Handbook provides for the preparation of a procedural deficiency report in those internal investigations where there was a significant failure that resulted from (1) failure to follow an established procedure, (2) lack of an established procedure, or (3) conflicting or obsolete procedures. The report is to detail the causal factors and scope of the deficiency. We identified eight cases involving Customs employees investigated by Customs’ Office of Internal Affairs. No procedural deficiency reports were prepared in these cases. Further, the investigative files did not document whether internal control weaknesses were identified. A Customs official said the reports are generally not prepared. 
Although the Justice OIG and Customs’ Office of Internal Affairs have lead responsibility for investigating allegations involving INS and Customs employees, the FBI is authorized to investigate INS or Customs employees. Of the 28 cases, the FBI investigated 7, involving 6 INS employees and 1 Customs employee. Under current procedures, the FBI is not required to provide the Justice OIG or Customs’ Office of Internal Affairs with case information that would allow them to identify internal control weaknesses, where the FBI investigation involves an INS or Customs employee. In addition, while Attorney General memorandums require the FBI to identify and report any internal control weaknesses identified during white-collar or health care fraud investigations, a Justice Department official told us that these reporting requirements do not apply to drug-related corruption cases. According to FBI officials, no reports were prepared in the seven cases because they were not required. The Justice OIG and Customs did not identify and report any internal control weaknesses involving the procedures that were followed at the ports of entry and at Border Patrol checkpoints along the Southwest Border. Our review of the same cases identified several weaknesses. INS and Customs have established internal controls at ports of entry that are intended to prevent corruption. These have included the random assignment and shifting of inspectors from one lane to another and the unannounced inspection of a group of vehicles. However, in the cases we reviewed, these internal controls did not prevent corrupt INS and Customs personnel from allowing drug-laden vehicles to enter the United States. In some cases, the inspectors communicated their lane assignment and the time they would be on duty to the drug smuggler, and in other cases, they did not. In one case, for example, an inspector used a cellular telephone to send a prearranged code to a drug smuggler’s beeper to tell him which lane to use and what time to use it.
In contrast, another inspector did not notify the drug smuggler concerning his lane assignment or the times he would be on duty. In that case, the drug smuggler used an individual, referred to as a spotter, to conduct surveillance of the port of entry. The spotter used a cellular telephone to contact the driver of the drug-laden vehicle to tell him which lane to drive through. The drug smugglers’ schemes succeeded in these cases because the drivers of the drug-laden vehicles could choose the lane they wanted to use for inspection purposes. These cases support the implementation of one or more methods to deprive drivers of their choice of inspection lanes at ports of entry. At the time of our review, Customs was testing a method to assign drivers to inspection lanes at ports of entry. In 10 of 28 cases, drug smugglers relied on friendships, personal relationships, or symbols of law enforcement authority to move drug loads through a port of entry or past a Border Patrol checkpoint. In these 10 cases, drug smugglers believed that coworkers, relatives, and friends of Customs or immigration inspectors, or law enforcement officials, would not be inspected or would be given preferential treatment in the inspection process. For example, a Border Patrol agent relied on his friendships with his coworkers to avoid inspection at a Border Patrol checkpoint where he was stationed. In another case, an inspector agreed to allow her boyfriend to smuggle drugs through a port of entry. The boyfriend used his personal and intimate relationship with the inspector to solicit drug shipments from drug dealers. Two DEOs working together used INS detention buses and vans to transport drugs past a Border Patrol checkpoint. In two separate cases, former INS employees relied on friendships they had developed during their tenure with the agency to smuggle drugs through ports of entry and past Border Patrol checkpoints. INS and Customs have not addressed situations in which the relationship between inspectors and the individuals being inspected is such that the inspectors may not objectively perform the inspection.
Nor do they have a written inspection policy for law enforcement officers or their vehicles. For example, our review of the cases determined that, on numerous occasions, INS DEOs drove INS vehicles with drug loads past Border Patrol checkpoints without being inspected. INS and Customs have not evaluated the effectiveness of their integrity assurance procedures to identify areas that could be improved. According to Justice OIG, INS, and Customs officials, agency integrity procedures have not been evaluated to determine if they are effective. The Acting Deputy Commissioner of Customs said that there were no evaluations of the effectiveness of Customs integrity procedures. Similarly, officials in INS’ Offices of Internal Audit and Personnel Security said that there were no evaluations of the effectiveness of INS’ integrity procedures. According to the Justice Inspector General, virtually no work had been done to review closed corruption cases or interview convicted employees to identify areas of vulnerability. Based on our review, one way to evaluate the effectiveness of agency integrity procedures would be to use drug-related investigative case information. For example, the objective of background investigations or reinvestigations is to determine an individual’s suitability for employment, including whether he or she has the required integrity. All 28 of the INS and Customs employees who were convicted for drug-related crimes received background investigations or reinvestigations that determined they were suitable. According to INS and Customs security officials, financial information, required to be provided by employees as part of their background investigations or reinvestigations, is to be used to determine whether they appear to be living beyond their means, or have unsatisfied debts. If either of these issues arises, it must be satisfactorily resolved before INS or Customs can determine that the employee is suitable. 
In addition, Justice policy provides for the temporary removal of immigration inspectors and Border Patrol agents if they are unable and/or unwilling to satisfy their debts. As part of their background investigations or reinvestigations, immigration inspectors and Border Patrol agents were required to report only debts that had not been paid. They were not required to provide information on their assets. In comparison, Customs inspectors and canine enforcement officers were required to provide information on both their assets and liabilities, including financial information for themselves and their immediate families on their bank accounts, automobiles, real estate, securities, safe deposit boxes, business investments, art, boats, antiques, inheritance, mortgage, and debts and obligations exceeding $200. Our review of the 28 cases involving convicted INS and Customs employees disclosed that 26 of 28 employees were offered or received financial remuneration for their illegal acts. At least two were substantially indebted, and at least four were shown to be living beyond their means. For example, one of the closed cases we reviewed involved an immigration inspector who said he became involved with a drug smuggler because he had substantial credit card debt and was on the verge of bankruptcy. Given the limited financial information immigration inspectors are required to provide, this inspector might not have been identified as a potential risk. In another case, a mid-level Border Patrol agent owned a house valued at approximately $200,000, an Olympic-sized swimming pool in its own separate building, a 5-car garage, 5 automobiles, 1 van, 2 boats, approximately 100 weapons, $45,000 in treasury bills, 40 acres of land, and had no debt. Given the current background investigation or reinvestigation financial reporting requirements for Border Patrol agents, this agent would not have had anything to report, since he was not required to report his assets, and he had no debts to report.
Our review of Customs files for eight of the nine convicted Customs employees showed that the Customs inspectors and canine enforcement officers had completed financial disclosure statements that included their assets and liabilities as part of their employee background investigations and reinvestigations. However, based on our case file review, Customs does not fully use all of the financial information. For example, according to a Customs official, reported liabilities are to be compared with debts listed on a credit report to determine if all debts were reported. Thus, their current use of the reported financial information would not have helped to identify an employee who was living well beyond his means or whose debts were excessive. Another source of evaluative information for INS and Customs could be the experiences of other federal agencies with integrity prevention and detection policies and procedures. For example, while INS’ and Customs’ procedures were similar to those used by other federal law enforcement agencies, several differences exist. According to agency officials, INS and Customs did not require advanced integrity training, polygraph examinations, or panel interviews before hiring, while the FBI, DEA, and Secret Service did have these requirements. Among the five agencies, only DEA required new employees to be assigned to a mentor to reinforce agency values and procedures. Since these policies and procedures are used by other agencies, they may be applicable to INS and Customs. During our review, the Justice OIG, INS, the Treasury OIG, and Customs began to review their anticorruption efforts. These efforts have not been completed, and it is too early to determine what their outcomes will be. 
Among other things, our report made recommendations to: require the Justice OIG to document that policies and procedures were reviewed to identify internal control weaknesses in cases where an INS employee is determined to have engaged in drug-related criminal activities; require the Director of the FBI to develop a procedure to provide information from closed FBI cases, involving INS or Customs employees, to the Justice OIG or Customs’ Office of Internal Affairs so they can identify and report internal control weaknesses to the responsible agency official, with the procedure applying in those cases where (1) the Justice OIG or Customs’ Office of Internal Affairs was not involved in the investigation, (2) the subject of the investigation was an INS or Customs employee, and (3) the employee was convicted of a drug-related crime; and require that Customs fully review financial disclosure statements, which employees are required to provide as part of the background investigation or reinvestigation process, to identify financial issues, such as employees who appear to be living beyond their means. The Department of Justice generally agreed with the substance of the report and recognized the importance of taking all possible actions to reduce the potential for corruption. However, Justice expressed reservations about implementing two of the six recommendations addressed to the Attorney General. First, Justice expressed reservations about implementing our recommendation that Border Patrol agents and immigration inspectors file financial disclosure statements as part of their background investigations or reinvestigations. Specifically, it noted that implementing financial disclosure “has obstacles to be met and at present the DOJ has limited data to suggest that they would provide better data or greater assurance of a person’s integrity.” We recognized that implementation of this recommendation will require some administrative actions by INS.
However, these actions are consistent with the routine management practices associated with making policy changes within the agency. Therefore, the obstacles do not appear to be inordinate or insurmountable. Concerning the limited data about the benefits of financial reporting, according to OPM officials and the adjudication manual for background investigations and reinvestigations, financial information can have a direct bearing and impact on determining an individual’s integrity. The circumstances described in our case studies suggest that financial reporting could have raised issues for follow-up during a background investigation or reinvestigation. We recognize that there may be questions on the effectiveness of this procedure; therefore, this report contains a recommendation for an overall evaluation of INS’ integrity assurance efforts. Second, Justice expressed reservations about our recommendation that the FBI provide information from closed cases involving INS or Customs employees to the Justice OIG or Customs’ Office of Internal Affairs. However, if information from closed cases is not shared with those agencies, then the agencies are not in the best position to correct the abuses. The Department of the Treasury provided comments from Customs that generally concurred with our recommendations and indicated that it is taking steps to implement them. However, Customs requested that we reconsider our recommendation that Customs fully review financial disclosure statements that are provided as part of the background and reinvestigation process. Our recommendation expected Customs to make a more thorough examination of the financial information it collects to determine if employees appear to be living beyond their means. We leave it to Customs’ discretion to determine the type of examination to be performed. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Caucus may have.
| GAO discussed the threat of corruption to Immigration and Naturalization Service (INS) and Customs Service employees along the Southwest Border, focusing on: (1) the extent to which INS and the Customs Service have and comply with policies and procedures for ensuring employee integrity; (2) an identification and comparison of the Departments of Justice's and the Treasury's organizational structures, policies, and procedures for handling allegations of drug-related employee misconduct and whether the policies and procedures are followed; (3) an identification of the types of illegal drug-related activities in which INS and Customs employees on the Southwest Border have been convicted; and (4) the extent to which lessons learned from corruption cases closed in fiscal years 1992 through 1997 have led to changes in policies and procedures for preventing the drug-related corruption of INS and Customs employees. GAO noted that: (1) some INS and U.S.
Customs Service employees on the Southwest Border have engaged in a variety of illegal drug-related activities, including waving drug loads through ports of entry, coordinating the movement of drugs across the Southwest Border, transporting drugs past Border Patrol checkpoints, selling drugs, and disclosing drug intelligence information; (2) both INS and Customs have policies and procedures designed to help ensure the integrity of their employees; (3) however, neither agency is taking full advantage of its policies and procedures and the lessons to be learned from closed corruption cases; (4) the policies and procedures consist mainly of mandatory background investigations for new staff and 5-year reinvestigations of employees, as well as basic integrity training; (5) while the agencies generally completed required background investigations for new hires by the end of their first year on the job, reinvestigations were typically overdue, in some instances by as many as 3 years; (6) both INS and Customs provided integrity training to new employees during basic training, but advanced integrity training was not required; (7) Justice and Treasury have different organizational structures but similar policies and procedures for handling allegations of drug-related misconduct; (8) at Justice, the Office of the Inspector General is generally responsible for investigating criminal allegations against INS employees; (9) GAO found that the Justice OIG generally complied with its policies and procedures for handling allegations of drug-related misconduct; (10) at Treasury, Customs' Office of Internal Affairs (OIA) is generally responsible for investigating both criminal and noncriminal allegations against Customs employees; (11) Customs' automated case management system and its investigative case files did not provide the necessary information to assess compliance with investigative procedures; (12) INS and Customs have missed opportunities to learn lessons and change their
policies and procedures for preventing drug-related corruption of their employees; (13) the Justice OIG and Customs' OIA are required to formally report internal control weaknesses identified from closed corruption cases, but have not done so; (14) GAO's review of 28 cases involving INS and Customs employees assigned to the Southwest Border, who were convicted of drug-related crimes in fiscal years 1992 through 1997, revealed internal control weaknesses that were not formally reported; and (15) INS and Customs had not formally evaluated their integrity procedures to determine their effectiveness. |
In May 2007, the Army issued a solicitation for body armor designs to replenish stocks and to protect against future threats by developing the next generation (X level) of protection. According to Army officials, the solicitation would result in contracts that the Army would use for sustainment of protective plate stocks for troops in Iraq and Afghanistan. The indefinite delivery/indefinite quantity contracts require the Army to purchase a minimum of 500 sets per design and allow for a maximum purchase of 1.2 million sets over the 5-year period. The Army’s solicitation, which closed in February 2008, called for preliminary design models in four categories of body armor protective plates:
Enhanced Small Arms Protective Insert (ESAPI)—plates designed to the same protection specifications as those currently fielded and to fit into currently fielded Outer Tactical Vests.
Flexible Small Arms Protective Vest-Enhanced (FSAPV-E)—flexible armor system designed to the same protection specifications as armor currently fielded.
Small Arms Protective Insert-X level (XSAPI)—next-generation plates designed to defeat a higher level threat.
Flexible Small Arms Protective Vest-X level (FSAPV-X)—flexible armor system designed to defeat a higher level threat.
In figure 1, we show the ESAPI plates inside the Outer Tactical Vest. Between May of 2007 and February of 2008 the Army established testing protocols, closed the solicitation, and provided separate live-fire demonstrations of the testing process to vendors who submitted items for testing and to government officials overseeing the testing. Preliminary Design Model testing was conducted at Aberdeen Test Center between February 2008 and June 2008 at an estimated cost of $3 million. Additionally, over $6 million was spent on infrastructure and equipment improvements at Aberdeen Test Center to support future light armor test range requirements, including body armor testing.
First Article Testing was then conducted at Aberdeen Test Center from November 10, 2008, to December 17, 2008, on the three ESAPI and five XSAPI designs that had passed Preliminary Design Model testing. First Article Testing is performed in accordance with the Federal Acquisition Regulation to ensure that the contractor can furnish a product that conforms to all contract requirements for acceptance. First Article Testing determines whether the proposed product design conforms to contract requirements before or in the initial stage of production. During First Article Testing, the proposed design is evaluated to determine the probability of consistently demonstrating satisfactory performance and the ability to meet or exceed evaluation criteria specified in the purchase description. Successful First Article Testing certifies a specific design configuration and the manufacturing process used to produce the test articles. Failure of First Article Testing requires the contractor to examine the specific design configuration to determine the improvements needed to correct the performance of subsequent designs. Testing of the body armor currently fielded by the Army was conducted by private NIJ-certified testing facilities under the supervision of PEO Soldier. According to Army officials, not a single death can be attributed to this armor’s failing to provide the required level of protection for which it was designed. However, according to Army officials, one of the body armor manufacturers that had failed body armor testing in the past did not agree with the results of the testing and alleged that the testers tested that armor to higher–than–required standards. The manufacturer alleged a bias against its design and argued that its design was superior to currently fielded armor. 
As a result of these allegations and in response to congressional interest, after the June 2007 House Armed Services Committee hearing, the Army accelerated completion of the light armor ranges to rebuild small arms ballistic testing capabilities at Aberdeen Test Center and to conduct testing under the May 2007 body armor solicitation there, without officials from PEO Soldier supervising the testing. Furthermore, the decision was made to allow Aberdeen Test Center, which is not an NIJ-certified facility, to conduct the repeated First Article Testing. In February 2009 the Army directed that all future body armor testing be performed at Aberdeen Test Center. According to Army officials, as of this date, none of the body armor procured under the May 2007 solicitation had been fielded. Given the significant congressional interest in the testing for this solicitation and that these were the first small arms ballistic tests conducted at Aberdeen Test Center in years, multiple defense organizations were involved in the Preliminary Design Model testing. These entities include the Aberdeen Test Center, which conducted the testing; PEO Soldier, which provided the technical subject-matter experts; and DOD’s office of the Director of Operational Test and Evaluation; these organizations combined to form the Integrated Product Team. The Integrated Product Team was responsible for developing and approving the test plans used for the Preliminary Design Model testing and First Article Testing. Figure 2 shows a timeline of key Preliminary Design Model testing and First Article Testing events.
The test procedures to be followed for Preliminary Design Model testing were established and identified in the purchase descriptions accompanying the solicitation announcement and in the Army’s detailed test plans (for each of the four design categories), which served as guidance to Army testers and were developed by the Army Test and Evaluation Command and approved by PEO-Soldier, DOD’s office of the Director of Operational Test and Evaluation, and others. Originally, PEO Soldier required that testing be conducted at an NIJ-certified facility. Subsequently, the decision was made to conduct testing at Aberdeen Test Center, which is not NIJ-certified. The test procedures for both Preliminary Design Model testing and First Article Testing included both (1) physical characterization steps performed on each armor design to ensure they met required specifications, which included measuring weight, thickness, curvature, and size and (2) ballistic testing performed on each design. Ballistics testing for this solicitation included the following subtests: (1) ambient testing to determine whether the designs can defeat the multiple threats assigned in the respective solicitation’s purchase descriptions 100 percent of the time; (2) environmental testing of the designs to determine whether they can defeat each threat 100 percent of the time after being exposed to nine different environmental conditions; and (3) testing, called V50 testing, to determine whether designs can defeat each threat at velocities significantly higher than those present or expected in Iraq or Afghanistan at least 50 percent of the time. Ambient and environmental testing seek to determine whether designs can defeat each threat 100 percent of the time by both prohibiting the bullet from penetrating through the plate and by prohibiting the bullet from causing too deep of an indentation in the clay backing behind the plate. 
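The acceptance criteria for the three ballistic subtests just described can be summarized in a short sketch. This is not the Army's test software; the function names, data layout, and the idea of a single required V50 value are simplifying assumptions for illustration only.

```python
# Hedged sketch of the three ballistic subtest pass criteria described in the
# text. Names and data structures are hypothetical, not from the test plans.

def ambient_passes(shot_results):
    """Ambient subtest: the design must defeat the threat 100 percent of the
    time, so a single complete penetration fails the subtest."""
    return all(result == "defeated" for result in shot_results)

def environmental_passes(results_by_condition):
    """Environmental subtest: 100 percent defeat is required after exposure
    to each of the nine conditioning regimes (impact, fluid soaks,
    temperature extremes, etc.)."""
    return all(ambient_passes(shots) for shots in results_by_condition.values())

def v50_passes(estimated_v50, required_v50):
    """V50 subtest: the velocity at which 50 percent of shots penetrate must
    be at least a required value set significantly above the velocities
    expected in Iraq or Afghanistan. Units are arbitrary in this sketch."""
    return estimated_v50 >= required_v50
```

Under this sketch, a design advances only if every check returns True for every threat assigned in the purchase description.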
Preventing a penetration is important because it prevents a bullet from entering the body of the soldier. Preventing a deep indentation in the clay (called “back-face deformation”) is important because the depth of the indentation indicates the amount of blunt force trauma to the soldier. Back-face deformation deeper than 43 millimeters puts the soldier at higher risk of internal injury and death. The major steps taken in conducting a ballistic subtest include:
1. For environmental subtests, the plate is exposed to the environmental condition tested (e.g., impact test, fluid soaks, temperature extremes, etc.).
2. The clay to be used to back the plate is formed into a mold and is placed in a conditioning chamber for at least 3 hours.
3. The test plate is placed inside of a shoot pack.
4. The clay is taken out of the conditioning chamber. It is then tested to determine if it is suitable for use and, if so, is placed behind the test plate.
5. The armor and clay are then mounted to a platform and shot.
6. If the shot was fired within required specifications, the plate is examined to determine if there is a complete or partial penetration, and the back-face deformation is measured.
7. The penetration result and back-face deformation are scored as a pass, a limited failure, or a catastrophic failure. If the test is not conducted according to the testing protocols, it is scored as a no-test.
Following are significant steps the Army took to run a controlled test and maintain consistency throughout Preliminary Design Model testing: The Army developed testing protocols for the hard-plate (ESAPI and XSAPI) and flexible-armor (FSAPV-E and FSAPV-X) preliminary design model categories in 2007. These testing protocols were specified in newly created purchase descriptions, detailed test plans, and other documents. For each of the four preliminary design model categories, the Army developed new purchase descriptions to cover both hard-plate and flexible designs.
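The scoring in steps 6 and 7 above can be sketched as a small function. The 43-millimeter back-face deformation limit and the no-test rule come from the text; the function name is an assumption, and the distinction between a limited and a catastrophic failure (defined in the detailed test plans) is collapsed into a single "failure" outcome here.

```python
# Deformation deeper than this indicates dangerous blunt force trauma.
BFD_LIMIT_MM = 43

def score_shot(complete_penetration, back_face_deformation_mm, shot_within_spec):
    """Score one ballistic shot as described in the report.

    A shot not fired within required specifications is a no-test and must be
    repeated. Otherwise the plate must both stop the bullet and hold the clay
    indentation to 43 mm or less. (The protocols further split failures into
    limited and catastrophic; that split is omitted in this sketch.)
    """
    if not shot_within_spec:
        return "no-test"
    if complete_penetration:
        return "failure"  # the bullet passed through the plate
    if back_face_deformation_mm > BFD_LIMIT_MM:
        return "failure"  # excessive back-face deformation
    return "pass"

print(score_shot(False, 38.0, True))  # pass
print(score_shot(False, 46.5, True))  # failure
print(score_shot(True, 20.0, False))  # no-test
```

A plate passes ambient or environmental testing only if every scored shot returns "pass"; no-test shots are repeated rather than counted.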
These purchase descriptions listed the detailed requirements for each category of body armor in the solicitation issued by the Army. Based on these purchase descriptions, the Army developed detailed test plans for each of the four categories of body armor. These detailed test plans provided additional details on how to conduct testing and provided Army testers with the requirements that each design needed to pass. After these testing protocols were developed, Army testers then conducted a pilot test in which they practiced test activities in preparation for Preliminary Design Model testing, to help them better learn and understand the testing protocols. The Army consistently documented many testing activities by using audio, video, and other electronic means. The use of cameras and microphones to provide 24-hour video and audio surveillance of all of the major Preliminary Design Model testing activities provided additional transparency into many testing methods used and allowed for enhanced oversight by Army management, who are unable to directly observe the lanes on a regular basis but who wished to view select portions of the testing. The Army utilized an electronic database to maintain a comprehensive set of documentation for all testing activities. This electronic database included a series of data reports and pictures for each design including: physical characterization records, X-ray pictures, pre- and post-shot pictures, ballistics testing results, and details on the condition of the clay backing used for the testing of those plates. The Army took a number of additional actions to promote a consistent and unbiased test. For example, the Army disguised vendor identity for each type of solution by identifying vendors with random numbers to create a blind test. The Army further reduced potential testing variance by shooting subtests in the same shooting lane. 
The Army also made a good faith effort to use consistent and controlled procedures to measure the weight, thickness, and curvature of the plates. Additionally, the Army made extensive efforts to consistently measure and maintain room temperature and humidity within desired ranges. We also observed that projectile yaw was consistently monitored and maintained. We also found no deviations in the monitoring of velocities for each shot and the re-testing of plates in cases where velocities were not within the required specifications. We observed no instances of specific bias against any design, nor did we observe any instances in which a particular vendor was singled out for advantage or disadvantage. We identified several instances in which the Aberdeen Test Center did not follow established testing protocols. For example, during V50 testing, testers failed to properly adjust shot velocities. V50 testing is conducted to discern the velocity at which 50 percent of the shots of a particular threat would penetrate each of the body armor designs. The testing protocols require that after every shot that is defeated by the body armor the velocity of the next shot be increased. Whenever a shot penetrates the armor, the velocity should be decreased for the next shot. This increasing and decreasing of the velocities is supposed to be repeated until testers determine the velocity at which 50 percent of the shots will penetrate. In cases in which the armor far exceeds the V50 requirements and is able to defeat the threat for the first six shots, the testing may be halted without discerning the V50 for the plate, and the plate is ruled as passing the requirements. 
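The up-and-down rule that the V50 protocols require, together with the early-halt condition just described, can be sketched as follows. The 50-unit velocity step is a hypothetical value for illustration; the actual protocols specify their own increments.

```python
def next_velocity(previous_velocity, armor_defeated_shot, step=50.0):
    """Up-and-down rule from the testing protocols: raise the velocity after
    each defeated shot, lower it after each penetration. Repeating this
    brackets the V50, the velocity at which 50 percent of shots penetrate.
    The step size here is an assumed value, not from the protocols."""
    if armor_defeated_shot:
        return previous_velocity + step
    return previous_velocity - step

def may_halt_early(defeat_history):
    """If the armor defeats the threat on each of the first six shots, testing
    may be halted and the plate ruled as passing without discerning its V50."""
    return len(defeat_history) >= 6 and all(defeat_history[:6])

# Three defeats in a row should drive the velocity upward, not hold it flat.
velocity = 2800.0
for defeated in (True, True, True):
    velocity = next_velocity(velocity, defeated)
print(velocity)  # 2950.0
```

Holding the velocity flat after a defeat, as observed during Preliminary Design Model testing, breaks this bracketing and leaves the true V50 unknown.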
During Preliminary Design Model testing, in cases in which plates defeated the first three shots, Army testers failed to increase shot velocities, but rather continued to shoot at approximately the same velocity or lower for shots four, five, and six in order to obtain six partial penetrations and conclude the test early. According to Aberdeen Test Center officials, this deviation was implemented to conserve plates for other tests that needed repeating as a result of no-test events; the practice, however, was not described in the protocols. Army officials told us that this practice had no effect on which designs passed or failed; however, it made it impossible to discern the true V50s for these designs and was a deviation from the testing protocols, which require testers to increase velocities after the armor defeats a shot. In another example, Aberdeen Test Center testers did not consistently follow testing protocols in the ease-of-insertion test. According to the testing protocols, one barehanded person shall demonstrate insertion and removal of the ESAPI/XSAPI plates in the Outer Tactical Vest pockets without tools or special aids. Rather than testing insertion into both the front and the rear pockets as required, testers only tested the ability to insert into the front pocket. Testing officials told us that they did not test the ability to insert the plates into the rear pocket because they were unable to reach the rear pocket while wearing the Outer Tactical Vest. This deviation occurred because the testers misinterpreted the testing protocols: nothing in the established testing protocols requires wearing the Outer Tactical Vest when testing the ability to insert the plates in the rear pocket of the Outer Tactical Vest.
Officials from PEO Soldier told us that, had they been present to observe this deviation during testing, they would have informed testers that the insertion test does not require that the Outer Tactical Vest be worn, which would have resulted in testers conducting the insertion test as required. According to Aberdeen Test Center officials, this violation of the testing protocols had no impact on test results. While we did not independently verify this assertion, Aberdeen Test Center officials told us that the precise physical characterization measurements of the plate’s width and dimensions are, alone, sufficient to ensure the plate will fit. In addition, testers deviated from the testing protocols by placing shots at the wrong location on the plate. The testing protocols require that the second shot for one of the environmental sub-tests, called the impact test, be taken approximately 1.5 inches from the edge of the armor. However, testers mistakenly aimed closer to the edge of the armor for some of the designs tested. Army officials said that the testing protocols were unclear for this test because they did not prescribe a specific hit zone (e.g., 1.25 – 1.75 inches), but rather relied upon testers’ judgment to discern the meaning of the word “approximately.” One of the PEO Soldier technical advisors on the Integrated Product Team told us he was contacted by the Test Director after the plates had been shot and asked about the shot location. He told us that he informed the Test Director that the plates had been shot in the wrong location. The PEO Soldier Technical advisor told us that, had he been asked about the shot location before the testing was conducted, he could have instructed testers on the correct location at which to shoot. For 17 of the 47 total designs that we observed and measured, testers marked target zones that were less than the required 1.5 inches from the plate’s edge, ranging from .75 inches to 1.25 inches from the edge. 
Because 1.5 inches was outside of the marked aim area for these plates, we concluded that testers were not aiming for 1.5 inches. For the remaining 30 designs tested that we observed and measured, testers used a range that included 1.5 inches from the edge (for example, aiming for 1 to 1.5 inches). It is not clear what, if any, effect this deviation had on the overall test results. While no design failed Preliminary Design Model testing due to the results of this subtest, there is no way to determine if a passing design would have instead failed if the testing protocol had been correctly followed. However, all designs that passed this testing were later subject to First Article Testing, where these tests were repeated in full using the correct shot locations. Of potentially greater consequence to the final test results is our observation of deviations from testing protocols regarding the clay calibration tests. According to testing protocols, the calibration of the clay backing material was supposed to be accomplished through a series of pre-test drops. The depths of the pre-test drops should have been between 22 and 28 millimeters. Aberdeen Test Center officials told us that during Preliminary Design Model testing they did not follow a consistent system to determine if the clay was conditioned correctly. According to Aberdeen Test Center officials, in cases in which pre-test drops were outside the 22- to 28-millimeter range, testers would sometimes repeat one or all of the drops until the results were within range—thus resulting in the use of clay backing materials that should have been deemed unacceptable for use. These inconsistencies occurred because Army testers in each test lane made their own, sometimes incorrect, interpretation of the testing protocols. Members of the Integrated Product Team expressed concerns about these inconsistencies after they found out how calibrations were being conducted. 
In our conversations with Army and private body armor testing officials, consistent treatment and testing of clay was identified as critical to ensure consistent, accurate testing. According to those officials, if the clay is not conditioned correctly it will impact the test results. Given that clay was used during Preliminary Design Model testing that failed the clay calibration tests, it is possible that some shots may have been taken under test conditions different than those stated in the testing protocols, potentially impacting test results. Figure 3 shows an Army tester calibrating the clay with pre-test drops. The most consequential of the deviations from testing protocols we observed involved the measurement of back-face deformation, which did affect final test results. According to testing protocol, back-face deformation is to be measured at the deepest point of the depression in the clay backing. This measure indicates the most force that the armor will allow to be exerted on an individual struck by a bullet. According to Army officials, the deeper the back-face deformation measured in the clay backing, the higher the risk of internal injury or death. During approximately the first one-third of testing, however, Army testers incorrectly measured deformation at the point of aim, rather than at the deepest point of depression. This is significant because, in many instances, measuring back-face deformation at the point of aim results in measuring at a point upon which less ballistic force is exerted, resulting in lower back-face deformation measurements and overestimating the effectiveness of the armor. The Army's subject matter experts on the Integrated Product Team were not on the test lanes during testing and thus not made aware of the error until approximately one-third of the testing had been completed.
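The difference between the two measurement conventions can be shown with a small sketch. The depth grid and aim point below are hypothetical, not actual test data; they simply illustrate why a point-of-aim reading can understate deformation.

```python
# Sketch of the two back-face deformation conventions described above.
# The depression depths (in mm) are hypothetical.
def deepest_depth(depth_mm):
    """Deepest point of the depression: the protocol's required measure."""
    return max(max(row) for row in depth_mm)

def point_of_aim_depth(depth_mm, aim_row, aim_col):
    """Depth directly under the aim point: the measure used in error."""
    return depth_mm[aim_row][aim_col]

# Hypothetical clay depression where the deepest point lies beside the
# aim point, as can happen when the bullet's force spreads asymmetrically.
depression = [
    [31.0, 34.0, 33.0],
    [33.0, 38.0, 44.0],  # deepest point (44 mm) is next to the aim point
    [32.0, 36.0, 35.0],
]
aim = point_of_aim_depth(depression, 1, 1)  # 38.0 mm reading
deep = deepest_depth(depression)            # 44.0 mm reading
```

In this hypothetical, the point-of-aim reading (38 mm) would sit comfortably under a 43 mm limit while the deepest-point reading (44 mm) would exceed it, mirroring the up-to-10-millimeter gaps observed during testing.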
When members of the Integrated Product Team overseeing the testing were made aware of this error, the Integrated Product Team decided to begin measuring at the deepest point of depression. When senior Army leadership was made aware of this error, testing was halted for 2 weeks while Army leadership considered the situation. Army leadership developed many courses of action, including restarting the entire Preliminary Design Model testing with new armor plate submissions, but ultimately decided to continue measuring and scoring officially at the point of aim, since this would not disadvantage any vendors. The Army then changed the test plans and modified the contract solicitation to call for measuring at the point of aim. The Army also decided to collect deepest point of depression measurements for all shots from that point forward, but only as a government reference. During the second two-thirds of testing, we observed significant differences between the measurements taken at the point of aim and those taken at the deepest point, as much as a 10-millimeter difference between measurements. As a result, at least two of the eight designs that passed Preliminary Design Model testing and were awarded contracts would have failed if the deepest point of depression measurement had been used. Figures 4 and 5 illustrate the difference between the point of aim and the deepest point. Before Preliminary Design Model testing began at Aberdeen Test Center, officials told us that Preliminary Design Model testing was specifically designed to meet all the requirements of First Article Testing. However, Preliminary Design Model testing failed to meet its goal of determining which designs met requirements, because of the deviations from established testing protocols described earlier in this report. 
Those deviations were not reviewed or approved by officials from PEO Soldier, the office of the Director of Operational Test and Evaluation, or the Integrated Product Team charged with overseeing the test. PEO Soldier officials told us that PEO Soldier had no on-site presence during this testing because of a deliberate decision by PEO Soldier management to be as removed from the testing process as possible in order to maximize the independence of the Aberdeen Test Center. PEO Soldier officials told us that it was important to demonstrate the independence of the Aberdeen Test Center to quash allegations of bias made by a vendor whose design had failed prior testing, and that this choice may have contributed to some of the deviations not being identified by the Army earlier during testing. After the conclusion of Preliminary Design Model testing, PEO Soldier officials told us that they should have been more involved in the testing and that they would be more involved in future testing. After the completion of Preliminary Design Model testing, the Commanding General of PEO Soldier said that, as the Milestone Decision Authority for the program, he elected to repeat the testing conducted during Preliminary Design Model testing through First Article Testing before any body armor was fielded based on the solicitation. According to PEO Soldier officials, at the beginning of Preliminary Design Model testing there was no intention or plan to conduct First Article Testing following contract awards, given that the Preliminary Design Model testing was to follow the First Article Testing protocol. However, because back-face deformation was not measured to the deepest point, PEO Soldier and Army Test and Evaluation Command acknowledged that there was no longer an option of forgoing First Article Testing.
PEO Soldier also expressed concerns that Aberdeen Test Center test facilities have not yet demonstrated that they are able to test to the same level as NIJ-certified facilities. However, officials from Army Test and Evaluation Command and DOD's office of the Director of Operational Test and Evaluation asserted that Aberdeen Test Center was just as capable as NIJ-certified laboratories, and Army leadership eventually decided that First Article Testing would be performed at Aberdeen. PEO Soldier maintained an on-site presence in the test lanes, and the Army technical experts on the Integrated Product Team charged with testing oversight resolved the following problems during First Article Testing: The Army adjusted its testing protocols to clarify the required shot location for the impact test, and Army testers correctly placed these shots as required by the protocols. After the first few days of First Article Testing, Army testers began to increase the velocity after every shot defeated by the armor, as the testing protocols for V50 testing require. As required by the testing protocols, Army testers conducted the ease-of-insertion tests for both the front and rear pockets of the Outer Tactical Vest, ensuring that the protective plates would properly fit in both pockets. The Army began to address the problems identified during Preliminary Design Model testing with the clay calibration tests and back-face deformation measurements. Army testers said they developed an informal set of procedures to determine when to repeat failed clay calibration tests. The procedures, which were not documented, called for repeating the entire series of clay calibration drops if one of the calibration drops showed a failure. If the clay passes either the first or second test, the clay is to be used in testing. If the clay fails both the first and the second series of drops, the clay is to then be placed back in conditioning and testers get a new block of clay.
With respect to back-face deformation measurements, Army testers measured back-face deformation at the deepest point, rather than at the point of aim. Although the Army began to address problems relating to the clay calibration tests and back-face deformation measurements, Army testers still did not follow all established testing protocols in these areas. As a result, the Army may not have achieved the objective of First Article Testing—to determine if the designs tested met the minimum requirements for ballistic protection. First, the orally agreed-upon procedures used by Army testers to conduct the clay calibration tests were inconsistent with the established testing protocols. Second, with respect to back-face deformation measurements, Army testers rounded back-face deformation measurements to the nearest millimeter, a practice that was neither articulated in the testing protocols nor consistent with Preliminary Design Model testing. Third, also with respect to back-face deformation measurements, Army testers introduced a new, unproven measuring device. Although Army testers told us that they had orally agreed upon an informal set of procedures to determine when to repeat failed clay calibration tests, those procedures are inconsistent with the established testing protocols. The Army deviated from established testing protocols by using clay that had failed the calibration test as prescribed by the testing protocols. The testing protocols specify that a series of three pre-test drops of a weight on the clay must be within specified tolerances before the clay is used. However, in several instances, the Army repeated the calibration test on the same block of clay after it had initially failed until the results of a subsequent series of three drops were within the required specifications. 
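The calibration check and the observed deviation can be sketched as follows. The 22- to 28-millimeter acceptance window comes from the testing protocols described earlier in this report; the drop depths themselves are invented for illustration.

```python
# Sketch of the clay calibration acceptance logic described above.
# The 22-28 mm window is from the testing protocols; drop values are
# hypothetical.
LOW_MM, HIGH_MM = 22.0, 28.0

def calibration_passes(drops_mm):
    """Protocol check: all three pre-test drop depths must be in range."""
    return len(drops_mm) == 3 and all(LOW_MM <= d <= HIGH_MM for d in drops_mm)

# Per the protocols, a block whose series fails should not be used.
# Repeating the series on the same block until it passes, as observed,
# deviates from that rule.
first_series = [23.5, 21.0, 25.0]   # 21.0 mm is out of tolerance -> fail
second_series = [24.0, 23.0, 26.0]  # in tolerance, but on already-failed clay

ok_to_use_per_protocol = calibration_passes(first_series)
```

Under the written protocols the block above would be rejected after the first series; the informal re-test practice would instead accept it on the strength of the second series.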
Army officials told us that the testing protocols do not specify what procedures should be performed when the clay does not pass the first series of calibration drops, so they developed their own procedure internally prior to First Article Testing and provided oral guidance on it to all test operators to ensure a consistent process. Officials we spoke with from the Army, private NIJ-certified laboratories, and industry had mixed opinions regarding the practice of re-testing failed clay, with some expressing concerns that performing a second series of calibration drops on clay that had failed might introduce risk that the clay may not be at the proper consistency for testing, because as the clay rests it cools unevenly, which could affect the calibration. Aberdeen Test Center's Test Operating Procedure states that clay should be conditioned so that the clay passes the clay calibration test, and Army officials, body armor testers from private laboratories, and body armor manufacturers we spoke to agreed that when clay fails the calibration test, this requires re-evaluation and sometimes adjustment of the clay calibration procedures used. After several clay blocks failed the clay calibration test on November 13, 2008, Army testers recognized that the clay conditioning process used was yielding clay that was not ideal and, as a result, adjusted their clay conditioning process by lowering the temperature at which the clay was stored. On that same day of testing, November 13, 2008, we observed heavy, cold rain falling on the clay blocks that were being transported to test lanes. These clay blocks had been conditioned that day in ovens located outside of the test ranges at temperatures above 100 degrees Fahrenheit to prepare them for testing, and then were transported outside uncovered on a cold November day through heavy rain on the way to the temperature- and humidity-controlled test lane.
We observed an abnormally high level of clay blocks failing the clay calibration test and a significantly higher-than- normal level of failure rates for the plates tested on that day. The only significant variation in the test environment we observed that day was constant heavy rain throughout the day. Our analysis of test data also showed that 44 percent (4 of 9) of the first shots and 89 percent (8 of 9) of the second shots taken on November 13, 2008, resulted in failure penalties. On all of the other days of testing only 14 percent (10 of 74) of the first shots and 42 percent (31 of 74) of the second shots resulted in failure penalties. Both of these differences are statistically significant, and we believe the differences in the results may be attributable to the different test condition on that day. The established testing protocols require the use of a specific type of non-hardening oil-based clay. Body armor testers from NIJ-certified private laboratories, Army officials experienced in the testing of body armor, body armor manufacturers, and the clay manufacturer we spoke with said that the clay used for testing is a type of sculpting clay that naturally softens when heat is added and that getting water on the clay backing material could cause a chemical bonding change on the clay surface. Those we spoke with further stated that the cold water could additionally cause the outside of the clay to cool significantly more rapidly than the inside causing the top layer of clay to be harder than the middle. They suggested that clay be conditioned inside the test lanes and said that clay exposed to water or extreme temperature changes should not be used. Army Test and Evaluation Command officials we spoke with said that there is no prohibition in the testing protocols on allowing rain to fall onto the clay backing material and that its exposure to water would not impact testing. 
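The day-effect comparison reported above (shot failures on November 13, 2008, versus all other test days) can be checked with an exact tail probability on the reported counts. The counts come from the text; the choice of a one-sided hypergeometric (Fisher-style) test is our assumption, not necessarily the method used in the underlying analysis.

```python
# Exact one-sided check of the November 13 day effect, using the shot
# counts reported in the text. Test choice is an illustrative assumption.
from math import comb

def upper_tail_p(pop, pop_fail, drawn, observed_fail):
    """P(at least `observed_fail` failures among `drawn` shots) if the
    day's shots were an ordinary sample of all `pop` shots."""
    total = comb(pop, drawn)
    return sum(
        comb(pop_fail, k) * comb(pop - pop_fail, drawn - k)
        for k in range(observed_fail, min(drawn, pop_fail) + 1)
    ) / total

# First shots: 4 of 9 failed on Nov. 13 vs. 10 of 74 on all other days.
p_first = upper_tail_p(pop=83, pop_fail=14, drawn=9, observed_fail=4)
# Second shots: 8 of 9 failed on Nov. 13 vs. 31 of 74 on all other days.
p_second = upper_tail_p(pop=83, pop_fail=39, drawn=9, observed_fail=8)
```

Both tail probabilities fall below the conventional 0.05 threshold, consistent with the report's statement that the differences are statistically significant.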
However, these officials were unable to provide data to validate their assertion that exposure to water would not affect the clay used during testing or the testing results. Army test officials also said that, since the conclusion of First Article Testing, Aberdeen Test Center has procured ovens to allow clay to be stored inside test lanes, rather than requiring that the clay be transported from another room where it would be exposed to environmental conditions, such as rain. With respect to the issue of the rounding of back-face deformation measurements, during First Article Testing Army testers did not award penalty points for shots with back-face deformations between 43.0 and 43.5 millimeters. This was because the Army decided to round back-face deformation measurements to the nearest millimeter—a practice that is inconsistent with the Army’s established testing protocols, which require that back-face deformation measurements in the clay backing not exceed 43 millimeters and that is inconsistent with procedures followed during Preliminary Design Model testing. Army officials said that a decision to round the measurements for First Article Testing was made to reflect testing for past Army contract solicitations and common industry practices of recording measurements to the nearest millimeter. While we did not validate this assertion that rounding was a common industry practice, one private industry ballistics testing facility said that its practice was to always round results up, not down, which has the same effect as not rounding at all. Army officials further stated that they should have also rounded Preliminary Design Model results but did not realize this until March 2008—several weeks into Preliminary Design Model testing—and wanted to maintain consistency throughout Preliminary Design Model testing. 
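The interaction between rounding and the 43-millimeter limit can be shown directly. The limit is from the testing protocols described in the text; the sample measurement is hypothetical.

```python
# Sketch of how rounding to the nearest millimeter interacts with the
# 43 mm back-face deformation limit. The sample measurement is made up.
LIMIT_MM = 43.0

def exceeds_limit(bfd_mm, rounded=False):
    """True if the shot should draw a penalty for excessive deformation."""
    value = round(bfd_mm) if rounded else bfd_mm
    return value > LIMIT_MM

bfd = 43.4  # hypothetical measurement between 43.0 and 43.5 mm
penalty_unrounded = exceeds_limit(bfd)              # over the 43 mm limit
penalty_rounded = exceeds_limit(bfd, rounded=True)  # rounds down to 43
```

Any measurement in the 43.0-43.5 millimeter band escapes a penalty once rounded, which is the window in which the two designs described below passed rather than failed.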
The Army’s decision to round measurement results had a significant outcome on testing because two designs that passed First Article Testing would have instead failed if the measurements had not been rounded. With respect to the introduction of a new device to measure back-face deformation, the Army began to use a laser scanner to measure back-face deformation without adequately certifying that the scanner could measure against the standard established when the digital caliper was used as the measuring instrument. Although Army Test and Evaluation Command certified the laser scanner as accurate for measuring back-face deformation, we observed the following certification issues: The laser was certified based on testing done in a controlled laboratory environment that is not similar to the actual conditions on the test lanes. For example, according to the manufacturer of the laser scanner, the scanner is operable in areas of vibration provided the area scanned and the scanning-arm are on the same plane or surface. This was not the case during testing, and thus it is possible the impact of the bullets fired may have thrown the scanner out of alignment or calibration. The certification is to a lower level of accuracy than required by the testing protocols. The certification study says that the laser is accurate to 0.2 millimeters; however, the testing protocols require an accuracy of 0.1 millimeters or better. Furthermore, the official letter from the Army Test and Evaluation Command certifying the laser for use incorrectly stated the laser meets an accuracy requirement of 1.0 millimeter rather than 0.1 millimeters as required by the protocols. Officials confirmed that this was not a typographical error. The laser certification was conducted before at least three major software upgrades were made to the laser, which according to Army officials may have significantly changed the accuracy of the laser. 
Because of the incorporation of the software upgrades, Army testers told us that they do not know the accuracy level of the laser as it was actually used in First Article Testing. In evaluating the use of the laser scanner, the Army did not compare the actual back-face deformation measurements taken by the laser with those taken by digital caliper, previously used during Preliminary Design Model testing and by NIJ-certified laboratories. According to vendor officials and Army subject matter experts, the limited data they had previously collected have shown that back-face deformation measurements taken by laser have generally been deeper by about 2 millimeters than those taken by digital caliper. Given those preliminary findings, there is a significant risk that measurements taken by the laser may represent a significant change in test requirements. Although Army testing officials acknowledged that they were unable to estimate the exact accuracy of the laser scanner as it was actually used during testing, they believed that based on the results of the certification study, it was suitable for measuring back-face deformation. These test officials further stated that they initially decided to use the laser because they did not believe it was possible to measure back-face deformations to the required level of accuracy using the digital caliper. However, officials from PEO Soldier and private NIJ-certified laboratories have told us that they believe the digital caliper method is capable of making these measurements with the required level of accuracy and have been using this technique successfully for several years. PEO Soldier officials also noted that the back-face deformation measurements in the testing protocols were developed using this digital caliper method. Army testing officials noted that the laser certification study confirmed their views that the laser method was more accurate than the digital caliper. 
However, because of the problems with the study that we have noted in this report, it is still unclear whether the laser is the most appropriate and accurate technique for measuring back-face deformation. Although we did not observe problems in the Army's determination of penetration results during Preliminary Design Model testing, during First Article Testing we observed that the Army did not consistently follow its testing protocols in determining whether a shot was a partial or a complete penetration. Army testing protocols require that penalty points be awarded when any fragment of the armor material is embedded in or passes into the soft undergarment used behind the plate; however, the Army did not score the penetration of small debris through a plate as a complete penetration of the plate in at least one case that we observed. In this instance, we observed small fragments from the armor three layers deep inside the Kevlar backing behind the plate. This shot should have resulted in the armor's receiving 1.5 penalty points, which would have caused the design to fail First Article Testing. Army officials said that testers counted the shot as only a partial penetration of the plate because it was determined that fibers of the Kevlar backing placed behind the plate were not broken, which they stated was a requirement for the shot to be counted as a complete penetration of the plate. This determination was made with the agreement of an Army subject-matter expert from PEO Soldier present on the lane. However, the requirement for broken fibers is inconsistent with the written testing protocols. Army officials acknowledged that the requirement for broken fibers was not described in the testing protocols or otherwise documented but said that Army testers discussed this before First Article Testing began. Figure 6 shows the tear in the fibers of the rear of the plate in question.
Federal internal control standards require that federal agencies maintain effective controls over information processing to help ensure completeness, accuracy, authorization, and validity of all transactions. However, the Army did not consistently maintain adequate internal controls to ensure the integrity and reliability of its test data. For example, in one case bullet velocity data were lost because the lane Test Director accidentally pressed the delete button on the keyboard, requiring a test to be repeated. Additionally, we noticed that the software being used with the laser scanner to calculate back-face deformation measurements lacked effective edit controls, which could potentially allow critical variables to be inappropriately modified during testing. We further observed a few cases in which testers attempted to memorize test data for periods of time, rather than writing that data down immediately. In at least one case, this practice resulted in the wrong data being reported and entered into the test records. According to Army officials, decisions to implement those procedures that deviated from testing protocols were reviewed and approved by appropriate officials. However, these decisions were not formally documented, the testing protocols were not modified to reflect the changes, and vendors were not informed of the procedures. At the beginning of testing, the Director of Testing said that any change to the testing protocols has to be approved by several Army components; however, the Army was unable to produce any written documentation indicating approval of the deviations we observed by those components. With respect to internal control issues, Army officials acknowledged that before our review they were unaware of the specific internal control problems we identified. 
We noted during our review that in industry, as part of the NIJ certification process, an external peer review process is used to evaluate testing processes and procedures of ballistics testing facilities to ensure that effective internal controls are in place. However, we found that the Aberdeen Test Center has conducted no such reviews, a contributing factor to the Army's lack of awareness of the control problems we noted. As a result of the deviations from testing protocols that we observed, three of the five designs that passed First Article Testing would not have passed under the existing testing protocols. Furthermore, one of the remaining two designs that passed First Article Testing was a design that would have failed Preliminary Design Model testing if back-face deformation had been measured in accordance with the established protocols for that test. Thus, four of the five designs that passed First Article Testing and were certified by the Army as ready for full production would have instead failed testing at some point during the process, either during the initial Preliminary Design Model testing or the subsequent First Article Testing, if all the established testing protocols had been followed. As a result, the overall reliability and repeatability of the test results are uncertain. However, because ballistics experts from the Army or elsewhere have not assessed the impact of the deviations from the testing protocols we observed during First Article Testing, it is not certain whether the effect of these deviations is sufficient to call into question the ability of the armor to meet mission requirements. Although it is certain that some armor passed testing that would not have if specific testing protocols had been followed, it is unclear if there are additional factors that would mean the armor still meets the required performance specifications.
For example, the fact that the laser scanner used to measure back-face deformation may not be as accurate as what the protocol requires may offset the effects of rounding down back-face deformations. Likewise, it is possible that some of the deviations that did not on their own have a visible effect on testing results could, when taken together with other deviations, have a combined effect that is greater. In our opinion, given the significant deviations from the testing protocols, independent ballistics testing expertise would be required to determine whether or not the body armor designs procured under this solicitation provide the required level of protection. The Army has ordered 2,500 sets of plates (at two plates per set) from those vendors whose designs passed First Article Testing to be used for additional ballistics testing and 120,000 sets of plates to be put into inventory to address future requirements. However, to date, none of these designs have been fielded because, according to Army officials, there are adequate quantities of armor plates produced under prior contracts already in the inventory to meet current requirements. Body armor plays a critical role in protecting our troops, and the testing inconsistencies we identified call into question the quality and effectiveness of testing performed at Aberdeen Test Center. Because we observed several instances in which actual test practices deviated from the established testing protocols, it is questionable whether the Army met its First Article Testing objectives of ensuring that armor designs fully met the Army's requirements before the armor is purchased and used in the field. While it is possible that the testing protocol deviations had no significant net effect or may have even resulted in armor being tested to a more rigorous standard, it is also possible that some deviations may have resulted in armor being evaluated against a less stringent standard than required.
We were unable to determine the full effects of these deviations as they relate to the quality of the armor designs and believe such a determination should only be made based on a thorough assessment of the testing data by independent ballistics testing experts. In light of such uncertainty and the critical need for confidence in the equipment by the soldiers, the Army would take an unacceptable risk if it were to field these designs without taking additional steps to gain the needed confidence that the armor will perform as required. The Army is now moving forward with plans to conduct all future body armor testing at Aberdeen Test Center. Therefore, it is essential that the transparency and consistency of its program be improved by ensuring that all test practices fully align with established testing protocols and that any modifications in test procedures be fully reviewed and approved by the appropriate officials, with supporting documentation, and that the testing protocols be formally changed to reflect the revised or actual procedures. Additionally, it is imperative that all instrumentation, such as the laser scanner, used for testing be fully evaluated and certified to ensure its accuracy and applicability to body armor testing. Furthermore, it is essential that effective internal controls over data and testing processes be in place. The body armor industry has adopted the practice, through the NIJ certification program, of using external peer reviews to evaluate and improve private laboratories’ test procedures and controls. This type of independent peer review could be equally beneficial to the Aberdeen Test Center. Without all of these steps, there will continue to be uncertainty with regard to whether future testing data are repeatable and reliable and can be used to accurately evaluate body armor designs. 
Until Aberdeen Test Center has effectively honed its testing practices to eliminate the types of inconsistencies we observed, concerns will remain regarding the rigor of testing conducted at that facility. To determine what effect, if any, the problems we observed had on the test data and on the outcomes of First Article Testing, we recommend the Secretary of Defense direct the Secretary of the Army to provide for an independent evaluation of the First Article Testing results by ballistics and statistical experts external to DOD before any armor is fielded to soldiers under this contract solicitation and that the Army report the results of that assessment to the office of the Director of Operational Test and Evaluation and the Congress. In performing this evaluation, the independent experts should specifically evaluate the effects of the following practices observed during First Article Testing:
- the rounding of back-face deformation measurements;
- not scoring penetrations of material through the plate as a complete penetration unless broken fibers are observed in the Kevlar backing behind each plate;
- the use of the laser scanner to measure back-face deformations without a full evaluation of its accuracy as it was actually used during testing, to include the use of the software modifications and operation under actual test conditions;
- the exposure of the clay backing material to rain and other outside environmental conditions as well as the effect of high oven temperatures during storage and conditioning; and
- the use of an additional series of clay calibration drops when the first series of clay calibration drops does not pass required specifications.
To better align actual test practices with established testing protocols during future body armor testing, we recommend that the Secretary of Defense direct the Secretary of the Army to document all key decisions made to clarify or change the testing protocols.
With respect to the specific inconsistencies we identified between the test practices and testing protocols, we recommend that the Secretary of the Army, based on the results of the independent expert review of the First Article Test results, take the following actions:
- Determine whether those practices that deviated from established testing protocols during First Article Testing will be continued during future testing and change the established testing protocols to reflect those revised practices.
- Evaluate and re-certify the accuracy of the laser scanner to the correct standard with all software modifications incorporated and include in this analysis a side-by-side comparison of the laser measurements of the actual back-face deformations with those taken by digital caliper to determine whether laser measurements can meet the standard of the testing protocols.
To improve internal controls over the integrity and reliability of test data for future testing as well as provide for consistent test conditions and comparable data between tests, we recommend that the Secretary of Defense direct the Secretary of the Army to provide for an independent peer review of Aberdeen Test Center's body armor testing protocols, facilities, and instrumentation to ensure that proper internal controls and sound management practices are in place. This peer review should be performed by testing experts external to the Army and DOD. DOD did not concur with our recommendation for an independent evaluation of First Article Testing results and accordingly plans to take no action to provide such an assessment. DOD asserted that the issues we identified do not alter the effects of testing. However, based on our analysis and findings there is sufficient evidence to raise questions as to whether the issues we identified had an impact on testing results.
As a result, we continue to believe it is necessary to have an independent external expert review these test results and the overall effect of the testing deviations we observed on those results before any armor is fielded to military personnel. Without such an independent review, the First Article Test results remain questionable, undermining the confidence of the public and those who might rely on the armor for protection. Consequently, Congress should consider directing the Office of the Secretary of Defense either to require that an independent external review of these body armor test results be conducted or to officially amend its testing protocols to reflect any revised test procedures and repeat First Article Testing to ensure that only properly tested designs are fielded. In written comments on a draft of this report, DOD took the position that our findings had no significant impact on the test results and on the subsequent contracting actions taken by the Army. DOD also did not concur with what it perceived as our two overarching conclusions: (1) that Preliminary Design Model testing did not achieve its intended objective of determining, as a basis for contract awards, which designs met performance requirements and (2) that First Article Testing may not have met its objective of determining whether each of the contracted plate designs met performance requirements. DOD commented that it recognizes the importance of personal protection equipment such as body armor and provided several examples of actions DOD and the Army have taken to improve body armor testing. DOD generally concurred with our findings that there were deviations from the testing protocols during Preliminary Design Model testing and First Article Testing. We agree that DOD has taken positive steps to improve its body armor testing program and to address concerns raised by Congress and others.
DOD also concurred with our second recommendation to document all key decisions made to clarify or change the testing protocols. DOD did not concur with our first recommendation that an independent evaluation of First Article Testing results be performed by independent ballistics and statistical experts before any of the armor is fielded to soldiers under contracts awarded under this solicitation. Similarly, DOD did not agree with our conclusions that Preliminary Design Model testing did not meet its intended objectives and that First Article Testing may not have met its intended objectives. In supporting its position, DOD cited, for example, that rounding back-face deformation measurements during First Article Testing was an acceptable test practice because rounding is a practice that has been used historically. It was the intent of PEO Soldier to round back-face deformations for all testing associated with this solicitation, and the Integrated Product Team decided collectively to round back-face deformations during First Article Testing. However, as stated in our report and acknowledged by DOD, the rounding down of back-face deformations was not spelled out or provided for by any of the testing protocol documents. Additionally, it created an inconsistency between Preliminary Design Model testing, where back-face deformations were not rounded down, and First Article Testing, where back-face deformations were rounded down. Of greatest consequence, rounding down back-face deformations lowered the requirements that solutions had to meet to pass testing. Two solutions passed First Article Testing because back-face deformations were rounded down, meaning that the Army may be taking unacceptable risk if plates are fielded without an additional, independent assessment by experts. DOD also did not agree with our finding that a penetration of a plate was improperly scored.
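The practical effect of rounding down back-face deformations described above can be sketched in a few lines of Python. The 43 mm deformation limit and the 43.7 mm measurement are hypothetical values chosen for illustration only; they are not figures from the Army's testing protocols or test data.

```python
import math

# Hypothetical illustration of how rounding down back-face deformation
# (BFD) measurements relaxes the pass/fail threshold. The 43 mm limit
# is an assumed value for illustration only.
BFD_LIMIT_MM = 43

def passes_unrounded(bfd_mm):
    # Protocol as written: compare the measured value directly.
    return bfd_mm <= BFD_LIMIT_MM

def passes_rounded_down(bfd_mm):
    # Practice observed during First Article Testing: drop the
    # fractional millimeters before comparing to the limit.
    return math.floor(bfd_mm) <= BFD_LIMIT_MM

measurement = 43.7  # hypothetical shot result
print(passes_unrounded(measurement))     # False: the shot exceeds the limit
print(passes_rounded_down(measurement))  # True: truncation converts a fail into a pass
```

As the sketch shows, truncating the fractional millimeters before the comparison converts a failing measurement into a passing one, which is why rounding down effectively lowers the requirement.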
DOD did agree that figure 6, which shows the tear in the Kevlar fibers of the rear of the plate in question, appears to show evidence of a perforation and that an Aberdeen Test Center ballistics subject matter expert found particles in the soft backing material behind the plate. Nevertheless, DOD did not concur with our finding because it asserted that no threads were broken on the first layer of Kevlar. However, as we stated in the report, the protocols define a complete penetration as having occurred when the projectile, fragment of the projectile, or fragment of the armor material is imbedded or passes into the soft undergarment used behind the protective insert plates, not when threads of the Kevlar are broken. The fragments found by the Aberdeen Test Center subject matter expert, as well as the three frayed, tattered, and separated Kevlar layers that we and Army testers observed, confirm our observations during testing. DOD also stated that the first layer of soft armor behind the plate under test serves as a witness plate during testing and that if that first layer of soft armor is not penetrated, as determined by the breaking of threads on that first layer of soft armor, the test shot is not scored as a complete penetration in accordance with PEO Soldier's scoring criteria. We disagree with DOD's position because the protocols do not require the use of a "witness plate" during testing to determine if a penetration occurred. If this shot had been ruled a complete penetration rather than a partial penetration, this design would have accrued additional point deductions, causing it to fail First Article Testing. DOD did not agree that the certification of the laser scanner was inadequate and made several statements in defense of both the laser and its certification.
Among these is the fact that the laser removes the human factor of subjectively trying to find the deepest point, eliminates the risk of pushing the caliper into the clay, and removes the need to use correction factors, all of which we agree may be positive things. However, we maintain that the certification of the laser was not adequately performed. As indicated in the certification letter, the laser was certified to a standard that did not meet the requirement of the testing protocols. Additionally, DOD stated that software modifications added to the laser after certification did not affect measurements; however, Army testers told us on multiple occasions that the modifications were designed to change the measurements reported by the laser. DOD added that the scanner does not artificially overstate back-face deformations and relies on the verified accuracy of the scanner and the study involving the scanning of clay replicas to support its claim. Based on our observations, the scanner was certified to the wrong standard and the certification study was not performed in the actual test environment using actual shots. DOD asserted that the scanner does not overstate back-face deformations and that it does not establish a new requirement. However, DOD cannot properly validate these assertions without a side-by-side comparison of the laser scanner and the digital caliper in their operational environment. Given the numerous issues regarding the laser and its certification, we maintain that its effect on First Article Testing should be examined by an external ballistics expert. DOD also stated that it did not agree with our finding that exposure of the clay backing to heavy rain on one day may have affected test results. DOD challenged our statistical analysis and offered its own statistical analysis as evidence that it was the poor designs themselves that caused unusual test results that day.
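A small-sample comparison of the kind at issue in the rain-day analysis can be made with Fisher's exact test, one of the methods we used (described in the scope and methodology discussion later in this report). A minimal sketch follows; the 2x2 counts are hypothetical and are not the Army's or our actual test data.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Returns the probability, with the table margins fixed, of observing
    a count in the top-left cell at least as large as a (the upper tail
    of the hypergeometric distribution).
    """
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# Hypothetical counts (NOT actual test data): failures vs. passes on the
# rain-exposed day compared with the other test days.
p = fisher_exact_one_sided(4, 2,   # rain day: 4 failures, 2 passes
                           2, 18)  # other days: 2 failures, 18 passes
print(round(p, 4))  # a small p-value suggests the difference is unlikely to be chance
```

Unlike a chi-square test, which relies on a large-sample approximation, the exact test enumerates the hypergeometric tail directly, which is why it is suited to the small shot counts involved in a single day of testing.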
We stand by our analysis, in combination with statements made by DOD and non-DOD officials with testing expertise and by the clay manufacturer, that exposure of the clay to constant, heavy cold rain may have had an effect on test results. Further, in analyzing the Army's statistical analysis presented in DOD's comments, we did not find this information to demonstrate that the designs were the factor in unusual test results that day or that the rain exposure could not have had an effect on the results. More detailed discussions of the Army's analysis and our conclusions are provided in comments 13 and 24 of appendix II. DOD partially disagreed with our finding that the use of an additional series of clay calibration drops, when the first series of drops was outside specifications, did not meet First Article Test requirements, and added that all clay used in testing passed the clay calibration in effect at the time. However, we witnessed several clay calibration drops that were not within specifications. These failed clay boxes were repaired, re-dropped, and either used if they passed the subsequent drop calibration series or discarded if they failed. The protocols only allow for one series of drops per clay box, which is the methodology that Army testers should have followed. DOD stated that NIJ standards do permit the repeating of failed calibration drops. However, our review of the NIJ standards reveals that there is no provision that allows repeat calibration drops. DOD also stated in its comments that NIJ standards are inappropriate for its test facilities because they are insufficient for the U.S. Army, given the expanded testing required to ensure body armor meets U.S. Army requirements. NIJ standards were not the subject of our review, but rather Aberdeen Test Center's application of the Army's current solicitation's protocols during testing.
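The difference between the protocols' single-series calibration rule and the practice we observed can be sketched as follows; the drop-depth tolerance and the sample depths are hypothetical values for illustration only, not the specifications in the testing protocols.

```python
# Sketch of the calibration rule at issue. The depth tolerance and the
# sample drop depths are hypothetical; the protocols' actual
# specifications are not reproduced here.
SPEC_MIN_MM, SPEC_MAX_MM = 22.0, 28.0

def series_in_spec(depths_mm):
    # A series passes only if every drop depth falls within tolerance.
    return all(SPEC_MIN_MM <= d <= SPEC_MAX_MM for d in depths_mm)

def accept_per_protocol(first_series):
    # Protocol: one series per clay box; a failing box is discarded.
    return series_in_spec(first_series)

def accept_as_practiced(first_series, repeat_series):
    # Observed practice: a failing box could be repaired and re-dropped.
    return series_in_spec(first_series) or series_in_spec(repeat_series)

first = [21.0, 25.0, 26.0]   # fails: one drop below tolerance
repeat = [24.0, 25.0, 26.0]  # passes after repair
print(accept_per_protocol(first))          # False
print(accept_as_practiced(first, repeat))  # True
```

The sketch illustrates why the repeated series matters: a clay box that the protocols would discard can instead re-enter testing after repair.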
Further, DOD acknowledged in its comments that National Institute of Standards and Technology officials recommended only one series of drops for clay calibration. However, DOD stated that it will partner with the National Institute of Standards and Technology to study procedures for clay calibration, to include repeated calibration attempts, and document any appropriate procedural changes, which we agree is a good step. Based on our analyses as described in our report and in our above responses to DOD's comments, we believe there is sufficient evidence to raise questions as to whether the issues we identified had an impact on testing results. As a result, we continue to believe that it is necessary that DOD allow an independent external expert to review these test results and the overall effect of DOD's deviations on those results before any armor is fielded to military personnel. Without such an independent review, it is our opinion that the First Article Testing results will remain questionable. Consequently, we have added a matter for congressional consideration to our report suggesting that Congress consider either directing DOD to require that an independent external review of these body armor test results be conducted or requiring DOD to officially amend its testing protocols to reflect any revised test procedures and repeat First Article Testing to ensure that only properly tested designs are fielded. DOD partially concurred with our third recommendation to determine whether those procedures that deviated from established testing protocols during First Article Testing should be continued during future testing and to change the established testing protocols to reflect those revised procedures. DOD recognized the need to update testing protocols and added that when the office of the Director of Operational Test and Evaluation promulgates standard testing protocols across DOD, these standards will address issues that we identified.
As long as DOD specifically addresses all the inconsistencies and deviations that we observed prior to any future body armor testing, this would satisfy our recommendation. DOD stated that it partially concurs with our fourth recommendation to evaluate and recertify the accuracy of the laser scanner to the correct standard with all software modifications incorporated, based on the results of the independent expert review of the First Article Testing results. We also recommended that this process include a side-by-side comparison of the laser's measurement of back-face deformations and those taken by digital caliper. DOD concurred with the concept of an independent evaluation, but it did not concur that one is needed in this situation because according to DOD its laser certification was sufficient. We disagree that the laser certification was performed correctly. As discussed in the body of our report and further in appendix II, recertification of the laser is critical because (1) the laser was certified to the wrong standard, (2) software modifications were added after the certification of the laser, and (3) these modifications did change the way the laser scanner measured back-face deformations. DOD did not explicitly state whether it concurred with our recommendation for a side-by-side comparison of the laser scanner and the digital caliper in their operational environment. We assert that such a study is important because without it the Army and DOD do not know the effect the laser scanner may have on the back-face deformation standard that has been used for many years and was established with the intention of being measured with a digital caliper. If the comparison reveals a significant difference between the laser scanner and the digital caliper, DOD and the Army may need to revisit the back-face deformation standard of its requirements with the input of industry experts and the medical community.
DOD generally concurred with our fifth recommendation to conduct an independent evaluation of the Aberdeen Test Center's testing protocols, facilities, and instrumentation and stated that such an evaluation would be performed by a team of subject matter experts that included both DOD and non-DOD members. We agree that in principle this approach meets the intent of our recommendation as long as the DOD members of the evaluation team are independent and not made up of personnel from those organizations involved in the body armor testing such as the office of the Director of Operational Test and Evaluation, the Army Test and Evaluation Command, or PEO Soldier. DOD's comments and our specific responses to them are provided in appendix II. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our review of body armor testing focused on testing conducted by the Army in response to specific concerns raised by the House and Senate Armed Services Committees and multiple members of Congress. During our review, we were present during two rounds of testing of body armor designs that were submitted in response to a May 2007-February 2008 Army contract solicitation. The first round of testing, called Preliminary Design Model testing, was conducted from February 2008 through June 2008 with the objective of determining whether designs submitted under the contract solicitation met the required ballistic performance specifications and were eligible for contract award.
The second round of testing, called First Article Testing, was conducted between November 2008 and December 2008 on the body armor designs that passed the Preliminary Design Model testing. Both tests were conducted at Aberdeen Proving Grounds in Aberdeen, Md., and were performed by Aberdeen Test Center. During the course of our review, we observed how the Army conducted its body armor testing and compared our observations with the established body armor testing protocols. We did not verify the accuracy of the Army's test data and did not provide an expert evaluation of the results of testing. To understand the practices the Army used and the established testing protocols we were comparing the practices with, we met with and/or obtained data from officials from the Department of Defense (DOD) organizations and the industry experts listed in table 1. To determine the degree to which the Army followed established testing protocols during the Preliminary Design Model testing of body armor designs, we were present and made observations during the entire period of testing, compared our observations with established testing protocols, and interviewed numerous DOD and other experts about body armor testing. We observed Army testers as they determined whether designs met the physical and ballistics specifications described in the contract solicitation, and as encouraged by Aberdeen Test Center officials, we observed the ballistics testing from inside a viewing room equipped with video and audio connections to the firing lanes. We also were present and observed the physical characterization of the test items and visited the environmental conditioning chambers, the weathering chamber, and the X-ray facility.
We were at Aberdeen Test Center when the designs were delivered for testing on February 7, 2008, and were on-site every day of physical characterization, which comprises the steps performed to determine whether each design meets the required weight and measurement specifications. We systematically recorded our observations of physical characterization on a structured, paper data-collection instrument that we developed after consulting with technical experts from Program Executive Office (PEO) Soldier before testing started. We were also present for every day except one of the ballistics testing, observing and collecting data on approximately 80 percent of the tests from a video viewing room that was equipped with an audio connection to each of the three firing lanes. To gather data from the day that we were not present to observe ballistic testing, we viewed that day’s testing on video playback. We systematically recorded our observations of ballistics testing using a structured, electronic data-collection instrument that we developed to record relevant ballistic test data—such as the shot velocity, penetration results, and the amount of force absorbed (called “back-face deformation”) by the design tested. Following testing, we supplemented the information we recorded on our data collection instrument with some of the Army’s official test data and photos from its Vision Digital Library System. We developed the data collection instrument used to collect ballistics testing data by consulting with technical experts from Program Executive Office Soldier and attending a testing demonstration at Aberdeen Test Center before Preliminary Design Model testing began. After capturing the Preliminary Design Model testing data in our data collection instruments, we compared our observations of the way the Aberdeen Test Center conducted testing with the testing protocols that Army officials told us served as the testing standards at the Aberdeen Test Center. 
According to these officials, these testing protocols comprised the (1) test procedures described in the contract solicitation announcement’s purchase descriptions and (2) Army’s detailed test plans and Test Operating Procedure that serve as guidance to the Aberdeen Test Center testers and that were developed by the Army Test and Evaluation Command and approved by Program Executive Office Soldier, the office of the Director of Operational Test and Evaluation, the Army Research Labs, and cognizant Army components. We also reviewed National Institute of Justice testing standards because Aberdeen Test Center officials told us that, although Aberdeen Test Center is not a National Institute of Justice-certified testing facility, they have made adjustments to their procedures based on those standards and consider them when evaluating Aberdeen Test Center’s test practices. Regarding the edge shot locations for the impact test samples, we first measured the area of intended impact on an undisturbed portion of the test item on all 56 test samples after the samples had already been shot. The next day we had Aberdeen Test Center testers measure the area of intended impact on a random sample of the impact test samples to confirm our measurements. Throughout testing we maintained a written observation log and compiled all of our ballistic test data into a master spreadsheet. Before, during, and after testing, we interviewed representatives from numerous Army entities—including the Assistant Secretary of the Army for Acquisition, Technology and Logistics; Aberdeen Test Center; Developmental Test Command; Army Research Laboratories; and Program Executive Office Soldier—and also attended Integrated Product Team meetings. 
To determine the degree to which the Army followed established testing protocols during First Article Testing of the body armor designs that passed Preliminary Design Model testing, we were present and made observations during the entire period of testing, compared our observations with established testing protocols, and interviewed numerous DOD and industry experts about body armor testing. As during Preliminary Design Model testing, we observed Army testers as they determined whether designs met the physical and ballistics specifications described in the contract solicitation. However, unlike during Preliminary Design Model testing, we had access to the firing lanes during ballistics testing. We also still had access to the video viewing room used during Preliminary Design Model testing, so we used a bifurcated approach of observing testing from both the firing lanes and the video viewing room. We were present for every day except one of First Article Testing, from the first day of ballistics testing on November 11, 2008, until the final shot was fired on December 17, 2008. We noted the weights and measures of plates during physical characterization on the same data collection instrument that we used during Preliminary Design Model testing. For the ballistics tests, we revised our Preliminary Design Model testing data collection instrument so that we could capture data while in the firing lane, data that we were unable to confirm firsthand during Preliminary Design Model testing.
For example, we observed the pre-shot measurements of shot locations on the plates and the Aberdeen Test Center's method for recording data and tracking the chain of custody of the plates; we also recorded the depth of the clay calibration drops (the series of pre-test drops of a weight on clay that is to be placed behind the plates during the shots), the temperature of the clay, the temperature and humidity of the firing lane, the temperatures in the fluid soak conditioning trailer, and the time it took to perform tests. We continued to record all of the relevant data that we had recorded during Preliminary Design Model testing, such as the plate number, type of ballistic subtest, the charge weight of the shot, the shot velocity, the penetration results, and the back-face deformation. Regarding the new laser arm that Aberdeen Test Center acquired to measure back-face deformation during First Article Testing, we attended a demonstration of the arm's functionality performed by Aberdeen Test Center and also acquired documents related to the laser arm's certification by the Army Test, Measurement, and Diagnostic Equipment activity. With a GAO senior methodologist and a senior technologist, we made observations related to Aberdeen Test Center's methods of handling and repairing clay, calibrating the laser guide used to ensure accurate shots, and measuring back-face deformation. Throughout testing we maintained a written observation log and compiled all of our ballistic test data into a master spreadsheet. Following testing, we supplemented the information we recorded on our data collection instrument with some of the Army's official test data and photos from its Vision Digital Library System to complete our records of the testing.
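The kinds of per-shot data described above could be captured in a simple record structure. A minimal sketch follows; the field names and sample values are illustrative and are not drawn from our actual data collection instrument or the Army's test records.

```python
from dataclasses import dataclass

@dataclass
class ShotRecord:
    """Illustrative per-shot ballistic test record; field names are
    hypothetical, not GAO's or the Army's actual instrument."""
    plate_number: str                # chain-of-custody identifier
    subtest: str                     # type of ballistic subtest
    charge_weight: float             # charge weight of the shot
    velocity_fps: float              # measured shot velocity
    complete_penetration: bool       # penetration result
    back_face_deformation_mm: float  # depth of the clay depression
    clay_temp_f: float               # temperature of the clay backing
    lane_temp_f: float               # firing-lane temperature
    lane_humidity_pct: float         # firing-lane humidity

shot = ShotRecord(
    plate_number="PLATE-0001",  # hypothetical identifier
    subtest="impact",
    charge_weight=30.0,
    velocity_fps=2850.0,
    complete_penetration=False,
    back_face_deformation_mm=41.2,
    clay_temp_f=95.0,
    lane_temp_f=68.0,
    lane_humidity_pct=40.0,
)
print(shot.back_face_deformation_mm)
```

Structuring each shot as one typed record is what makes later comparisons, such as grouping shots by test day or by subtest, straightforward in a master spreadsheet or script.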
After capturing the testing data in our data collection instruments, we compared our observations of the way Aberdeen Test Center conducted testing with the testing protocols that Army officials told us served as the testing standards at the Aberdeen Test Center. In analyzing the potential impact of independent variables on testing, such as the potential impact of the November 13th rain on the clay, we conducted statistical tests, including chi-square tests and Fisher's exact test, the latter to accommodate small sample sizes. Before, during, and after testing, we interviewed representatives from numerous Army agencies, including Aberdeen Test Center, Developmental Test Command, Army Research Laboratories, and Program Executive Office Soldier. We also spoke with vendor representatives who were present and observing the First Article Testing, as well as with Army and industry subject matter experts. We conducted this performance audit from July 2007 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 1. The Department of Defense (DOD) stated that undertakings of this magnitude are not without flaws and that what was most important was fielding body armor plates that defeated the threat. While DOD may have identified some flaws that may not be serious enough to call the testing results into question, several of the deviations to the testing protocols that we observed do call the testing results into question for the reasons stated in our report.
An independent expert has not evaluated the impact of these deviations on the test results and, until such a study is conducted, DOD cannot be assured that the plates that passed testing can defeat the threat. DOD also noted several actions DOD and the Army have taken to improve procedures associated with body armor testing. Our responses to these actions are included in comments 2 through 6. 2. The office of the Director of Operational Test and Evaluation’s efforts to respond to members of the Armed Services Committees and to address issues raised by the Department of Defense Inspector General were outside the scope of our audit. Therefore, we did not validate the implementation of the actions DOD cited or evaluate their effectiveness in improving test procedures. With regard to the office of the Director of Operational Test and Evaluation’s establishing a policy to conduct First Article Testing at government facilities, using a government facility to conduct testing may not necessarily produce improved test results. 3. Regarding the office of the Director of Operational Test and Evaluation’s oversight of testing, the office of the Director of Operational Test and Evaluation led the Integrated Product Team and approved the test plans. However, while we were present at the Aberdeen Test Center during Preliminary Design Model testing and First Article Testing, we did not observe on-site monitoring of the testing by the office of the Director of Operational Test and Evaluation staff beyond incidental visits during VIP events and other demonstrations. 4. Regarding the procedures and policies DOD stated were implemented by the Army Test and Evaluation Command to improve testing: Only two of the test ranges were completed prior to Preliminary Design Model testing. Two additional test ranges were completed after Preliminary Design Model testing. 
- Regarding the certification of the laser scanner measurement device, as noted in our report, the Army had not adequately certified that it was an appropriate tool for body armor testing (see our comment 12).
- The Army's Test Operating Procedure was not completed or implemented until after Preliminary Design Model testing.
- New clay conditioning chambers inside each test range were not constructed until after all testing was completed (see our comment 13).
- The improved velocity measurement accuracy study was not conducted until after all testing was completed.
- Regarding the implementation of electronic data collection and processing for body armor testing, as stated in our report, we observed that not all data are electronically collected. Many types of data are manually collected and are later converted to electronic data storage.
5. Regarding Program Executive Office (PEO) Soldier's efforts to improve the acquisition of personal protection equipment:
- The contract solicitation allowed all prospective body armor manufacturers to compete for new contracts.
- We observed that PEO Soldier did transfer expertise and experience to support Army Acquisition Executive direction that all First Article Testing and lot-acceptance testing be conducted by the Army Test and Evaluation Command.
- The task force that focused on soldier protection was not initiated until February 2009, after all Preliminary Design Model testing and First Article Testing was completed.
- According to Army officials, PEO Soldier instituted a non-destructive test capability that became operational after Preliminary Design Model testing, but prior to First Article Testing.
- PEO Soldier's personal protection evaluation process was described in our previous report, GAO-07-662R. Although we recognized the strength of PEO Soldier's personal protection evaluation process in our earlier report, not all the protections that were in place at that time remain in place.
For example, the requirement that testing be conducted at a National Institute of Justice (NIJ)-certified facility was waived. 6. DOD stated that many of the actions by the Army Test and Evaluation Command and PEO Soldier were initiated and improved upon during the course of our review. However, as discussed above, several of these actions were initiated before and during testing, but many of them were not completed until after testing had concluded. 7. DOD and the Army stated that Preliminary Design Model testing had achieved its objective of identifying those vendor designs that met the performance objectives stated in PEO Soldier's purchase description and that "it is incorrect to state that 'at least two' of the preliminary design models should have failed as they passed in accordance with the modified solicitation." We disagree with these statements. As stated in our report, the most consequential of the deviations from testing protocols we observed involved the measurement of back-face deformation, which did affect final test results. According to the original testing protocols, back-face deformation was to be measured at the deepest point of the depression in the clay backing. This measure indicates the most force that the armor will allow to be exerted on an individual struck by a bullet. According to Army officials, the deeper the back-face deformation measured in the clay backing, the higher the risk of internal injury or death. DOD and the Army now claim that these solutions passed in accordance with the modified solicitation, which overlooks the fact that the solicitation had to be modified precisely because Army testers deviated from the testing protocols laid out in the purchase descriptions and did not measure back-face deformation at the deepest point. DOD and the Army also stated in their response that they decided to use the point of aim because they determined it was an accurate and repeatable process.
Yet in DOD's detailed comments regarding edge shot locations, DOD acknowledged that there were "potential variances between the actual aim point and impact point during testing." The Army Research Laboratory and NIJ-certified laboratories use the benchmark process of measuring back-face deformation at the deepest point, not at the point of aim. As set forth in our report, at least two solutions passed Preliminary Design Model testing that would have failed if back-face deformation had been measured to the deepest point. This statement came directly from Aberdeen Test Center officials during a meeting in July 2008, where they specifically told us which two solutions would have failed. We said "at least" two because Army testers did not record deepest-point back-face deformation data for the first 30 percent of testing, and therefore there could be more solutions that would have failed had the deepest point been measured during this first portion of the test. Because the Army did not measure back-face deformation to the deepest point, it could not identify whether these two solutions in particular, and all the solutions in general, met performance requirements. As a result, the Army could not waive First Article Testing for successful candidates and was forced to repeat the test to ensure that all solutions did indeed meet requirements. By repeating testing, the Army incurred additional expense and further delayed the fielding of armor from this solicitation to the soldiers. During the course of our audit, the Army also acknowledged that the Preliminary Design Model testing did not meet its objective because First Article Testing could not be waived without incurring risk to the soldiers. DOD and the Army stated that, upon discovery of the back-face deformation deviation from the testing protocols described in the purchase descriptions, the Army stopped testing.
The Army's Contracting Office was informed of this deviation through a series of questions posed by a vendor who was present at the Vendor Demonstration Day on February 20, 2008. This vendor sent questions to the Contracting Office on February 27 asking whether testers were measuring at the aim point or at the deepest point. This vendor also raised questions about how damage to the soft pack would be recorded and about the location of edge shots. Based on our observations, all of these questions involved issues where Army testers deviated from testing protocols, and they are discussed in our responses to subsequent comments. The Army did not respond until March 19 and replied that its test procedures complied with solicitation requirements. It was not until Army leadership learned of the vendor's questions and of the deviation in measuring back-face deformation that testing was finally halted on March 27, a full month after the issue came to the Army Test and Evaluation Command's attention. 8. DOD stated that in 2007, prior to the initiation of Preliminary Design Model testing, the Army Test and Evaluation Command, the office of the Director of Operational Test and Evaluation, and Army leadership all agreed that First Article Testing would be conducted as part of the Army's body armor testing. However, DOD did not provide any documentation dated prior to April 2008—that is, prior to the discovery of the back-face deformation deviation—that suggested that DOD intended to conduct First Article Testing following Preliminary Design Model testing. In July 2008, the Army Test and Evaluation Command and PEO Soldier stated in official written responses to our questions regarding Preliminary Design Model testing that the conduct of First Article Testing became essential following Preliminary Design Model testing because the Army measured back-face deformation at the point of aim as opposed to at the deepest point of deformation.
In fact, because of this deviation, DOD could not waive First Article Testing as originally planned and was forced to conduct subsequent tests to verify that the designs that had passed Preliminary Design Model testing met testing requirements. DOD asserted that a multi-phase concept including Preliminary Design Model testing, First Article Testing, and extended ballistic testing to support the development of an improved test standard was briefed to a congressional member and professional staff on November 14, 2007. We were present at this November 14 test overview and strategy/schedule briefing and noted that it did not include plans for First Article Testing to be performed in addition to Preliminary Design Model testing. Excerpts from the slides briefed that day showed Preliminary Design Model (Phase 1) testing and subsequent ballistic and suitability testing (Phase 2). As indicated in the slides (see fig. 7 and fig. 8) from that November 14 briefing, the Phase 2 test was designed to test the form, fit, and function of those solutions that had passed Preliminary Design Model testing as well as the ballistic statistical confidence tests. According to information we obtained, Phase 2 was never intended to be First Article Testing and was to have no impact on whether or not a solution received a contract. It was not until after the back-face deformation deviation was discovered that briefing slides and other documentation on test plans and schedules started describing First Article Testing as following Preliminary Design Model testing. For example, as stated by DOD in its comments, the October 2008 briefing to a congressional member and professional staff clearly showed First Article Testing as following Preliminary Design Model testing (Phase 1) and preceding Phase 2.
Therefore, it is not clear why DOD's test plan briefings would make no mention of First Article Testing prior to the back-face deformation measurement deviation while including First Article Testing in subsequent briefings if the plan had always been to conduct both Preliminary Design Model testing and First Article Testing. Furthermore, it is not clear why DOD would intentionally plan at the start of testing to repeat Preliminary Design Model testing (which was supposed to be performed in accordance with the First Article Testing protocol) with an identical test (First Article Testing), given that it has been the Army's practice to use such Preliminary Design Model testing to meet First Article Testing requirements, a practice that was also supported by the DOD Inspector General and the Army Acquisition Executive after an audit of the Army's body armor testing program. DOD also stated that First Article Testing waivers were not permitted under the body armor solicitation. However, the solicitation and its amendments are unclear as to whether waivers of First Article Testing would be permitted. Nonetheless, in written answers to questions we posed to the Army in July 2008, the Army Test and Evaluation Command and PEO Soldier stated in a combined response that, because back-face deformation was not measured to the deepest point of penetration during Phase I tests, there would be no waivers of First Article Testing after the contract award. DOD also stated that it and the Army concluded that First Article Testing had achieved its objective of verifying that contracted vendors could produce, in a full-rate capacity, plates that had passed Preliminary Design Model testing. DOD further stated that it is incorrect to say that First Article Testing did not meet its objective and incorrect to assert that three of five vendor designs should have failed First Article Testing.
However, our analysis showed that two solutions that passed First Article Testing would have failed if back-face deformations had not been rounded and had been scored as they were during Preliminary Design Model testing. The third solution that passed would have failed if Army testers had correctly scored a shot result as a complete penetration in accordance with the definition of a complete penetration in the purchase description, rather than as a partial penetration. Because questions surround these scoring methods and because DOD and the Army cannot confidently identify whether these vendors can mass produce acceptable plates, we restate that First Article Testing may not have achieved its objective. See comments 12, 10, and 11 regarding DOD's statements about the certification of the laser scanning equipment, the rounding of back-face deformations, and the Aberdeen Test Center's scoring procedures, respectively. We agree with DOD that an open dialog with the DOD Inspector General, external test and technology experts, and us will improve the current body armor testing. However, we disagree with DOD's statement that NIJ-certified laboratories lack the expertise to provide reliable information on body armor testing issues. Before the current solicitation, the Army relied on these NIJ-certified laboratories for all body armor source selection and lot acceptance tests. The Marine Corps also conducts source selection tests at these facilities. As these independent laboratories have performed numerous tests for the Army conducted in accordance with First Article Testing protocol, we assert that the credentials of these laboratories warrant consideration of their opinions on body armor testing matters. 9. DOD did not concur with our recommendation for an independent evaluation of First Article Testing results before any armor is fielded to soldiers, maintaining that First Article Testing achieved its objectives.
We disagree with DOD's position that First Article Testing and Preliminary Design Model testing achieved their objectives because we found numerous deviations from testing protocols that allowed solutions to pass testing that otherwise would have failed. Due to these deviations, the majority of which seem to make the testing easier to pass and favor the vendors, we continue to believe that it is necessary to have an independent external expert review the results of First Article Testing and the overall effect of DOD's deviations on those results before the plates are fielded. An independent observer, external to DOD, is best suited to determine the overall impact of DOD's many deviations during the testing associated with this solicitation. Consequently, we have added a matter for Congress to consider: directing DOD either to conduct this external review or to officially amend its testing protocols to reflect any revised test procedures and repeat First Article Testing. 10. DOD did not concur with our recommendation that the practice of rounding down back-face deformations be reviewed by external experts because the practice has been used historically by NIJ-certified laboratories. Although DOD acknowledged that the practice of rounding is not adequately described in the testing protocols, it stated that rounding is permitted under American Society for Testing and Materials (ASTM) E-29. The purchase descriptions (attachments 01 and 02 of the solicitation) referenced five ASTM documents, but ASTM E-29 is not among them and therefore is not part of the protocol. The detailed test plans state that solutions shall incur a penalty on deformations greater than 43 millimeters, and the Army is correct that neither the purchase description nor the detailed test plans provide for rounding. During Preliminary Design Model testing, Army testers measured back-face deformations to the hundredths place and did not round.
Any deformation between 43.00 and 43.50 millimeters received a penalty. During First Article Testing, deformations in this range were rounded down and did not incur a penalty, so the decision to round effectively changed the standard in favor of the vendors. Two solutions passed First Article Testing that would have failed if back-face deformations had been scored without rounding, as they were during Preliminary Design Model testing. We recognize that there are other factors that might justify the decision to round down back-face deformations, such as the possibility that the new laser scanner overstates them. However, as a stand-alone event, rounding down deformations did change the standard in the middle of the solicitation, between Preliminary Design Model testing and First Article Testing. That is why it is important for an independent external expert to review the totality of the test and the Army's deviations from testing protocols to determine the actual effect of this and other deviations. 11. Regarding the incorrect scoring of a complete penetration as a partial penetration, DOD stated that the first layer of soft armor behind the plate serves as a witness plate during testing. If that first layer of soft armor is not penetrated, as determined by the breaking of threads on that first layer, the test shot is not scored as a complete penetration in accordance with PEO Soldier's scoring criteria. However, DOD's position is not consistent with the established testing protocols: (1) we did not observe the use of a witness plate during testing, and the testing protocols do not require one to determine whether a penetration occurred; and (2) the testing protocols do not state that "the breaking of threads" is the criterion for determining a penetration. The language of the testing protocols, not undocumented criteria, should be used in scoring and determining penetration results.
The criteria for scoring a penetration are found in the current solicitation's protocols. Paragraph 6.6 of each of the purchase descriptions states, under "Definitions": "Complete Penetration (CP) for Acceptance Testing--Complete penetrations have occurred when the projectile, fragment of the projectile, or fragment of the armor material is imbedded or passes into the soft under garment used behind the protective inserts plates" (ESAPIs or XSAPIs). Our multiple observations and thorough inspection of the soft armor in question revealed that black-grayish particles had penetrated at least three Kevlar layers, as evidenced by their frayed, fuzz-like, and separated appearance to the naked eye. The black-grayish particles were stopped by the fourth Kevlar layer. DOD acknowledged that figure 6 of our report appears to show evidence of a perforation on the rear of the test plate in question and that the Aberdeen Test Center's subject matter expert found dust particles. These particles are fragments of the projectile or fragments of the armor material that were imbedded in and indeed passed into the soft undergarment used behind the protective insert; therefore, the shot should have been ruled a complete penetration according to the testing protocols, increasing the point penalties and causing the design to fail First Article Testing. DOD's comments stated that we acknowledged there were no broken threads on the first layer of the soft armor. We made no such comment, and this consideration is not relevant because the requirement for broken fibers is not consistent with the written testing protocols, as we have stated. Significantly, DOD and Army officials acknowledged that the requirement for broken fibers was not described in the testing protocols or otherwise documented. In addition to the DOD acknowledgement that an Aberdeen Test Center subject matter expert found particles on the soft body armor, more convincing evidence is the picture of the subject plate.
Figure 6 of our report clearly shows the tear in the fibers that were placed behind the plate in question allowing the penetration of the particles found by the Aberdeen Test Center subject matter expert. These particles can only be fragments of the projectile or fragments of the armor material that passed into the soft under garment used behind the protective inserts (plates), confirming our observations of the event and the subsequent incorrect scoring. The shot should have been scored a complete penetration, and the penalty incurred would have caused the design in question to fail First Article Testing. 12. DOD did not concur with our recommendation that the use of the laser scanner needs to be reviewed by experts external to DOD due to the lack of a full evaluation of the scanner’s accuracy to measure back-face deformations, to include an evaluation of the software modifications and operation under actual test conditions. DOD asserted that the laser scanner measurement device provides a superior tool for providing accurate, repeatable, defensible back-face deformation measurements to the deepest point of depression in the clay. We agree that once it is properly certified, tested, and evaluated, the laser may eliminate human errors such as incorrectly selecting the location of the deepest point or piercing the clay with the sharp edge of the caliper and making the depression deeper. However, as we stated, the Army used the laser scanner as a new method to measure back-face deformation without adequately certifying that the scanner could function: (1) in its operational environment, (2) at the required accuracy, (3) in conjunction with its software upgrades, and (4) without overstating deformation measurements. 
DOD asserted that the software upgrades did not affect the measurement system of the laser scanner and that these software changes had no effect on the physical measurement process of the back-face deformation measurement that was validated through the certification process. However, the software upgrades were added after the certification, and they do include functions that purposely remove spikes and other small crevices on the clay as well as a smoothing algorithm that changed back-face deformation measurements. We have reviewed these software functions, and they do in fact include calculations that change the back-face deformation measurement taken. Furthermore, Army officials told us that additional upgrades to the laser scanner were made after First Article Testing by the Aberdeen Test Center to correct a software laser malfunction identified during the subsequent lot acceptance testing of its plates. According to these officials, this previously undetected error caused an overstatement of the back-face deformation measurement taken by several millimeters, calling into question all the measurements taken during First Article Testing. Also, vendors have told us that they have conducted several studies showing that the laser scanner overestimates back-face deformation measurements by about 2 millimeters as compared with measurements taken by digital caliper, thereby over-penalizing vendors' designs and causing them to fail lot acceptance testing. Furthermore, the laser scanner was certified to an accuracy of 1.0 millimeter, but section 4.9.9.3 of the purchase descriptions requires a device capable of measuring to an accuracy of ±0.1 millimeters. Therefore, the laser does not meet this requirement, making the certification invalid. The laser scanner is also an unproven measuring device whose use may effectively introduce a new requirement, because the back-face deformation standards are based on measurements obtained with a digital caliper.
This raises concerns that results obtained using the laser scanner may be more inconsistent than those obtained using the digital caliper. As we stated in the report, the Aberdeen Test Center has not conducted a side-by-side test of the new laser scanner used during First Article Testing and the digital caliper previously used during Preliminary Design Model testing. Given the discrepancies in back-face deformation measurements we observed and the overstating of back-face deformation alleged by the vendors, the use of the laser is still called into question. Thus, we continue to support our recommendation that experts independent of DOD review the use of the laser during First Article Testing. A full evaluation of the laser scanner is imperative to ensure that tests conducted at the Aberdeen Test Center using the scanner are repeatable and can be relied upon to certify the procurement of armor plates for our military personnel. Lastly, DOD stated that the laser scanner is used by the aeronautical industry; however, Army Test and Evaluation Command officials told us that the scanner had to be customized for testing through various software additions and mounting customizations to mitigate vibrations and other environmental factors. These software additions and customizations change the operation of the scanner. 13. DOD does not concur with our recommendation that experts examine, among other items, "the exposure of clay backing material to rain and other outside environmental conditions as well as the effect of high oven temperatures during storage and conditioning," because it believes that such conditions had no impact upon First Article Testing results. As detailed in the report, we observed these conditions at different points throughout the testing period.
Major variations in materials preparation and testing conditions, such as exposure to rain and/or violations of testing protocols, merit consideration when analyzing the effectiveness and reliability of First Article Testing. As one specific example, we described in this report statistically significant differences between the rates of failure in response to one threat on November 13 and the failure rates on all other days of testing, but we do not use the statistical analysis as the definitive causal explanation for those failures. We observed one major environmental difference in testing conditions that day: the exposure of temperature-conditioned clay to heavy, cold rain in transit to the testing site. After experts confirmed that such variation might be one potential factor relating to overall failure rates on that day, we conducted statistical tests to assess whether failure rates were different on November 13 than on other dates. Our assertion that the exposure of the clay to rain may have had an impact on test results is based not solely on our statistical analysis of test results that day; rather, it is also based on our conversations with industry experts, including the clay manufacturer, and on the fact that we witnessed an unusually high number of clay calibration failures during testing that involved the plate designs of multiple vendors, not just the one design that DOD points to as the source of the high failure rate. We observed that the clay conditioning trailer was located approximately 25 feet away from the entrance to the firing lane. The clay blocks, weighing in excess of 200 lbs., were loaded face up onto a cart, and then a single individual pulled the cart over approximately 25 feet of gravel to the firing lane entrance. Once there, entry was delayed because the cart had to be positioned just right to get through the firing lane door.
Army testers performed all of this without covering the clay to protect it from the rain and the cold, and once inside, the clay had significant amounts of water collected on it. With respect to the unusually high number of clay calibration failures on November 13, there were seven clay calibration drops that were not within specifications. Some of these failed clay boxes were discarded in accordance with the testing protocols; however, others were repaired, re-dropped, and used if they passed the second drop series. These included one plate that was later ruled a no-test and three plates for which the first shot yielded a catastrophic back-face deformation. These were the only three first-shot catastrophic back-face deformations during the whole test, and they all occurred on the same rainy day and involved two different solutions, not just the one that DOD claims performed poorly. The failure rates of plates as a whole, across all plate designs, were very high this day, and the failures were of both the complete penetration and the back-face deformation variety. Water conducts heat approximately 25 times faster than air, which means the water on the surface cooled the clay considerably faster than the clay would have cooled by air exposure alone. Moreover, Army testers lowered the temperature of the clay conditioning trailers during testing on November 13 and told us that the reason was that the ovens and clay were too hot. This is consistent with what Army subject matter experts and other industry experts told us—that the theoretical effect of having cold rain collecting on hot clay may create a situation where the clay is more susceptible both to complete penetrations, because of the colder, harder top layer, and to excessive back-face deformations, because of the overheated, softer clay beneath the top layer.
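The cooling effect described above can be illustrated with a basic lumped-capacitance (Newton cooling) sketch. The clay temperatures, the 10-minute transit window, and the baseline rate constant below are all hypothetical values chosen for illustration; only the roughly 25-fold faster heat transfer through water comes from the discussion above.

```python
import math

# Newton cooling sketch: T(t) = T_env + (T0 - T_env) * exp(-k * t), where the
# rate constant k scales with the surface heat-transfer coefficient.
# All numbers below are illustrative assumptions, not measured test values.
def temp_after(minutes, t0=35.0, t_env=5.0, k_per_min=0.01):
    """Temperature (deg C) of a surface layer after `minutes` of cooling."""
    return t_env + (t0 - t_env) * math.exp(-k_per_min * minutes)

# Dry clay cooling in cold air versus clay with cold rainwater on its surface,
# using the roughly 25x faster heat transfer through water noted above.
dry_surface = temp_after(10, k_per_min=0.01)        # air contact only
wet_surface = temp_after(10, k_per_min=0.01 * 25)   # water-covered surface

# Over the same 10-minute transit, the wet surface layer loses most of its
# heat (about 32.1 vs. 7.5 deg C under these assumed values), while the
# interior of a 200-lb block stays hot: a cold, hard top layer over softer,
# overheated clay beneath.
```

Whatever the exact parameters, the exponential form means a 25-fold larger rate constant collapses the surface temperature toward ambient within minutes while the interior lags, which is the mechanism the experts described.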
Finally, the clay manufacturer told us that, although this is an oil-based clay, water can affect the bonding properties of the clay, making it more difficult for wet clay to stick together. This is consistent with what we observed on November 13. After the first shot on one plate, as Army testers were removing the plate from the clay in order to determine the shot result, we observed a large chunk of clay fall to the floor. This clay was simply swept off to the side by the testers. In another instance, as testers were repairing the clay after the calibration drop, one of the testers pulled a long blade over the surface of the clay to smooth it. When he hit the spot where one of the calibration drops had occurred and the clay had been repaired, the blade pulled up the entire divot, and the testers had to repair the clay further. Regarding our use of no-test data, we were strict in the instances where we used these data; see our comment 24. DOD stated that it was the poor performance of one solution in particular that skewed the results for this day and that this solution failed 70 percent of its shots against Threat D during First Article Testing. DOD's statistic is misleading. This solution failed 100 percent of its shots (6 of 6) on November 13, but only 50 percent (7 of 14) on all other test days. Also, the fact that this solution managed to pass the Preliminary Design Model testing but performed so poorly during First Article Testing raises questions about the repeatability of DOD's and the Army's test practices. Finally, DOD's own analysis confirms that two of the four solutions tested on November 13 performed at their worst level in the test on that day. If the one solution whose plate was questionably ruled a no-test on this day is included in the data, then three of the four solutions performed at their worst level in the test on this day.
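The difference between the aggregate 70 percent figure and the day-by-day counts quoted above can be checked with a one-sided Fisher exact test on those counts (6 failures in 6 shots on November 13 versus 7 in 14 on other days). This is an illustrative calculation on those counts alone, not a reconstruction of the statistical analysis described in our report, which covered all plates tested that day.

```python
from math import comb

def fisher_one_sided(fail_a, n_a, fail_b, n_b):
    """P(group A has >= fail_a failures) under the hypergeometric null that
    shots in both groups share the same failure odds."""
    total_fail = fail_a + fail_b
    total = n_a + n_b
    p = 0.0
    # Sum hypergeometric probabilities for every outcome at least as extreme.
    for k in range(fail_a, min(n_a, total_fail) + 1):
        p += comb(total_fail, k) * comb(total - total_fail, n_a - k) / comb(total, n_a)
    return p

# 6 of 6 shots failed on November 13; 7 of 14 failed on all other test days.
p = fisher_one_sided(6, 6, 7, 14)
# p is about 0.044, below the conventional 0.05 level: failing every shot that
# day would be unlikely if November 13 shots behaved like those on other days.
```

Even on this reduced data set, the test separates the November 13 results from the rest of the test period, which is consistent with treating that day's conditions as a distinct variable rather than folding them into a single aggregate failure rate.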
DOD said that, after testing, the Aberdeen Test Center completed the planned installation of new clay conditioning chambers inside the test ranges, precluding any external environmental conditions from interacting with the clay. We believe it is a step in the right direction that the Aberdeen Test Center has corrected this problem for future testing, but we continue to believe that an external entity needs to evaluate the impact of introducing this new independent variable on this day of First Article Testing. 14. DOD concurred that it should establish a written standard for conducting clay calibration drops but did not concur that failed blocks were used during testing. DOD asserted that all clay backing material used during testing passed the calibration drop test prior to use. We disagree with this position because the calibration of the clay required by the testing protocols calls for "a series of drops," meaning one series of three drops, not the multiple series of three drops that we observed on various occasions. DOD stated that, as a result of our review and the concerns cited in our report, the Aberdeen Test Center established and documented a revised procedure stating that only one repeat calibration attempt can be made and that, if the clay does not pass calibration upon the second attempt, it is reconditioned for later use and a new block of clay is substituted for calibration. Based on the testing protocols, this is still an incorrect procedure for ensuring the proper calibration of the clay prior to shooting. The testing protocols do not allow for a repeat series of calibration drops. DOD also says that, upon completion of testing under the current Army solicitation and in coordination with the National Institute of Standards and Technology, the office of the Director of Operational Test and Evaluation and the Army will review the procedures for clay calibration, to include repeated calibration attempts, and will document any appropriate procedural changes.
DOD goes on to say that the NIJ standard, as verified by personnel at the National Institute of Standards and Technology, does not specifically address the issue of repeating clay calibration tests. However, the Aberdeen Test Center's application of the Army's current solicitation's protocols during testing, and not the NIJ standards, was the subject of our review. In its comments, DOD acknowledged that National Institute of Standards and Technology officials recommend only one series of drops for clay calibration, but the Aberdeen Test Center did multiple drops during testing. We are pleased that DOD has agreed to partner with the National Institute of Standards and Technology to conduct experiments to improve the testing community's understanding of clay performance in ballistic testing, but, in our opinion, these conversations and studies should have occurred prior to testing, not after, because this deviation from testing protocols calls the test results into question. We reassert that an external entity needs to evaluate the impact of this practice on First Article Testing results. 15. DOD partially concurred with our recommendation and agreed that inconsistencies were identified during testing; however, DOD asserted that the identified inconsistencies did not alter the test results. As stated in our response to DOD's comments on our first recommendation, we do not agree. Our observations clearly show that (1) had the deepest point been used during Preliminary Design Model testing, two designs that passed would have failed, and (2) had the Army not rounded First Article Testing results down, two designs that passed would have failed. Further, if the Army had scored the particles (which, in its comments on this report, DOD acknowledges were imbedded in the shoot pack behind the body armor) according to the testing protocols, a third design that passed First Article Testing would have failed.
In all, four out of the five designs that passed Preliminary Design Model testing and First Article Testing would have failed if testing protocols had been followed. 16. DOD partially concurred with our recommendation that, based on the results of the independent expert review of the First Article Testing results, it should evaluate and recertify the accuracy of the laser scanner to the correct standard with all software modifications incorporated and include in this analysis a side-by-side comparison of the laser measurements of the actual back-face deformations with those taken by digital caliper to determine whether laser measurements can meet the standard of the testing protocols. DOD maintains that it performed an independent certification of the laser measurement system and process and that the software changes that occurred did not affect the measurement system in the laser scanner. However, as discussed in comment 12, we do not agree that an adequate, independent certification of the laser measurement system and process was conducted. Based on our observations, we continue to assert that the software changes added after certification did affect the measurement system in the laser. 17. DOD partially concurred with our recommendation for the Secretary of the Army to provide for an independent peer review of the Aberdeen Test Center’s body armor testing protocols, facilities, and instrumentation. We agree that a review conducted by a panel of external experts that also includes DOD members could satisfy our recommendation. However, to maintain the independence of this panel, the DOD members should not include personnel from those organizations involved in the body armor testing (such as the office of the Director of Operational Test and Evaluation, the Army Test and Evaluation Command, or PEO Soldier). 18.
DOD stated that Aberdeen Test Center had been extensively involved in body armor testing since the 1990s and has performed several tests of body armor plates. We acknowledge that Aberdeen Test Center had conducted limited body armor testing for the initial testing on the Interceptor Body Armor system in the 1990s, and we have clarified the report to reflect that. However, as acknowledged by DOD, Aberdeen Test Center had not performed any additional testing on that system for PEO Soldier since the 1990s, and this lack of experience in conducting source selection testing for that system may have led to the misinterpretations of testing protocols and deviations noted in our report. According to a recent Army Audit Agency report, NIJ testing facilities conducted First Article Testing and lot acceptance testing for the Interceptor Body Armor system prior to this current solicitation. Another reason Aberdeen Test Center could not conduct source selection testing was that in the past Aberdeen Test Center lacked a capability for the production testing of personnel armor systems in a cost-effective manner; the test facilities were old and could not support test requirements for a temperature- and humidity-controlled environment and could not provide enough capacity to support a war-related workload. The Army has spent about $10 million over the last few years upgrading the existing facilities with state-of-the-art capability to support research and development and production qualification testing for body armor, according to the Army Audit Agency. Army Test and Evaluation Command notes that there were several other tests between 1997 and 2007, but according to Army officials these tests were customer tests not performed in accordance with a First Article Testing protocol. For example, the U.S. Special Operations Command test completed in May 2007 and cited by DOD was a customer test not in accordance with First Article Testing protocol.
The Aberdeen Test Center built new lanes and hired and trained contractors to perform the Preliminary Design Model testing and First Article Testing. 19. DOD stated that, to date, it has obligated about $120 million for XSAPI and less than $2 million for ESAPI. However, the value of the 5-year indefinite delivery/indefinite quantity contracts we cited is based on the maximum amount of orders of ESAPI/XSAPI plates that can be purchased under these contracts. Given that the Army has fulfilled the minimum order requirements for this solicitation, the Army could decide to not purchase additional armor based on this solicitation and not incur almost $7.9 billion in costs. DOD stated in its response that there are only three contracts. However, the Army Contracting Office told us that there were four contracts awarded and provided those contracts to us for our review. Additionally, we witnessed four vendors participating in First Article Testing, all of which had to receive contracts to participate. It is unclear why the Army stated that there were only three contracts. 20. DOD is correct that there is no limit or range specified for the second shot location for the impact subtest. However, this only reinforces that the shot should have been aimed at 1.5 inches, not at 1.0 inch or at various points between 1.0 inch and 1.5 inches. It also does not explain why the Army continued to mark plates as though there were a range for this shot. Army testers would draw lines at approximately 0.75 inches for the inner tolerance and 1.25 inches for the outer tolerance of ESAPI plates. They drew lines at approximately 1.0 inch for the inner tolerance and 1.5 inches for the outer tolerance of XSAPI plates. We measured these lines for every impact test plate and also had Army testers measure some of these lines to confirm our measurements. We found that of 56 test items, 17 were marked with shot ranges wholly inside of 1.5 inches. 
The ranges of 30 other test items did include 1.5 inches somewhere in the range, but the center of the range (where Army testers aimed the shot) was still inside of 1.5 inches. Only four test items were marked with ranges centered on 1.5 inches. DOD may be incorrect in stating that shooting closer to the edge would have increased the risk of a failure for this subtest. For most subtests this may be the case, but according to Army subject matter experts the impact test is different. For the impact test, the plate is dropped onto a concrete surface, striking the crown (center) of the plate. The test is to determine if this weakens the structural integrity of the plate, which could involve various cracks spreading from the center of the plate outward. The reason the requirement for this shot on this subtest is written differently (i.e., to be shot at approximately 1.5 inches from the edge, as opposed to within a range between 0.75 inches and 1.25 inches or between 1.0 inches and 1.5 inches on other subtests) is that it is meant to test the impact’s effect on the plate. For this subtest and this shot, there may actually be a higher risk of failure the closer to the center the shot occurs. PEO Soldier representatives acknowledged that the purchase descriptions should have been written more clearly and changed the requirement for this shot to a range of between 1.5 inches and 2.25 inches during First Article Testing. We confirmed that Army testers correctly followed shot location testing protocols during First Article Testing by double-checking the measurements on the firing lane prior to the shooting of the plate. We also note that, although DOD stated the Preliminary Design Model testing shot locations for the impact test complied with the language of the testing protocols, under the revised protocol used during First Article Testing several of these Preliminary Design Model testing impact test shot locations would not have been valid. 
DOD stated that there was no impact on the outcome of the test, but DOD cannot say that definitively. Because shooting closer to the edge may have favored the vendors in this case, the impact could have been that a solution or solutions may have passed that should not have. 21. The Army stated that “V50 subtests for more robust threats…were executed to the standard protocols.” Our observations and analysis of the data show that this statement is incorrect. Sections 2.2.3.h(2) of the detailed test plans state: “If the first round fired yields a complete penetration, the propellant charge for the second round shall be equal to that of the actual velocity obtained on the first round minus a propellant decrement for 100 ft/s (30 m/s) velocity decrease in order to obtain a partial penetration. If the first round fired yields a partial penetration, the propellant charge for the second round shall be equal to that of the actual velocity obtained on the first round plus a propellant increment for a 50 ft/s (15 m/s) velocity increase in order to obtain a complete penetration. A propellant increment or decrement, as applicable, at 50 ft/s (15 m/s) from actual velocity of last shot shall be used until one partial and one complete penetration is obtained. After obtaining a partial and a complete penetration, the propellant increment or decrement for 50 ft/s (15 m/s) shall be used from the actual velocity of the previous shot.” V50 testing is conducted to discern the velocity at which 50 percent of the shots of a particular threat would penetrate each of the body armor designs. The testing protocols require that, after every shot that is defeated by the body armor, the velocity of the next shot be increased. Whenever a shot penetrates the armor, the velocity should be decreased for the next shot. This increasing and decreasing of the velocities is supposed to be repeated until testers determine the velocity at which 50 percent of the shots will penetrate. 
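The up-down adjustment rule quoted above can be sketched as a small function (a minimal illustration; the function name and example velocities are ours, and in actual testing the testers adjust the propellant charge to target these velocities rather than setting velocity directly):

```python
def next_target_velocity(shots):
    """Target velocity (ft/s) for the next shot under the quoted V50 protocol.

    `shots` lists (velocity_fps, complete_penetration) tuples for the shots
    already fired, in order.  Per the protocol: a complete penetration on the
    very first round drops the next target by 100 ft/s; otherwise every
    complete penetration drops the target 50 ft/s and every partial
    penetration raises it 50 ft/s, always from the last actual velocity.
    """
    last_velocity, complete = shots[-1]
    if len(shots) == 1 and complete:
        return last_velocity - 100.0
    return last_velocity - 50.0 if complete else last_velocity + 50.0
```

Under this rule, a string of partial penetrations keeps pushing the target velocity upward, which is why holding velocity flat (or lowering it) after three partial penetrations, as GAO observed, departs from the protocol.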
In cases in which the armor far exceeds the V50 requirement and is able to defeat the threat for the first six shots, the testing may be halted without discerning the V50 for the plate and the plate may be ruled as passing the requirements. During Preliminary Design Model V50 testing, Army testers would achieve three partial penetrations and then continue to shoot at approximately the same velocity, or lower, for shots 4, 5, and 6 in order to intentionally achieve six partial penetrations. Army testers told us that they did this to conserve plates. According to the testing protocols, Army testers should have continued to increase the charge weight in order to try to achieve a complete penetration and determine a V50 velocity. The effect of this methodology was that solutions were treated inconsistently. Army officials told us that this practice had no effect on which designs passed or failed, which we do not dispute in our report; however, this practice made it impossible to discern the true V50s for these designs based on the results of Preliminary Design Model testing. 22. DOD agreed that Army testers deviated from the testing protocols by measuring back-face deformation at the point of aim. DOD stated that this decision was made by Army leadership in consultation with the office of the Director of Operational Test and Evaluation, because this would not disadvantage any vendor. We agree with DOD that this decision was made by Army leadership in consultation with the office of the Director of Operational Test and Evaluation. We did not independently assess all factors being considered by Army leadership when they made the decision to overrule the Integrated Product Team and the Milestone Decision Authority’s initial decision to measure to the deepest point. DOD also stated that measuring back-face deformation at the point of aim is an accurate and repeatable process. 
As we pointed out in our previous responses, DOD’s own comments regarding DOD’s Assertion 3 contradict this statement where DOD writes that there were “potential variances between the actual aim point and impact point during testing.” Furthermore, we observed that the aim laser used by Army testers was routinely out of line with where the ballistic was penetrating the yaw card, despite continued adjustments to line up the aim laser with where the ballistic was actually traveling. DOD stated that it is not possible to know the reference point on a curved object when the deepest deformation point is laterally offset from the aim point. We disagree. DOD acknowledges in its response that PEO Soldier had an internally documented process to account for plate curvature when the deepest point of deformation was laterally offset from the point of aim. The use of correction factor tables is a well-known industry standard that has been in place for years, and this standard practice has been used by NIJ laboratories and is well-known by vendors. DOD and the Army presented several statistics on the difference between aim point back-face deformation and deepest point back-face deformation in testing and stated that the difference between the two is small. We do not agree with DOD’s assertion that a difference of 10.66 millimeters is small. In the case of Preliminary Design Model testing, the difference between measuring at the aim point and at the deepest point was that at least two solutions passed Preliminary Design Model testing that otherwise would have failed. These designs passed subsequent First Article Testing but have gone on to fail lot acceptance testing, raising additional questions regarding the repeatability of the Aberdeen Test Center’s testing practices. DOD asserts that the adoption of the laser scanner measurement technique completely resolves the problems the Army experienced in measuring back-face deformations.
We agree that the laser scanner has the potential to be a useful device, but when used in the manner in which the Aberdeen Test Center used it, without an adequate certification and without a thorough understanding of how the laser scanner might effectively change the standard for a solution to pass, we do not agree that it resolved back-face deformation measurement issues. Aberdeen Test Center officials told us that they did not know what the accuracy of the laser scanner was as it was used during First Article Testing. 23. DOD acknowledged the shortcoming we identified. DOD then asserted that once the deviation of measuring back-face deformation at the point of aim, rather than at the deepest point of depression, was identified, those involved acted decisively to resolve the issue. We disagree based on the timeline of events described in our response to DOD’s comments on Preliminary Design Model testing, as well as on the following facts. We were present and observed the Integrated Product Team meeting on March 25 and observed that all members of the Integrated Product Team agreed to start measuring immediately at the deepest point, to score solutions based on this deepest point data, to conserve plates, and then at the end of the testing to make up the tests incorrectly performed during the first third of testing, as needed. We observed Army testers implement this plan the following day. Then, on March 27, Army leadership halted testing for 2 weeks, considered the issue, and then reversed the unanimous decision by the Integrated Product Team and decided to score to the point of aim. The deviation of scoring solutions based on the back-face deformation at the point of aim created a situation in which the Army could not have confidence in any solution that passed the Preliminary Design Model testing.
Because of this, the Army had to repeat testing, in the form of First Article Testing, to determine whether the solutions that had passed Preliminary Design Model testing actually met requirements. 24. DOD did not concur with our finding that rain may have impacted the test results. DOD stated that such conditions had no impact upon First Article Testing results. Our statistical analysis of the test data shows failure rates to be significantly higher on November 13 than during other days of testing, and our observations taken during that day of testing and our conversations with industry experts familiar with the clay, including the clay manufacturer, suggest the exposure of the clay to the cold, heavy rain on that day may have been the cause of the high failure rates. Our analysis examined the 83 plates tested against the most potent threat, Threat D. The testing protocols required that two shots for the record be taken on each plate. We analyzed the 83 first shots taken on these plates separately from the 83 second shots taken on the plates. These analyses confirmed statistically that the rate of failure on November 13 was significantly higher than the rate of failure on other days. Further, of the 5 plates that experienced first-shot catastrophic failures during testing, 3 of them (60 percent) were tested on November 13 and all 3 of these were due to excessive back-face deformation. Given that only 9 plates were tested on November 13, while 74 were tested during all the other days of testing combined, it is remarkable that 60 percent of all catastrophic failures occurred on that one day of testing. DOD objected to our inclusion of no-test data in its calculation of first- and second-shot failure rates on November 13.
We believe that the inclusion of no-test data is warranted because the Army’s exclusion of such plates was made on a post hoc basis after the shots were initially recorded as valid shots and because the rationale for determining the need for a re-test was not always clear. Additionally, we conducted an analysis excluding the no-test plates identified by DOD, and that analysis again showed that the failure rate on November 13 was statistically higher than during the other days of testing, even after the exclusions. Excluding the no-test plates, 38 percent of first shots on November 13 (3 of 8) and 88 percent of second shots (7 of 8) failed. In its response, DOD reports that Aberdeen Test Center’s own statistical analysis of test data for Threat D reveals that the observed failure rate on November 13 is attributable to the “poor performance” of one design throughout testing. DOD asserts that its illustration indicates that “Design K was the weakest design on all days with no rain as well as days with rain.” DOD’s data do not support such a claim. As we have observed, excluding no-test plates, DOD’s data are based on 10 tests of two shots each for each of 8 designs (160 cases total). Each shot is treated as an independent trial, an assumption we find tenuous given that a plate’s structural integrity might be affected by the first shot. To account for date, DOD subdivides the data into cell sizes far too small to derive reliable statistical inferences about failure rates (between 2 and 6 shots per cell), as evidenced by the wide confidence intervals illustrated in DOD’s visual representation of its analysis. Among evidence DOD presented to support its claim that Design K was the weakest performing design on both November 13 and other days is failure rate data for four designs that were not tested on the day in question.
For two of the three designs tested on November 13, only one or two plates each were tested that day, far too few to conduct reliable statistical tests on differences in design performance. For the other type of plate tested on that day (Design L), the three plates tested had a markedly higher failure rate (3 of 6 shots, or 50 percent) on that day than on other days (when it had, in 14 shots, 5 failures, or a 36 percent failure rate). Design K had a failure rate of 6 of 6 shots (100 percent) on the day in question, compared with 8 of 14 shots (57 percent) on other days. Overall, it is impossible to determine from such a small set of tests whether the lack of statistical significance between different designs’ failure rates on November 13 and other days results from small sample size or a substantive difference in performance. In sum, the Army Test and Evaluation Command’s design-based analysis cannot distinguish between the potential effects of date and design on failure rates because sufficient comparison data do not exist to conduct the kind of multivariate analysis that might resolve this issue. Because the data alone are inadequate for distinguishing between the potential effects of date and design, we continue to recommend that independent experts evaluate the potential effects of variations in materials preparation and testing conditions, including those occurring on November 13, on overall First Article Testing results. Additionally, DOD stated that the clay is largely impervious to water. However, as stated in our report, body armor testers from NIJ-certified private laboratories, Army officials experienced in the testing of body armor, body armor manufacturers, and the manufacturer of the clay used told us that getting water on the clay backing material could cause a chemical bonding change on the clay’s surface. DOD stated that one of its first actions when bringing in the clay is to scrape the top of the clay to level it.
However, this only removes clay that is above the metal edge of the box. Clay that is already at or below the edge of the box is not removed by this scraping. We witnessed several instances in which the blade would remove clay at some points, but leave large portions of the clay surface untouched because the clay was below the edge of the box. 25. See comment 11. 26. DOD is correct that the particular example of deleted official test data occurred only once. Fortunately, the results of the retest were the same as those of the initial test. After we noted this deficiency, Army officials told us that a new software program was being added that would prevent this from occurring again. DOD also stated that only two persons are authorized and able to modify the laser scanner software. We did not verify this statement; however, we assert that DOD needs to have an auditable trail when any such modifications are made and that it should require supervisory review and documentation or logging of these setting changes. 27. DOD acknowledged that the Army did not formally document significant procedure changes that deviated from established testing protocols or assess the impact of these deviations. 28. In our report we stated that the requirement to test at an NIJ-certified laboratory was withdrawn because the Aberdeen Test Center is not NIJ-certified. DOD’s comments on this point do not dispute our statement. Instead, DOD discussed NIJ certification and stated that it does not believe that NIJ certification is appropriate for its test facilities. However, we did not recommend that any DOD test facilities be NIJ-certified or even that NIJ be the outside organization to provide the independent review of testing practices at the Aberdeen Test Center that we did recommend. Nevertheless, we believe NIJ certification would meet our recommendation for an independent review.
In its comments regarding NIJ certification, DOD asserted that NIJ certification is not appropriate for its test facilities and that there are significant differences between NIJ and U.S. Army body armor test requirements. NIJ certification of a test laboratory and the NIJ protocol for testing personal body armor primarily used by law enforcement officers are two distinct issues. Similar to a consumer Underwriters Laboratories certification, an NIJ laboratory certification includes an independent peer review of internal control procedures, management practices, and laboratory practices. This independent peer review is conducted to ensure that there are no conflicts of interest and that the equipment utilized in the laboratory is safe and reliable. This peer review helps to ensure a reliable, repeatable, and accurate test, regardless of whether the test in question is following a U.S. Army testing protocol or a law enforcement testing protocol. NIJ-certified laboratories have consistently proven to be capable of following an Army testing protocol, which is demonstrated by the fact that NIJ-certified laboratories have conducted previous U.S. Army body armor source selection testing in accordance with First Article Testing protocol, as well as lot acceptance tests. The slide DOD included in its comments is not applicable here because it deals with the difference between testing protocols – the protocols for Army Interceptor Body Armor tests and the NIJ protocol for testing personal body armor primarily used by law enforcement officers. NIJ certification of a laboratory and NIJ certification of body armor for law enforcement purposes are two different things. 29. DOD stated that we were incorrect in asserting that the Army decided to rebuild small arms ballistics testing facilities at Aberdeen Test Center after the 2007 House Armed Services Committee hearing.
Instead, DOD stated that the contract to construct additional test ranges at the Aberdeen Test Center Light Armor Range was awarded in September 2006 and that construction was already underway at the time of the June 2007 hearing. DOD also stated that this upgrade was not in response to any particular event but was undertaken to meet projected future Army ballistic test requirements. Army officials we spoke with before testing for this solicitation told us that this construction was being completed in order to perform the testing we observed. As of July 2007, the Light Armor Range included two pre-WWII era ballistic lanes and four modern lanes partially completed. However, we noted that, as of July 2007, the lanes we visited were empty and that none of the testing equipment was installed; only the buildings were completed. In addition to the physical rebuilding of the test sites, the Army also rebuilt its workforce to be able to conduct the testing. As stated on page 4 of DOD’s comments, PEO Soldier has instituted an effort to transfer testing expertise and experience from PEO Soldier to the Army Test and Evaluation Command. Prior to the start of testing, we observed that Aberdeen Test Center hired, transferred in, and contracted for workers to conduct the testing. These workers were then trained by Aberdeen Test Center and conducted pilot tests in order to learn how to conduct body armor testing. We observed parts of this training, in person, and other parts via recorded video. In addition, we spoke with officials during this training and preparation process. From our observations and discussions with Army testers and PEO Soldier officials, we believe this process to have been a restarting of small arms ballistic testing capabilities at Aberdeen Test Center. Based on DOD’s comments, we clarified our report to reflect this information.
In addition to the contact named above, key contributors to this report were Cary Russell, Assistant Director; Michael Aiken; Gary Bianchi; Beverly Breen; Paul Desaulniers; Alfonso Garcia; William Graveline; Mae Jones; Christopher Miller; Anna Maria Ortiz; Danny Owens; Madhav Panwar; Terry Richardson; Michael Shaughnessy; Doug Sloane; Matthew Spiers; Karen Thornton; and John Van Schaik.

The Army has issued soldiers in Iraq and Afghanistan personal body armor, comprising an outer protective vest and ceramic plate inserts. GAO observed Preliminary Design Model testing of new plate designs, which resulted in the Army's awarding contracts in September 2008 valued at a total of over $8 billion to vendors of the designs that passed that testing. Between November and December 2008, the Army conducted further testing, called First Article Testing, on these designs. GAO is reporting on the degree to which the Army followed its established testing protocols during these two tests. GAO did not provide an expert ballistics evaluation of the results of testing. GAO, using a structured, GAO-developed data collection instrument, observed both tests at the Army's Aberdeen Test Center, analyzed data, and interviewed agency and industry officials to evaluate observed deviations from testing protocols. However, independent ballistics testing expertise is needed to determine the full effect of these deviations. During Preliminary Design Model testing the Army took significant steps to run a controlled test and maintain consistency throughout the process, but the Army did not always follow established testing protocols and, as a result, did not achieve its intended test objective of determining, as a basis for awarding contracts, which designs met performance requirements.
In the most consequential of the Army's deviations from testing protocols, Army testers incorrectly measured the amount of force absorbed by the plate designs by measuring back-face deformation in the clay backing at the point of aim rather than at the deepest point of depression. Army testers recognized the error after completing about a third of the test and then changed the test plan to call for measuring at the point of aim, issuing a corresponding modification to the contract solicitation. At least two of the eight designs that passed Preliminary Design Model testing and were awarded contracts would have failed if measurements had been made to the deepest point of depression. The deviations from the testing protocols were the result of Aberdeen Test Center's incorrectly interpreting the testing protocols. In all these cases of deviations from the testing protocols, the procedures the Aberdeen Test Center implemented were not reviewed or approved by the Army and Department of Defense officials responsible for approving the testing protocols. After concerns were raised regarding the Preliminary Design Model testing, the decision was made not to field any of the plate designs awarded contracts until after First Article Testing was conducted. During First Article Testing, the Army addressed some of the problems identified during Preliminary Design Model testing, but GAO observed instances in which Army testers did not follow the established testing protocols and did not maintain internal controls over the integrity and reliability of data, raising questions as to whether the Army met its First Article Test objective of determining whether each of the contracted designs met performance requirements.
The following are examples of deviations from testing protocols and other issues that GAO observed: (1) The clay backing placed behind the plates during ballistics testing was not always calibrated in accordance with testing protocols and was exposed to rain on one day, potentially impacting test results. (2) Testers improperly rounded down back-face deformation measurements, which is not authorized in the established testing protocols and which resulted in two designs passing First Article Testing that otherwise would have failed. Army officials said rounding is a common practice; however, one private test facility that rounds told GAO that it rounds up, not down. (3) Testers used a new instrument to measure back-face deformation without adequately certifying that the instrument could function correctly and in conformance with established testing protocols. The impact of this issue on test results is uncertain, but it could call into question the reliability and accuracy of the measurements. (4) Testers deviated from the established testing protocols in one instance by improperly scoring a complete penetration as a partial penetration. As a result, one design passed First Article Testing that would have otherwise failed. With respect to internal control issues, the Army did not consistently maintain adequate internal controls to ensure the integrity and reliability of test data. In one example, during ballistic testing, data were lost, and testing had to be repeated because an official accidentally pressed the delete button and software controls were not in place to protect the integrity of test data. Army officials acknowledged that before GAO's review they were unaware of the specific internal control problems we identified.
Over their lifetimes, men and women differ in many ways that have consequences for how much they will receive from Social Security and pensions. Women make up about 60 percent of the elderly population and less than half of the Social Security beneficiaries who are receiving retired worker benefits, but they account for 99 percent of those beneficiaries who receive spouse or survivor benefits. A little less than half of working women between the ages of 18 and 64 are covered by a pension plan, while slightly over half of working men are covered. The differences between men and women in pension coverage are magnified for those workers nearing retirement age—over 70 percent of men are covered compared with about 60 percent of women. Labor force participation rates differ for men and women, with men being more likely, at any point in time, to be employed or actively seeking employment than women. The gap in labor force participation rates, however, has been narrowing over time as more women enter the labor force, and the Bureau of Labor Statistics predicts it will narrow further. In 1948, for example, women’s labor force participation rate was about a third of that for men, but by 1996, it was almost four-fifths of that for men. The labor force participation rate for the cohort of women currently nearing retirement age (55 to 64 years of age) was 41 percent in 1967 when they were 25 to 34 years of age. The labor force participation rate for women who are 25 to 34 years of age today is 75 percent—an increase of over 30 percentage points. Earnings histories also affect retirement income, and women continue to earn lower wages than men. Some of this difference is due to differences in the number of hours worked, since women are more likely to work part-time and part-time workers earn lower wages. However, median earnings of women working year-round and full-time are still only about 70 percent of men’s. 
The lower labor force participation of women leads to fewer years with covered earnings on which Social Security benefits are based. In 1993, the median number of years with covered earnings for men reaching 62 was 36 but was only 25 for women. Almost 60 percent of men had 35 years with covered earnings, compared with less than 20 percent of women. Lower annual earnings and fewer years with covered earnings lead to women’s receiving lower monthly retired worker benefits from Social Security, since many years with low or zero earnings are used in the calculation of Social Security benefits. On average, the retired worker benefits received by women are about 75 percent of those received by men. In many cases, a woman’s retired worker benefits are lower than the benefits she is eligible to receive as the spouse or survivor of a retired worker. Women tend to live longer than men and thus may spend many of their later retirement years alone. A woman who is 65 years old can expect to live an additional 19 years (to 84 years of age), and a man of 65 can expect to live an additional 15 years (to 80 years of age). By 2070, the Social Security Administration projects that a 65-year-old woman will be able to expect to live another 22 years, and a 65-year-old man, another 18 years. Additionally, husbands tend to be older than their wives and so are likely to die sooner. Differences in longevity do not currently affect the receipt of monthly Social Security benefits but can affect income from pensions if annuities are purchased individually. One study of participants in a large defined contribution plan found that men tend to invest more aggressively than women. The authors estimated that, after 35 years of participation in the plan at historical yields and identical contributions, the difference in investment behavior between men and women can lead to men having a pension portfolio that is 16 percent larger. Social Security provisions and pension plan provisions differ in several ways (see app. I for a summary). 
Under Social Security, the basic benefit a worker receives who retires at the normal retirement age (NRA) is based on the 35 years with the highest covered earnings. The formula is progressive in that it guarantees that higher-income workers receive higher benefits, while the benefits of lower-income workers are a higher percentage of their preretirement earnings. The benefit is guaranteed for the life of the retired worker and increases annually with the cost of living. Under a pension plan, by contrast, a retiring worker may elect, along with the spouse, to take a single life annuity or a lump-sum distribution if allowed under the plan. When workers retire, they are uncertain how long they will live and how quickly the purchasing power of a fixed payment will deteriorate. They run the risk of outliving their assets. Annuities provide insurance against outliving assets. Some annuities provide, though at a higher cost or reduced initial benefit, insurance against inflation risk, although annuity benefits often do not keep pace with inflation. Many pension plans are managed under a group annuity contract with an insurance company that can provide lifetime benefits. Individual annuities, however, tend to be costly. Under Social Security, the dependents of a retired worker may be eligible to receive benefits. For example, the spouse of a retired worker is eligible to receive up to 50 percent of the worker’s basic benefit amount, while a dependent surviving spouse is eligible to receive up to 100 percent of the deceased worker’s basic benefit. Furthermore, divorced spouses and survivors are eligible to receive benefits under a retired worker’s Social Security record provided they were married for at least 10 years. If the retired worker has a child under 18 years old, the child is eligible for Social Security benefits, as is the dependent nonelderly parent of the child. The retired worker’s Social Security benefit is not reduced to provide benefits to dependents and former spouses. 
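The progressive shape of the basic benefit formula described above can be sketched in a few lines. This is only an illustration: the bend points and dollar amounts below are hypothetical placeholders, not the statutory values, though the pattern of declining replacement rates on successive slices of average earnings mirrors the actual formula.

```python
def basic_benefit(avg_monthly_earnings, bend1=500.0, bend2=3_000.0):
    """Progressive benefit sketch: high replacement rate on the first
    dollars of average earnings, lower rates on each slice above a
    'bend point'. Bend points here are illustrative, not actual values."""
    benefit = 0.90 * min(avg_monthly_earnings, bend1)
    if avg_monthly_earnings > bend1:
        benefit += 0.32 * (min(avg_monthly_earnings, bend2) - bend1)
    if avg_monthly_earnings > bend2:
        benefit += 0.15 * (avg_monthly_earnings - bend2)
    return benefit

low_earner = basic_benefit(1_000.0)   # 610.0 -> replaces 61% of earnings
high_earner = basic_benefit(4_000.0)  # 1400.0 -> replaces 35% of earnings
```

The higher earner receives a larger dollar benefit, but the lower earner's benefit is a larger share of preretirement earnings, which is the progressivity the testimony describes.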
Pensions, both public and private, generally do not offer the same protections to dependents as Social Security. Private and public pension benefits are based on a worker’s employment experience and not the size of the worker’s family. At retirement, a worker and spouse normally receive a joint and survivor annuity so that the surviving spouse will continue to receive a pension benefit after the retired worker’s death. A worker, with the written consent of the spouse, can elect to take retirement benefits in the form of a single life annuity so that benefits are guaranteed only for the lifetime of the retired worker. The Retirement Equity Act of 1984 changed the rules governing pension payment options. Under this act, a joint and survivor annuity became the normal payout option and written spousal consent is required to choose another option. This requirement was prompted partly by testimony before the Congress by widows who stated that they were financially unprepared at their husbands’ death because they were unaware of their husbands’ choice to not take a joint and survivor annuity. Through the spousal consent requirement, the Congress envisioned that, among other things, a greater percentage of married men would retain the joint and survivor annuity and give their spouses the opportunity to receive survivor benefits. The monthly benefits under a joint and survivor annuity, however, are lower than under a single life annuity. Moreover, pension plans do not generally contain provisions to increase benefits to the retired worker for a dependent spouse or for children. As under Social Security, divorced spouses can also receive part of the retired worker’s pension benefit if a qualified domestic relations order is in place. However, the retired worker’s pension benefit is reduced in order to pay the former spouse. The three alternative proposals of the Social Security Advisory Council would make changes of varying degrees to the structure of Social Security. The key features of the proposals are summarized in appendix II. 
The Maintain Benefits (MB) plan would make only minor changes to the structure of current Social Security benefits. The major change that would affect women’s benefits is the extension of the computation period for benefits from 35 years to 38 years of covered earnings. Currently, earnings are averaged over the 35 years with the highest earnings to compute a worker’s Social Security benefits. If the worker has worked less than 35 years, then some of the years of earnings used in the calculation are equal to zero. Extending the computation period for the lifetime average earnings to 38 years would have a greater impact on women than on men. Although women’s labor force participation is increasing, the Social Security Administration forecasts that fewer than 30 percent of the women retiring in 2020 will have 38 years of covered earnings, compared with almost 60 percent of men. The Individual Accounts (IA) plan would keep many features of the current Social Security system but add an individual account modeled after the 401(k) pension plan. Workers would be required to contribute an additional 1.6 percent of taxable earnings to their individual account, which would be held by the government. Workers would direct the investment of their account balances among a limited number of investment options. At retirement, the distribution from this individual account would be converted by the government into an indexed annuity. The IA plan, like the MB plan, would extend the computation period to 38 years; it would also change the basic benefit formula by lowering the conversion factors at the higher earnings level. This plan would also accelerate the legislated increase in the normal retirement age and then index it to future increases in longevity. 
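The effect of extending the computation period from 35 to 38 years can be shown with simple arithmetic. The sketch below averages the highest years of covered earnings and pads with zeros for years not worked, as the testimony describes; the 25-year, $30,000 earnings history is hypothetical, and the real computation uses wage-indexed earnings, but the zero-padding effect is the same.

```python
def average_earnings(covered_earnings, computation_years):
    """Average the highest `computation_years` of earnings, padding
    with zeros for years without covered earnings."""
    top = sorted(covered_earnings, reverse=True)[:computation_years]
    top += [0.0] * (computation_years - len(top))  # zero-fill missing years
    return sum(top) / computation_years

# Hypothetical worker with only 25 years of covered earnings at $30,000.
earnings = [30_000.0] * 25
avg_35 = average_earnings(earnings, 35)  # 10 zero years averaged in, ~$21,429
avg_38 = average_earnings(earnings, 38)  # 13 zero years averaged in, ~$19,737
```

For a worker with 38 or more years of covered earnings the change makes no difference, which is why the extension falls disproportionately on women: fewer than 30 percent of women retiring in 2020 are projected to have 38 covered years, versus almost 60 percent of men.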
As a consequence of these changes, basic Social Security benefits would be lower for all workers, but workers would also receive a monthly payment from the annuitized distribution from their individual account, which proponents claim would offset the reduction in the basic benefit. In addition to extending the computation period, elements of the IA plan that would disproportionately affect women are the changes in benefits received by spouses and survivors, since women are much more likely to receive spouse and survivor benefits. The spouse benefit would be reduced from 50 percent of the retired worker’s basic benefit amount to 33 percent. The survivor benefit would increase from 100 percent of the deceased worker’s basic benefit to 75 percent of the couple’s combined benefit if the latter was higher. These changes would probably result in increased lifetime benefits for many women. Additionally, at retirement a worker and spouse would receive a joint and survivor annuity for the distribution of their individual account unless the couple decided on a single life annuity. The Personal Security Accounts (PSA) plan would replace part of the current system with a two-tier structure: a flat tier I benefit plus a tier II personal account funded by redirecting part of the Social Security payroll tax into the account, which would not be held by the government. Proponents of the PSA plan claim that over a worker’s lifetime the tier I benefits plus the tier II distribution would be larger than the lifetime Social Security benefits currently received by retired workers. The worker would direct the investment of his or her account assets. At retirement, workers would not be required to annuitize the distribution from their personal security account but could elect to receive a lump-sum payment. This could potentially affect women disproportionately, since the worker is not required to consult with his or her spouse regarding the disposition of the personal account distribution. Under the PSA plan, the tier I benefit for spouses would be equal to the higher of their own tier I benefit or 50 percent of the full tier I benefit. 
Furthermore, spouses would receive their own tier II accumulations, if any. The tier I benefit for a survivor would be 75 percent of the benefit payable to the couple; in addition, the survivor could inherit the balance of the deceased spouse’s personal security account assets. Many of the proposed changes to Social Security would affect the benefits received by men and by women differently. The current Social Security system is comparable to a defined benefit plan’s paying a guaranteed lifetime benefit that is increased with the cost of living. Each of the Advisory Council proposals would potentially change the level of that benefit, and two of the proposals would create an additional defined contribution component. Not only would retired worker benefits be changed by these proposals, but the level of benefits for spouses and survivors would be affected. Under the proposals with individual or personal accounts, the account balances at retirement would depend on the contributions made to the worker’s account and investment returns or losses on the account assets. Since women tend to earn lower wages, they would be contributing less, on average, than men to their accounts. Furthermore, even if contributions were equal, women tend to be more conservative investors than men, which could lead to lower investment returns. Consequently, women would typically have smaller account balances at retirement and would receive lower benefits than men. The difference in investment strategy could lead to a situation in which men and women with exactly the same labor market experiences receive substantially different Social Security benefits. The extent to which investor education can close the gap in investment behavior between men and women is unknown. The two Advisory Council proposals with individual or personal accounts differ in the handling of the distribution of the account balances at retirement. 
The IA plan would require annuitization of the distribution at retirement, and choosing a single life annuity or a joint and survivor annuity would be left to the worker and spouse. If the single life annuity option for individual account balances was chosen, then the spouse would receive the survivor’s basic benefit after the death of the retired worker plus the annuitized benefit based on the work records of both individuals. The PSA plan would not require that the private account distribution be annuitized at retirement. A worker and spouse could take the distribution as a lump sum and attempt to manage their funds so that they did not outlive their assets. If the assets were exhausted, the couple would have only their basic tier I benefits, plus any other savings and pension benefits. Furthermore, even if personal account tier II assets were left after the death of the retired worker, the balance of the PSA account would not necessarily have to be left to the survivor. If a worker and spouse chose to purchase an annuity at retirement, then the couple would receive a lower monthly benefit than would be available from a group annuity. Moreover, if an individual annuity were priced on a gender basis, although the expected lifetime payments would be the same, the monthly payments to the woman would be lower, since women have longer life expectancies. Even though the current provisions of Social Security are gender neutral, differences during the working and retirement years may lead to different benefits for men and women. For example, differences in labor force attachment, earnings, and longevity lead to women’s being more likely than men to receive spouse or survivor benefits. Women who do receive retired worker benefits typically receive lower benefits than men. As a result of lower Social Security benefits and the lower likelihood of receiving pension benefits, among other causes, elderly single women experience much higher poverty rates than elderly married couples and elderly single men. 
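The point about gender-based annuity pricing reduces to simple division. The sketch below is an actuarially fair calculation that ignores interest and mortality tables, using the life expectancies cited earlier in the statement (an additional 15 years for a 65-year-old man, 19 for a woman); the $100,000 balance is hypothetical.

```python
def monthly_annuity(balance, life_expectancy_years):
    """Actuarially fair monthly payment, ignoring interest: the
    account balance is spread evenly over the expected remaining months."""
    return balance / (life_expectancy_years * 12)

balance = 100_000.0
man = monthly_annuity(balance, 15)    # ~$556 per month over 180 months
woman = monthly_annuity(balance, 19)  # ~$439 per month over 228 months
```

The expected lifetime payout is the same $100,000 in both cases, but the longer expected lifetime stretches the balance over more months, so the woman's monthly check is smaller, which is the disparity the testimony describes.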
Social Security is a large and complex program that protects most workers and their families from income loss because of a worker’s retirement. Public and private pension plans do not offer the social insurance protections that Social Security does. Pension benefits are neither increased for dependents nor generally indexed to the cost of living as are Social Security benefits. Typically, at retirement a couple will receive a joint and survivor annuity that initially pays monthly benefits that are 15 to 20 percent lower than if they had chosen to forgo the survivor benefits with a single life annuity. Furthermore, under a qualified domestic relations order, a divorced retired worker’s pension benefits may be reduced to pay benefits to a former spouse. While the three alternative proposals of the Social Security Advisory Council are intended to address the long-term financing problem, they would make changes that could affect the relative level of benefits received by men and women. Each of the proposals has the potential to exacerbate the current differences in benefits between men and women. Narrowing the gap in labor force attachment, earnings, and investment behavior may reduce the differences in benefits. But as long as these differences remain, men and women will continue to experience different outcomes with regard to Social Security benefits. This concludes my prepared statement. I would be happy to answer any questions you or other Members of the Subcommittee may have. For more information on this testimony, please call Jane Ross on (202) 512-7230; Frank Mulvey, Assistant Director, on (202) 512-3592; or Thomas Hungerford, Senior Economist, on (202) 512-7028. | GAO discussed the impacts of proposals to finance and restructure the Social Security system, specifically the impacts on the financial well-being of women. 
GAO noted that: (1) its work shows that, despite the provisions of the Social Security Act that do not differentiate between men and women, women tend to receive lower benefits than men; (2) this is due primarily to differences in lifetime earnings because women tend to have lower wages and fewer years in the workforce; (3) women's experience under pension plans also differs from men's not only because of earnings differences but also because of differences in investment behavior and longevity; (4) moreover, public and private pension plans do not offer the same social insurance protections that Social Security does; (5) furthermore, some of the provisions of the Social Security Advisory Council's three proposals may exacerbate the differences in men and women's benefits; (6) for example, proposals that call for individual retirement accounts will pay benefits that are affected by investment behavior and longevity; and (7) expected changes in women's labor force participation rates and increasing earnings will reduce but probably not eliminate these differences. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Social Security Disability Insurance (DI) and Supplemental Security Income (SSI) programs are the nation’s two largest federal programs providing cash payments to people with severe long-term disabilities. Between 1985 and 1995, the number of DI recipients increased almost 50 percent to about 5.7 million, and the number of disabled SSI recipients increased from 2.5 million to 4.9 million. In fiscal year 1995, the Social Security Administration (SSA) distributed over $61 billion in disability benefits for its DI and SSI programs. Over the past decade, SSA’s Office of Hearings and Appeals (OHA) has experienced unprecedented growth in both its backlog of DI and SSI hearings requests and the time it takes to process a disability appeal. While the agency has undertaken several efforts over the years to address the backlog issue, workload increases and long-standing problems associated with the program have impeded their success. The rapid growth in OHA’s pending case backlog and longer case-processing times have caused hardship for those disability claimants who are unable to work or to afford needed medical treatment while awaiting a final decision. On average, it takes more than a year to receive a final OHA decision from the time a claimant first files an application of disability. This extended waiting period has raised congressional concerns about SSA’s disability decision-making process. The DI program, enacted in 1956 under title II of the Social Security Act, provides monthly cash insurance benefits to insured severely disabled workers. The SSI program, enacted in 1972 under title XVI, provides monthly cash payments to aged, blind, or disabled people whose income and resources fall below a certain threshold. The Social Security Act defines disability under both programs as an inability to engage in substantial gainful activity by reason of a severe physical or mental impairment. 
The impairment must be medically determinable and expected to last at least a year or result in death. Claimants file an application for disability benefits—both DI and SSI—with one of SSA’s over 1,300 field offices. Applications, along with supporting medical evidence, are then forwarded to the appropriate state disability determination service (DDS). SSA arranges with state DDSs to make the initial medical determination of eligibility in accordance with SSA’s policies and procedures. Claimants who are dissatisfied with the initial DDS determination may request a “reconsideration” of the claim within 60 days of their notice of decision. During the reconsideration review, all evidence is reevaluated by DDS personnel who were not involved in the original decision, and a new, independent decision is made on the merits of the case. Claimants who disagree with the reconsideration decision have the right to a hearing before an administrative law judge (ALJ) in SSA’s OHA. A request for hearing may be filed by mail or telephone, or in person at either an SSA field office or an OHA hearing office. Upon receipt of a hearing request, hearing office support staff review and prepare the case file for hearing. If necessary, staff may recommend to the ALJ that additional medical evidence be developed before holding a hearing. The hearing is generally the first time in the disability determination process that a claimant has the opportunity for a face-to-face meeting with a decisionmaker. At the hearing, the claimant and witnesses—who may include medical or vocational experts—provide testimony. The ALJ inquires into the issues, receives relevant documents into evidence, and allows the claimant or the claimant’s representative to present arguments and examine witnesses. If necessary, the ALJ may further update the evidence after the hearing. When this process is completed, the ALJ issues a decision based on his or her assessment of the evidence in the case. 
Claimants who disagree with an ALJ denial are given another 60 days to request that the case be reviewed by SSA’s Appeals Council. A request for review must be filed through either a field office or a hearing office or directly with the Appeals Council. The Appeals Council may dismiss the request, affirm an ALJ’s decision, remand the case to an ALJ for further action, or issue a new decision. To determine the appropriate action, Council members, assisted by a large staff of analysts, decide whether the decision was supported by the evidence. The Appeals Council’s decision—or the decision of the ALJ, if the Appeals Council dismisses the request—becomes SSA’s final decision. After all SSA administrative remedies are exhausted, a claimant has further appeal rights within the federal court system, up to and including the U.S. Supreme Court. (See fig. 1.1.) The Administrative Procedure Act (APA), enacted by the Congress in 1946, protects the decisional independence of ALJs. To ensure ALJ independence, APA grants ALJs certain specific exemptions from normal management controls. For example, federal agencies may not apply performance appraisal requirements to ALJs and may remove ALJs only for “good cause,” as determined by the Merit Systems Protection Board. These safeguards were put in place to ensure that ALJ judgments were independent and that ALJs would not be paid, promoted, or discharged arbitrarily or for political reasons by the agency. However, ALJ independence is not unlimited. Because they are SSA employees, ALJs are subject to agency rules and regulations, and they must apply even those with which they disagree. Further, ALJ independence does not negate SSA’s authority to implement procedures for supervising and reviewing the ALJ decision-making process to ensure that agency policies and procedures are followed. 
The role of ALJs at SSA differs from that of other ALJs in the federal government in that SSA ALJs are responsible for both developing the hearings evidence and deciding the case. In other executive branch agencies, the responsibility for developing evidence is left to the claimants and their representatives. SSA hearings also differ from those of other executive branch agencies in that they are informal, nonadversarial proceedings; that is, SSA does not present a case challenging a claimant’s disability claim. Most other executive branch ALJs hold hearings that are formal, adversarial, and similar to a trial. During such hearings, attorneys on both sides present witnesses and documentary evidence and cross-examine witnesses in order to present the facts in a light favorable to their case. OHA headquarters is located in Falls Church, Virginia, apart from SSA headquarters in Baltimore. OHA operates 10 regional offices, 132 hearing offices, 3 class action management centers, and 5 word processing centers. Of OHA’s 7,100 employees, about 1,000 are located in Falls Church. OHA is headed by the Associate Commissioner for Hearings and Appeals, who is responsible for administering the hearings and appeals process and reports directly to SSA’s Deputy Commissioner for Programs and Policy. (See app. I for an SSA organization chart and app. II for an OHA organization chart). OHA’s Chief ALJ reports directly to the Associate Commissioner for Hearings and Appeals and is responsible for managing about 5,000 hearing office employees located in 10 regions. Each OHA region is headed by a regional chief ALJ (RCALJ), who is responsible for the operations of hearing offices in his or her respective region. In every hearing office, a hearing office chief ALJ (HOCALJ) oversees day-to-day office operations and provides guidance to ALJs, professional staff, and support personnel. 
Due to the rapid growth in OHA backlogs and case-processing times and their impact on public service, in July 1994 the former Chairman, and now Ranking Minority Member, of the House Committee on Ways and Means asked us to examine SSA’s efforts to address the problem. More specifically, the objectives of our assignment were to determine (1) those factors contributing to the growth in the backlog of appealed cases, (2) what steps SSA has taken in the past to address this backlog problem, (3) what SSA is currently doing to reduce the appellate backlog, and (4) what needs to be done in the long term to make the disability appeals process more timely and efficient. In conducting our review, we analyzed data on OHA workloads, backlogs, and processing times; reviewed over 50 government and nongovernment studies conducted over the past 20 years on the disability determination and appeals process (see app. IV); examined SSA’s previous initiatives to address OHA backlogs and improve the hearings and appeals process; and reviewed SSA’s Short-Term Disability Plan (STDP) and the agency’s longer-term Plan for a New Disability Claim Process (redesign plan). To supplement information obtained from the various reports and initiatives outlined above, we interviewed key SSA and OHA headquarters and regional management officials, as well as managers responsible for the development and implementation of SSA’s STDP and redesign plan; obtained the views of hearing office officials—chief ALJs, supervisory staff-attorneys, and hearing office managers—regarding SSA’s previous and current efforts to improve program efficiency and address OHA’s pending case backlog; and interviewed officials at state disability determination services in Florida, Georgia, Massachusetts, New York, and Texas to obtain their views on the status and expected impacts of the STDP initiatives. 
Our review was performed at SSA and OHA headquarters; four SSA regions—Atlanta, Boston, Dallas, and New York; and five OHA regions—Atlanta, Boston, Dallas, New York, and Philadelphia. By examining workload and performance indicators, we judgmentally selected regions and OHA hearing offices that would provide us with varied workload levels as well as varied experiences in managing their workloads. The selected offices also provided us with some geographical dispersion. We conducted our review between July 1994 and February 1996 in accordance with generally accepted government auditing standards. Over the last decade, OHA’s backlog of pending cases and case-processing times have grown rapidly, and claimants are waiting longer for disability decisions. SSA has acknowledged that current workload levels have placed the disability program under increasing public and congressional pressure, and that aggressive measures are necessary to address this “crisis” situation. The growth in OHA’s backlog of cases has been caused, in part, by the rapid surge in disability program applications and ever-increasing appeals to OHA. But backlog growth has also resulted because SSA has not adequately addressed several long-standing problems associated with its disability programs. These problems have been identified in numerous internal and external studies conducted over the last 2 decades. We reviewed these studies and found that SSA’s key long-standing problems can be classified into four basic categories: multiple levels of claims development and decision-making, fragmented program accountability, decisional disparities between DDS and OHA adjudicators, and SSA’s failure to consistently define and communicate its management authority over the ALJs. The number of disabled beneficiaries has steadily increased over the last decade. In 1985, there were 3.9 million DI recipients. 
In 1995, almost 5.7 million disabled workers and their dependents received more than $40 billion in DI benefits. Most of this growth occurred in the last 3 years, when 1.1 million beneficiaries were added to the rolls. The SSI program grew even more over the last decade, when the number of SSI recipients increased from 2.5 million to 4.9 million. Many factors have contributed to the number of people seeking disability benefits and the subsequent growth in the OHA workload, including the expansion of DI eligibility criteria, program outreach efforts, and poor economic conditions. Between 1985 and 1995, initial DI and SSI applications increased by 57 percent, from 1.6 million to 2.5 million. DDS denial rates for initial applications also increased during the same period, further enlarging the pool of applicants who could request an appeal. The number of requests for OHA hearings increased by 140 percent, from 245,000 in 1985 to 589,000 in 1995. As we have reported previously, the rising rates at which applications for disability benefits and accompanying appeals are being filed have caused tremendous workload pressures and processing delays for OHA. Between 1985 and 1995, OHA’s pending case backlog grew from 107,000 to about 548,000 cases. In addition, the average processing time for cases appealed to OHA—measured from the time a request for hearing is filed by the claimant—increased 110 percent, from 167 days to 350 days. Moreover, aged cases (those pending 270 days or more) increased from 5 percent of pending cases to 39 percent during the same period. Some applicants who have been awarded benefits on appeal to OHA after twice being denied by DDSs have waited more than a year after first applying. Table 2.1 shows the rapid growth in OHA’s workload, pending case backlog, and the time it takes to process an appealed case. 
In addition to the dramatic increases in workload discussed above, long-standing problems associated with the disability determination and appeals process have contributed to the backlog growth and increased case-processing time at OHA. In 1992, as part of its efforts to develop a number of strategic priority goals, SSA reviewed numerous internal and external studies of the disability determination and appeals process, several of which were completed more than 20 years ago. The agency acknowledged that, despite rapid workload increases and enormous changes in available technology, demographics, and the types of disabilities qualifying for benefits, disability processes had remained basically the same since the DI program was established in the 1950s. We also reviewed the above studies, and several other government and nongovernment reviews conducted over the last several decades, and categorized the key long-standing problems affecting SSA’s disability programs as (1) multiple levels of claims development and decision-making, (2) fragmented accountability for claims processing, (3) decisional disparities between DDS and OHA adjudicators, and (4) SSA’s failure to consistently define and communicate its management authority over the ALJs. The relationship of these problems to OHA’s pending case backlog and increased case-processing time is discussed below. SSA’s internal planning documents show that multiple levels of claims development and decision-making throughout the disability program have negatively affected OHA’s ability to provide timely and efficient service to all claimants who appeal. Within SSA, a denied disability claim may pass through as many as four decision-making levels (initial, reconsideration, ALJ hearing, and Appeals Council) before a final decision is rendered. As a claim moves from one level to the next it is readjudicated, and multistep procedures for review, evidence collection, and decision-making are employed. 
In addition to delays associated with multiple layers of review and decision-making, delays also occur as a claim moves from one employee or facility to another and waits at each employee’s desk to be processed. As workloads have grown, the amount of time a claim waits at each processing point has increased. Since 1985, average case-processing time at OHA has grown from 167 days to about 350 days. Following a 1992 review of OHA operations, SSA found that claimants can wait as long as 550 days to receive a hearing decision notice. The same report noted that, in the case of one claim, only 4 days of the 550 involved actual work on the claim. Some of the delay is necessary, however, because of scheduling and due process notice requirements. Other delays are often claimant initiated and may lead to hearing postponements or the need to further develop the evidentiary record.

SSA has acknowledged that no single organizational component is accountable for the overall efficiency of disability claims processing, and that fragmentation issues have negatively affected the efficiency of the process. Currently, several organizational components are involved in disability claims processing (field offices, DDSs, hearing offices, and the Appeals Council), and each is accountable and responsible for reaching its own goals without responsibility for the overall disability claims process. SSA’s own internal reviews have found that poor coordination among components has reinforced a lack of understanding among OHA staff of the roles and responsibilities of other components and created the perception that no one is in charge of the disability programs.

Fragmentation in the disability process is further evidenced by OHA’s organizational and operational separation from the rest of SSA. OHA’s headquarters is located in Falls Church, Virginia, while SSA’s headquarters is located in Baltimore. OHA regional staff are also separated from SSA regional staff.
SSA’s own reviews have found that organizational fragmentation has led to a lack of interaction between OHA and the rest of SSA and fostered a “stepchild” mentality among many OHA employees. For example, SSA found that OHA staff had little sense of belonging to the wider SSA and were unfamiliar with its organizational structure, philosophy, and goals. This mentality has affected SSA’s ability to implement operational plans for the disability programs.

Finally, SSA lacks a common automated database for managing claims as they move through the various components involved in the disability determination process. Consequently, as a claim moves from one organizational level to another, some data must be manually reentered into the computer by the various components, and the status of disability claims is not adequately recorded for reference by others. Outdated manual processes and fragmented automated systems have made improving the disability determination and appeals process difficult.

In 1994, ALJs allowed benefits in about 75 percent of the cases they decided. By awarding a relatively high percentage of cases that DDSs have previously denied, ALJs may encourage more appeals to OHA. While all of the reasons for decisional disparities are not conclusively known, many have hypothesized that possible causes include the de novo hearings process, which allows claimants to submit additional evidence upon appeal; face-to-face interviews between ALJs and claimants; decisional errors by both DDSs and ALJs; and different applications of disability decisional policies at the DDS and ALJ levels.

Some decisional disparities may be attributable to OHA’s de novo hearings process. Under this process, the ALJ does not review the DDS’ decision or rule on its adequacy. Instead, the ALJ conducts what is called a de novo hearing in which evidence is considered and weighed again, and the ALJ issues a decision based on his or her own findings.
With the de novo hearing, claimants may submit new evidence to the ALJ that may not have been available at the time of the DDS review and decision. SSA’s reviews have found that more than a quarter of ALJ awards are based on such new evidence, which may include claimant testimony that their condition has worsened since the original DDS review. Thus, by design, some differences in decisional results are built into the system.

Also, face-to-face interviews between ALJs and claimants may lead to disparate decisions. The ALJ hearing is generally the first time that claimants have the opportunity for a meeting with a decisionmaker. Unlike DDSs, which perform a paper review of the file to determine disability, ALJs personally interview claimants concerning their disability claim. A 1982 SSA study reported that a personal appearance by claimants during the hearing increased the likelihood of an ALJ allowance. In 1989, we reported that hearings provide ALJs with the opportunity to extensively question claimants and that, as a result, ALJs often reverse DDS decisions because they determine that claimants are more limited in their activities than DDSs had perceived.

Errors made by both DDS staff and ALJs may also contribute to disparities. In a 1994 SSA study, a group of medical consultants and disability examiners found a 29-percent DDS error rate for cases appealed to OHA. In the same study, a group of ALJs found a 19-percent error rate in ALJ allowances. These relatively high rates of error suggest that obtaining consistency across the two levels may be difficult.

Finally, some disparities may be attributable to SSA’s differing mechanisms for providing decisional guidance to DDSs and ALJs. To determine disability, SSA has a single standard composed of various statutes, regulations, Social Security Rulings, and court rulings governing eligibility.
DDS decisionmakers are required to use SSA’s Program Operations Manual System (POMS), which is SSA’s detailed interpretation of the standard. ALJs, on the other hand, are not required to use POMS, which provides little decisional latitude. Instead, they base their decisions on their own interpretation of the statutes, regulations, Social Security Rulings, and court rulings. To an undetermined extent, different interpretations of the same disability standard may cause DDSs and ALJs to reach disparate decisions on the same claim.

The Administrative Procedure Act (APA) protects ALJ decisional independence. Although ALJs are SSA employees, APA prohibits SSA management from taking actions that might interfere with an ALJ’s ability to conduct full and impartial hearings. However, SSA has not consistently defined and communicated to regional and hearing office management the types of management actions that are legally permissible for managing ALJs without hindering judicial independence. SSA’s 1992 Office of Workforce Analysis report found that many ALJs are operating under the belief that they are exempted by APA from nearly all management control. As a result, SSA has experienced numerous legal and operational challenges to its efforts to better manage the appeals process. SSA management has also been reluctant to exercise its management authority over ALJs for fear it will violate APA.

The APA issue continues to be important today, because the success of the redesign plan may be affected by the degree of ALJ cooperation and the extent to which SSA can mandate ALJ compliance with the plan’s initiatives. In 1989, we reported that, since the 1970s, ALJs had successfully opposed management initiatives to increase their productivity on the grounds that such SSA actions interfered with their decisional independence. Many ALJs believe they need to closely protect their judicial independence because of what they perceive as past excesses of agency authority.
For example, in 1977 several ALJs sued SSA when it attempted to impose case production quotas on them. The ALJs alleged that SSA’s actions violated their decisional independence under APA and the Fifth Amendment of the Constitution. SSA ultimately settled the lawsuit, rescinded its policy of establishing quotas for ALJ dispositions, and revised transfer and training policies to remove any mention of production figures.

In the early 1980s, SSA began targeting the decisions of ALJs with high award rates for special review. According to SSA, these reviews were conducted in response to congressional concerns that ALJs with high allowance rates could be more prone to errors. As a result of these reviews, some ALJs were to be subject to retraining and possible disciplinary actions. The initiative prompted a lawsuit by the Association of Administrative Law Judges, which claimed that the practice of targeting selected ALJs violated their decisional independence. Before the court’s ruling, SSA entered into a settlement agreement with the ALJs and rescinded its practice of targeting individual decisionmakers.

Although APA is an important safeguard of due process, SSA’s own studies confirm that, in many instances, the act has been interpreted in a way that has impeded SSA’s ability to effectively manage day-to-day hearing office work and to implement uniform policies and procedures. For example, SSA’s 1992 Office of Workforce Analysis report noted that an “extreme” interpretation of APA by many ALJs had led to a lack of clear lines of management authority within hearing offices and impeded effective service delivery. SSA also found that hearing offices lacked procedural consistency and effective mechanisms to share “best practices,” because many ALJs believed judicial independence entitled them to establish their own “unique” work flow procedures.
The report also noted that inconsistent procedures resulted in significant variations in the content and organization of hearing office files and created obvious problems when case files were transferred among offices to balance OHA’s workloads. SSA also reported a wide variety of organizational configurations among hearing offices, despite an agency effort to actively encourage “pooling” hearing office resources to increase efficiency and distribute work more evenly. Many ALJs opposed the pooling initiative, preferring instead a “unit” system in which each ALJ had his or her own personal staff. SSA reported that several ALJs had rejected the pooling configuration despite the agency’s findings that the “unit” system unnecessarily increased case-processing time. However, the report did not include any directives or recommendations for mandating ALJ compliance with SSA’s pooling efforts.

In an early 1990s plan to improve the disability appeals process, SSA noted that significant ambiguities existed regarding the limits APA imposed on SSA management practices and that APA issues underlie many of the problems affecting the disability program’s variations in hearing office procedures, work flow, and workload management. In April 1995, SSA once again acknowledged the constraints APA imposed on its ability to manage and called for better clarifying APA principles.

Our most recent field work confirmed that SSA still has not consistently defined and communicated the types of management actions that are legally permissible under APA. During our review, a number of SSA and OHA managers and staff told us that despite SSA’s recognition of the problems associated with ensuring ALJ compliance with agency initiatives, it has not resolved the issue. Staff commonly complained that ALJs often used APA protections to oppose initiatives they did not agree with and conceded that managers were reluctant to mandate ALJ compliance for fear of violating the act.
They also told us that ALJ opposition to prior agency initiatives to improve the appeals process has contributed to the growth in OHA’s backlog of cases and that reducing the backlog will be difficult unless SSA addresses the APA issue. Officials in SSA’s Office of General Counsel also noted that while SSA is aware of the management tools available to it, there have been inconsistencies in the way this information has been communicated agencywide. In their opinion, SSA needs to develop a consistent APA message and thoroughly communicate it to both SSA and OHA field personnel.

During the past decade, OHA’s backlog of pending cases continued to grow, even though SSA hired more professional and support personnel and increased its reliance on overtime to service the appeals workload. In 1994, SSA initiated both short- and long-term plans in response to the continued rapid growth in OHA’s pending case backlog and increasing criticism of SSA’s ability to effectively manage the DI and SSI caseloads.

The Short-Term Disability Plan (STDP) represents SSA’s near-term effort to reduce OHA’s backlog of pending cases and improve case-processing times. The plan does not directly address the long-standing problems affecting SSA’s disability appeals process but instead relies on temporary “emergency” measures to alleviate workload pressures at OHA until SSA’s longer-term strategy is under way. Although STDP’s initiatives are now under way, implementation delays and the limited impact of key initiatives may impede SSA’s short-term efforts to achieve its backlog reduction goals.

SSA’s second initiative, its Plan for a New Disability Claim Process (the redesign plan), is intended, when fully implemented, to result in significant long-term improvements in the quality, accuracy, speed, and efficiency of disability claims processes. The plan is scheduled to be implemented in phases that will be completed sometime in fiscal year 2000.
The redesign effort, which provides a framework for radically reengineering the entire disability process, is aimed at addressing three of the four long-standing problems we identified: multiple levels of claims development and decision-making, fragmented program accountability, and inconsistent decisions between DDS and OHA adjudicators. While SSA believes the redesign plan will eventually address many systemic program problems, the plan was still in the early implementation and testing stages at the time of our review. More importantly, the redesign plan does not include an initiative to clearly and consistently define and communicate SSA’s management authority over the ALJs. APA constraints have been a source of considerable management difficulties for many years, and if SSA does not act to address this issue, it may be hindered in its current efforts to reduce OHA’s pending case backlog and improve case-processing times.

Over the last decade, SSA attempted to address the growth in OHA’s backlog of pending cases. Between 1985 and 1995, SSA increased field office ALJ staffing levels by 49 percent and support staff by about 45 percent (see table 3.1). From 1990 to 1995, the agency also increased its use of overtime more than 850 percent, from about 74,000 hours to 713,000 hours. Although the number of cases OHA processed annually increased from 246,000 in 1985 to about 527,000 in 1995, the growth in OHA’s workload during that time outpaced its case-processing capacity.

In addition to devoting more staff resources and overtime to the backlog crisis, SSA initiated at least three major studies between 1990 and 1992 to identify issues affecting the performance of its disability programs. These reviews resulted in recommendations for improving program efficiency through such actions as standardizing some hearing office procedures, sharing agency “best practices” among offices, and improving access to training and automation for OHA personnel.
During our field work, a number of SSA and OHA officials told us that prior agency initiatives were limited in their effectiveness because SSA focused primarily on minor process changes and applying additional resources to the disability program rather than addressing long-standing, systemic problems central to the backlog of cases awaiting processing. Several officials also noted that previous initiatives for improving the appeals process were limited in their effectiveness because SSA was reluctant to mandate ALJ compliance with them.

SSA issued its STDP in 1994 to make some immediate progress toward reducing OHA’s backlog of pending cases. STDP includes 19 initiatives to expedite the disability determination process and reduce OHA’s pending case workload from its October 1994 level of 488,000 to 375,000 cases by December 1996. The plan’s goals are based primarily on two key initiatives that expand OHA prehearing conferencing proceedings and SSA regional screening unit activities. These initiatives target certain appealed cases for review and possible allowance by OHA attorneys or SSA regional staff before the ALJ hearing stage is reached.

STDP relies heavily on the temporary reallocation of program resources to help OHA prepare cases and draft disability decisions. Under the plan, 150 OHA and SSA staff have been detailed to help prepare cases for hearings. Case preparation includes assembling and reorganizing claimant files, date stamping exhibits, and preparing evidence lists. Reallocating resources is intended to ensure that case files are organized in a way that facilitates the processing of disability cases. An additional 150 nonhearing office personnel have also been detailed to help draft hearing decisions. To further improve decision-writing capacity, 800 additional computers have been provided for use by OHA personnel.
This influx of computers is intended to reduce OHA’s current dependence on manual processes and support personnel during the preparation of hearing decisions and to limit the movement of documents back and forth between staff for proofreading and editing.

The initiative expected to have the greatest impact on reducing OHA’s backlog of cases involves the expansion of OHA prehearing conferencing. However, implementation delays associated with prehearing conferencing have affected SSA’s ability to achieve STDP’s goals. Before STDP, prehearing conferencing involved the review of certain appealed cases by OHA staff attorneys and paralegal specialists in the various OHA regions. These individuals conferred with claimant representatives after reviewing cases, conducted limited case development, and drafted decisions to be reviewed and approved by ALJs.

With expanded prehearing conferencing under STDP, OHA attorneys have been given quasi-judicial powers, such as the authority to issue allowance decisions for certain appealed cases without ALJ involvement or approval. Under the initiative, OHA attorneys now engage in extensive development of the case record, conduct conferences with claimant representatives and sources of medical and vocational evidence, and are empowered to issue allowance decisions. If they cannot allow the claim on the basis of their review of the evidence, it is scheduled for hearing before an ALJ. OHA guidelines for prehearing conferencing give 595 senior attorneys the authority to issue allowance decisions.

To fully implement the initiative, SSA had to pursue a regulatory change giving OHA staff attorneys the authority to decide certain appealed cases that were formerly limited to ALJ jurisdiction. But the process of defining the specific duties and responsibilities these attorneys would have under STDP was lengthy, and implementation did not begin until July 1995, or almost 6 months after the projected start date.
Delays also occurred with other initiatives designed to support prehearing conferencing. For example, backlog reduction goals for prehearing conferencing are partly dependent upon STDP initiatives to provide more computers and staff to assist OHA with case preparation and decision-writing. But due to protracted collective bargaining negotiations with SSA’s union and other difficulties, full implementation of these initiatives was delayed for several months.

Through expanded prehearing conferencing, SSA had originally expected to process 98,000 additional cases by December 1995 and 126,000 more by December 1996. However, 1995 goals were not achieved, and senior staff attorneys issued only 22,271 additional allowance decisions nationwide through February 1996.

A second major initiative under STDP is intended to further reduce the flow of cases from DDSs to OHA hearing offices by increasing the effectiveness of SSA regional screening units. However, these units have not performed as expected. Before STDP, SSA established screening units in each region to review DDS reconsideration denials. Before an ALJ hearing occurred, screening unit examiners reviewed these cases to determine if an allowance could be made on the basis of evidence in the case file. Although screening unit allowances required ALJ approval, they expedited the decision-making process and prevented many cases from going into OHA’s hearings backlog.

Under STDP, OHA staff attorneys have been assigned to all SSA regional screening units to dispose of more appealed claims before they reach an ALJ hearing. The decision to add OHA attorneys was based upon the experiences of SSA’s Boston and New York regional offices, which had tested the use of OHA attorneys in screening units and were obtaining higher allowance rates.
According to SSA, the opportunity for screening unit examiners to discuss issues with an attorney gives examiners helpful insight into the intent of the POMS requirements and enables them to reverse incorrect DDS reconsideration denials earlier in the process.

Most cases reviewed by the screening units are selected on the basis of computer-generated profiles that identify disability claims likely to be incorrectly denied by DDSs. SSA officials contend that “profiling” minimizes the risk of making incorrect allowances. However, to increase screening unit outputs, the case selection criteria were expanded in January 1995 to include all hearing requests accompanied by any additional evidence, even if the case did not meet the profile. Consequently, screening units are now reviewing some cases that are not necessarily error prone.

Screening units, like prehearing conferencing, are not achieving STDP’s allowance goals. Before STDP, existing screening units were expected to allow about 20,000 cases per year. With the introduction of OHA senior attorneys, SSA expected to allow 38,000 cases annually, or about 3,167 cases per month. However, screening units had allowed a total of only 28,376 cases through February 1996. Only two of SSA’s screening units (Boston and New York) are allowing cases at a level that may facilitate reaching STDP’s 1996 goals. SSA officials overseeing the initiative told us that regional differences in allowances were primarily due to the reluctance of some hearing offices to provide sufficient staff and senior attorney support to the regional screening units.

Despite slippage in the implementation of STDP’s major initiatives, SSA management has not revised its original backlog reduction goals or the timeframes for accomplishing them. As a result, a number of SSA and OHA personnel involved in the design and implementation of STDP are concerned that the plan may have unintended negative impacts.
When STDP was announced in November 1994, it called for reducing OHA’s backlog from 488,000 to 375,000 (113,000 cases) by December 1996. However, by the end of September 1995, OHA’s backlog had increased to 548,000 cases. To achieve the plan’s original goal of reducing pending cases to 375,000, SSA would have to increase its backlog reduction target from the original 113,000 cases to about 173,000 during the remaining timeframe.

Many SSA and OHA officials have expressed concern that the growth in OHA’s pending case backlog over the last several months, combined with STDP’s aggressive goals, may create pressure to inappropriately allow cases. As a means of determining STDP’s impact on OHA decision-making, SSA management is closely monitoring and tracking OHA allowance rates.

Finally, the prehearing conferencing initiative under STDP has diverted almost 600 attorneys from their regular decision-writing duties. SSA intends to offset this loss in decision-writing resources with 150 temporary detailees from various components and increased overtime for support and professional staff. However, many SSA and OHA officials are concerned that the number of detailees is insufficient to offset the loss of experienced decision-writers.

Unlike STDP, the redesign plan includes initiatives that SSA believes will address some of the program’s long-standing problems: multiple levels of claims development and decision-making, fragmented program accountability, and decisional disparities between DDS and OHA adjudicators. In announcing the plan in September 1994, SSA acknowledged that a longer-term strategy was needed to address the systemic problems placing the DI and SSI programs under increasing stress. The agency also noted that, to substantially improve the level of service to claimants, incremental improvements to the process were no longer feasible.
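The backlog-reduction arithmetic in the STDP discussion above can be checked directly. This is a sketch using the report’s figures; the variable names are ours:

```python
# STDP backlog-reduction targets, per the report's figures.
october_1994_backlog  = 488_000   # pending cases when STDP was announced
december_1996_goal    = 375_000   # planned pending-case level
september_1995_backlog = 548_000  # actual pending cases a year later

original_target = october_1994_backlog - december_1996_goal    # 113,000 cases
revised_target  = september_1995_backlog - december_1996_goal  # 173,000 cases

print(original_target, revised_target)  # 113000 173000
```

The growth in the backlog thus raised the required reduction by about 60,000 cases over the plan’s remaining timeframe.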
At the time of our review, SSA was in the early implementation planning and testing stages of the redesign effort, and none of the initiatives had been fully implemented.

To address the problem of multiple levels of claims development and decision-making, the redesign plan includes initiatives to eliminate both DDS reconsideration and Appeals Council reviews. In place of the reconsideration review, SSA plans to establish an Adjudication Officer (AO) position as the focal point for prehearing activities. The AO’s duties will include (1) identifying the specific issues in dispute, (2) determining if additional evidence development is needed to support a claim, (3) reaching agreement with claimants or their representatives on the issues not in dispute, and (4) deciding appealed claims on the basis of the evidence developed. By focusing prehearing responsibilities on a single adjudicator, SSA expects that the time needed to ensure the completeness of the record will be substantially reduced and that more appealed cases will be resolved without ALJ involvement.

The redesign plan also includes initiatives that SSA believes will address the problem of fragmented program accountability. To improve overall accountability for claims processing, SSA plans to revise its management information processes to better assess the agency’s service to claimants. Information regarding staff actions at each step of the process is to be made available to all components, and a single measure of time from the claimant’s first point of contact with SSA until final notification of a decision will be developed. SSA has proposed developing or revisiting other measures related to cost, productivity, pending workload, and accuracy to better assess the performance of each participant, and the agency as a whole. The plan also calls for installing a common database for claims control and management purposes, rather than relying on the currently fragmented automated systems.
To address organizational fragmentation issues, SSA plans to emphasize accountability and teamwork throughout the disability claim process. At the initial DDS level, a Disability Claims Manager position will be established as the focal point for moving the claim through the earliest stages. For OHA’s prehearing activities, the AO position will be the responsible agent. At the hearing level, the ALJ will be the responsible official. SSA plans to hold these individuals accountable for their part of the disability determination and appeals process and require them to work with other components to ensure timely case processing.

The disability redesign plan also includes several initiatives to reduce decisional disparities between DDS and OHA decisionmakers by (1) providing an opportunity for the initial decisionmaker to meet claimants face-to-face, (2) improving SSA’s quality assurance processes, and (3) unifying policy guidance at both the DDS and ALJ levels. Under the redesigned process, claimants will be provided the opportunity to meet with a DDS decisionmaker before the claim is initially decided. This meeting is intended to ensure that all available evidence has been presented and that claimants understand what evidence will be considered in reaching the decision.

SSA also plans to improve its quality assurance processes by extending such reviews to all levels of the adjudicatory process and using the results to identify areas for improving agency policies and training. To further ensure consistent standards for decision-making, the redesign plan includes an initiative to develop a single presentation of all substantive policies used in the determination of disability. Both DDS and ALJ adjudicators will be required to follow these same policies. SSA plans to provide policy clarifications and nationwide training to both DDS and ALJ decisionmakers to facilitate the use of the new policies.
However, SSA has not proposed any changes to the de novo hearings process and the ability of claimants to submit new medical evidence upon appeal. According to SSA, revising these processes would require a legislative change, which was not within the scope of the plan at its initial stage.

SSA’s disability redesign plan includes initiatives that SSA believes will address several of the long-standing problems affecting program performance, but the plan does not specifically address how SSA will consistently define and communicate its management authority over the ALJs. Although APA is an important safeguard of due process, SSA has not consistently defined and communicated to its field staff the actions that can be legally employed by managers to increase program efficiency without hindering judicial independence. For years, SSA has acknowledged the management difficulties associated with the APA issue, and the need to develop specific guidelines of allowed and prohibited practices that are fully understood by everyone involved. More recently, the Director of SSA’s redesign effort again acknowledged that APA procedures and mandates should be better clarified and refined to fit SSA’s mass adjudication approach to its disability programs.

SSA’s disability programs have been the subject of numerous internal and external studies over the last 2 decades. Despite these studies and continuing agency efforts to improve the disability determination and appeals process, OHA’s case backlog has reached crisis levels. In an environment of unprecedented disability program growth, SSA has both a short- and a long-term approach to better service its DI and SSI workloads. In the near term, STDP is designed to expedite the disability appeals process and reduce OHA’s pending case backlog to a manageable level. In developing its long-term Plan for a New Disability Claim Process, SSA has also acknowledged the need for the agency to move ahead with more dramatic program changes.
Considering the current backlog crisis at OHA, STDP’s approach for temporarily reducing OHA’s backlog is reasonable in that it establishes specific goals and timeframes for doing so. It also represents an SSA-wide commitment involving the reallocation of resources from both within and outside OHA, and coordination and cooperation among all organizational components involved in the adjudication process. Disposing of cases earlier in the decisional process may also be less costly and time consuming than allowing them to reach the ALJ hearing stage.

Although backlog reduction efforts are receiving greater agencywide emphasis under STDP, implementation delays associated with prehearing conferencing and the limited impact of regional screening have adversely affected SSA’s ability to achieve the plan’s backlog reduction goals. Many OHA and SSA staff are also concerned that the continued growth in OHA’s pending case backlog and SSA’s reluctance to adjust the plan’s goals may affect the quality of decisions and lead to increased pressure to inappropriately award cases.

To ensure decisional accuracy, SSA intends to monitor the quality of STDP decisions and the overall allowance rate for its disability programs. The agency’s reliance on computer-generated profiles to select certain error-prone cases for review under STDP is also intended to reduce the risk of inappropriate decisions. However, the screening unit case selection criteria have been expanded to include some nonprofiled cases, and prehearing conferencing regulations do not preclude OHA senior attorneys from reviewing nonprofiled cases in the future.

Although the redesign plan includes initiatives that SSA believes will address several long-standing program problems, it does not specifically address the need to consistently define and communicate the types of management actions SSA can legally employ to better manage ALJ activities.
For years, SSA has recognized that ALJ management issues underlie many of the problems affecting its disability programs, and that it should better define and thoroughly communicate a consistent APA message to field staff. We believe that addressing the APA issue will be a challenge for SSA. However, it is a challenge that must be overcome if SSA is to resolve the current disability backlog crisis and achieve its long-term service delivery goals. In providing comments on this report, SSA identified a number of actions that it has taken since 1992 to streamline and expedite the processing of hearing workloads. These actions include developing a plan to standardize disability claim file preparation, creating a Practices and Procedures Exchange Workgroup within OHA, suspending the preparation of medical summaries, standardizing decision-writing instructions, and encouraging ALJs to write some of their own decisions. However, the agency did not provide data regarding the impact of these initiatives on reducing OHA’s backlog of pending cases. Our data show that, despite SSA’s efforts, OHA’s backlog has continued to grow since 1992. In regard to STDP, SSA acknowledged that, because of increases in hearing receipt projections and the pace of STDP implementation, the plan’s backlog reduction goals would not be met by December 1996. However, SSA officials stated that a shortfall in screening unit allowances would have only a limited impact on meeting STDP’s overall goals. We disagree with SSA’s assessment, since SSA originally intended that the screening units would be second only to prehearing conferencing in terms of impact and would result in 38,000 additional allowances through December 1996. Not meeting screening unit allowance targets will, in our opinion, hinder OHA’s backlog reduction efforts. SSA did not provide us with revised backlog reduction goals for STDP or any documentation indicating that they would be changed in the near future. 
Regarding our concerns that pressure to meet STDP’s goals may have unintended effects, SSA has advised its adjudicators that STDP should not be interpreted to inappropriately allow cases. SSA also noted that through the deployment of resources not previously devoted to hearing office workloads, the decision-writing pending workload has been reduced from over 43,000 to about 28,000 cases. Finally, SSA agreed that clarifying the scope of its authority over ALJs under APA would be appropriate and stated it is developing such a document. However, the agency questioned the statement in our report that many ALJs believe they are exempt from nearly all management control. This was not our conclusion, but one that was reported by SSA’s Office of Workforce Analysis following its 1992 review of OHA operations. SSA also disagreed with our statement that ALJs have successfully opposed agency productivity initiatives. However, ALJ opposition to agency initiatives to improve productivity has been documented in prior SSA and GAO reviews and through field work conducted during this assignment. The full text of SSA’s comments and our response are included in appendix III. | Pursuant to a congressional request, GAO examined the growth in the backlog of pending cases at the Social Security Administration's (SSA) Office of Hearings and Appeals (OHA), focusing on SSA initiatives to: (1) reduce backlogged cases; and (2) make the disability appeals process more timely and efficient. 
GAO found that: (1) the growth in OHA backlogs is a direct result of increased applications and appeals to OHA, as well as SSA inattention to long-standing problems; (2) these problems include multiple levels of claims development and decisionmaking, fragmented program accountability, decisional disparities between disability determination services and OHA adjudicators, and SSA failure to communicate its management authority over administrative law judges (ALJ); (3) SSA initiated short- and long-term efforts to manage its disability determination and appeals process in 1994; (4) the SSA Short-Term Disability Plan (STDP) should reduce OHA backlogs to a manageable level by December 1996; (5) STDP relies on the temporary reallocation of SSA resources and process changes to stem the flow of cases requiring ALJ hearings; (6) start-up delays and limited timeframes have affected SSA ability to reduce the number of backlogged cases; (7) SSA tracks and monitors STDP allowances to ensure decisional accuracy; (8) the SSA redesign plan is aimed at addressing systemic problems within the SSA disability program and reducing claims-processing time; (9) the redesign plan is still in its early stages, and does not address the types of management actions that are legally permissible in managing ALJs; and (10) many ALJs believe that they are legally exempt from management control, and SSA is frustrated in its efforts to manage the appeals process and reduce the number of pending cases. |
The Uruguay Round and NAFTA included significant provisions to liberalize agricultural trade. Generally, these agreements comprised commitments for reducing government support, improving market access, and establishing for the first time rules on various aspects of global agricultural trade. As the largest exporter of agricultural commodities in the world, the United States was expected to benefit substantially from implementation of the reforms embodied in these agreements. The Uruguay Round represented the first time that GATT member countries established disciplines concerning international agricultural trade. The Uruguay Round agreements, including those on agriculture and SPS, included several key measures to liberalize agricultural trade. First, generally over a 6-year period beginning in 1995, member countries were required to make specific reductions in three types of support to agricultural producers: (1) import restrictions, (2) export subsidies, and (3) internal support. Second, member countries concluded an Agreement on the Application of Sanitary and Phytosanitary Measures that established guidelines on the use of import regulations to protect human, animal, and plant life and health. Third, countries established a Committee on Agriculture that would oversee implementation of WTO member countries’ commitments to reduce agricultural support and provide a forum for discussions on agricultural trade policies. Fourth, the Round provided a definition of STEs and implemented procedural measures designed to improve compliance with GATT rules. Finally, member countries agreed to enter a second phase of negotiations to further liberalize agricultural trade beginning in 1999. Under NAFTA, the three member countries—Canada, Mexico, and the United States—agreed to eliminate all tariffs on agricultural trade. Some of these tariffs were to be eliminated immediately; others would be phased out over a 5-, 10- or 15-year period. 
NAFTA also required the immediate elimination of all nontariff trade barriers, such as import restrictions, generally through their conversion either to tariff-rate quotas or tariffs. For example, Mexico’s import licensing requirements for bulk commodities, such as wheat, were terminated under NAFTA. In addition, the NAFTA charter’s chapter on agriculture included provisions on SPS. NAFTA also established a joint committee on agricultural trade and a committee on SPS measures, providing a channel for discussion of member countries’ ongoing concerns, in an effort to head off disputes. While forecasters have estimated that increases in agricultural trade would account for a sizable portion of the Uruguay Round and NAFTA accords’ projected benefits to the United States, challenges exist for ensuring their full implementation. In particular, our work on foreign SPS measures and STEs illustrates the complexity of the implementation challenges, particularly in organizing U.S. government efforts to assure effective enforcement and monitoring of member nations’ agricultural commitments under both agreements. For example, the U.S. Trade Representative (USTR) has found that as trade agreements begin to reduce tariffs on agricultural commodities, the United States must guard against the increasing use of SPS measures as the trade barrier of choice. The WTO Agreement on the Application of Sanitary and Phytosanitary Measures, and chapter 7 of NAFTA, established guidelines regarding the appropriate use of SPS measures in relation to trade. While these agreements are not identical, they are consistent in their guiding principles and rules. Both agreements recognize the right of countries to maintain SPS measures but stipulate that such measures (1) must not be applied arbitrarily or constitute a disguised restriction on trade and (2) must be based on scientific principles and an assessment of risk.
In addition, the WTO and NAFTA agreements provided dispute settlement procedures to help resolve disagreements between member countries on SPS measures, including consultations and review by a dispute settlement panel. The WTO agreement also encourages progress toward achieving three objectives: (1) broad harmonization of SPS measures through greater use of international standards (harmonization), (2) recognition among members that their SPS measures may differ but still be considered “equivalent” provided they achieve the same level of protection (equivalency), and (3) adaptation of SPS measures to recognize pest- and disease-free regions (regionalization). Our work suggests open issues in the following areas: the lack of coordination of U.S. government efforts to address foreign SPS measures; the adequacy of the USDA’s process for balancing its regulatory and trade facilitation roles and responsibilities; and the potential benefits from WTO member countries’ progress toward achieving the longer-term objectives concerning harmonization, equivalency, and regionalization. Although USTR has identified some foreign SPS measures as key barriers to U.S. agricultural exports, our recent report to Congress found several weaknesses in the federal government’s approach to identifying and addressing such measures. Because of these weaknesses, the federal government cannot be assured that it is adequately monitoring other countries’ compliance with the WTO or NAFTA SPS provisions and effectively protecting the interests of U.S. agricultural exporters. Specifically, we found that the federal structure for addressing SPS measures is complex and involves multiple entities. USTR and USDA have primary responsibility for addressing agricultural trade issues, and they receive technical support from the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), and the Department of State.
Our review demonstrated that the specific roles and responsibilities of individual agencies within this complex structure are unclear and that effective leadership of their efforts has been lacking. During our review, USTR and USDA implemented certain mechanisms to improve their handling of SPS issues, but the scope of these mechanisms did not encompass the overall federal effort. In addition, we found that the various agencies’ efforts to address foreign SPS measures have been poorly coordinated, and they have had difficulty determining priorities for federal efforts or developing unified strategies to address individual measures. Finally, we found that goals and objectives to guide the federal approach and measure its success had not been developed. We believe that a more organized, integrated, strategic federal approach for addressing such measures would be beneficial. Therefore, we recommended that USTR, USDA, and the other concerned agencies, such as FDA and EPA, work together to develop coordinated goals, objectives, and performance measurements for federal efforts to address foreign SPS measures. Outstanding questions derived from our work include the following: What steps have USTR and USDA taken to address the weaknesses found by our study, such as the lack of a process to prioritize federal efforts to address foreign SPS measures? How do USTR and USDA plan to improve coordination of their activities to address SPS measures? How do USTR and USDA plan to work more closely with other relevant agencies, such as FDA and EPA, in determining which SPS measures to address and how to address them? Specifically, at the executive branch level, how does the administration intend to balance its trade facilitation and regulatory roles and responsibilities? Absent a coordinated approach for addressing foreign SPS measures, the specific role of USDA regulatory and research agencies in resolving SPS issues has not been clearly defined.
Some of these regulatory agencies, such as the Animal and Plant Health Inspection Service and the Food Safety Inspection Service, whose primary responsibilities are to safeguard human, animal, and plant life or health, have increasingly assumed a role in efforts to facilitate trade. Several trade authorities and industry officials have expressed frustration that these regulatory agencies (1) seem to lack a sense of urgency regarding trade matters and (2) are sometimes willing to engage in technical discussions regarding foreign SPS measures for many months and even years. These groups expressed concerns that regulatory authorities lack negotiating expertise, which sometimes undermined efforts to obtain the most advantageous result for U.S. industry regarding foreign SPS measures. U.S. regulatory officials, in turn, believe that at times trade authorities and industry groups fail to appreciate that deliberate, and sometimes lengthy, technical and scientific processes are necessary to adequately address foreign regulators’ concerns about the safety of U.S. products. Government and industry officials have stated that regulatory and research agencies’ responsibilities for dealing with foreign SPS measures have not been clearly defined. The tension in balancing the regulatory and trade facilitation activities of some USDA agencies underlines the need to more clearly define their role in addressing SPS measures. Questions resulting from our work include the following: What steps has USDA taken to use its strategic planning process for integrating disparate agency efforts to address SPS measures? What progress is USDA making in using the Working Group on Agricultural Trade Policy to strengthen USDA’s SPS efforts? Has this initiative, or any other, begun to deal with the tensions that have arisen over the dual roles of some USDA agencies as both regulatory and trade facilitation entities? 
Has USDA provided guidance to regulatory agency officials to assist in promoting a more consistent effort to balance their competing goals and policies? Is there outreach to agricultural producers to clarify the new roles that increased foreign trade has required these regulatory agencies to adopt? WTO and USTR officials suggest that member countries appear to have focused on implementing provisions of the SPS agreement that enable them to resolve SPS disputes as they arise, such as the requirement that SPS measures be based on scientific evidence, but have paid less attention to other key provisions. Specifically, member countries have been less concerned with provisions regarding harmonization, equivalency, and regionalization of SPS measures. The practices these principles encourage are not currently widespread. Progress in implementing harmonization, equivalency, and regionalization could be time consuming. For example, the United States and the European Union negotiated for 3 years before reaching a partial agreement about the equivalence of their respective inspection systems for animal products. Nevertheless, these provisions could help minimize trade disputes in the long run by creating a more structured approach to SPS measures. Our work raises the following questions regarding the SPS agreement’s long-term objectives: Is there a sufficient balance in efforts to implement the Uruguay Round SPS agreement so as to promote the goals of harmonization, equivalency, and regionalization as envisioned in the framework of the agreement? What factors limit cooperation among WTO member countries in pursuit of these three long-term objectives? How are USDA and USTR working to promote international harmonization of SPS measures based on U.S. standards that would facilitate U.S. industry access to foreign agricultural and agriculture-related markets? 
The agricultural and SPS agreements of the Uruguay Round were intended to move member nations toward establishing a market-oriented agricultural trading system by minimizing government involvement in regulating agricultural markets. Some member nations continue to use STEs to regulate imports and/or exports of selected products. For example, STEs have long been important players in the international wheat and dairy trade. As a result of the Uruguay Round, the WTO officially defined STEs and addressed procedural weaknesses of GATT’s article XVII by improving the process for obtaining and reviewing information. In the past, GATT required that STEs (1) act in a manner consistent with the principles of nondiscriminatory treatment and (2) make purchases and/or sales in accordance with commercial considerations that allow foreign enterprises an opportunity to compete, and that member countries (3) notify the WTO secretariat about their STEs’ activities (for example, WTO members who have STEs are required to report information on their operations). Subsequently, the Uruguay Round established an STE working party, which is now incorporated into the WTO framework. In addition, STEs that engage in agricultural trade are also subject to the provisions in the Uruguay Round Agreement on Agriculture that define market access restrictions, export subsidies, and internal support. Our work suggests open issues in two areas: (1) a lack of transparency in STE pricing practices and (2) the extent of U.S. efforts to address STEs. In the absence of complete and transparent information on the activities of STEs, member countries are hindered in determining whether STEs operate in accordance with GATT disciplines and whether STEs have a trade-distorting effect on the global market. In 1995, we reported that compliance with the Uruguay Round STE reporting requirements or notifications had been poor.
Since then, STE notifications to the WTO have improved, including reporting by countries with major agricultural STEs. However, because they are not required to do so, none of the notifying STE countries have reported transactional pricing practices—information that could provide greater transparency about their operations. U.S. agricultural producers continue to express concern over the lack of transparency in STE pricing practices and their impact on global free trade. In 1996, we reported that our effort to fully evaluate the potential trade-distorting activities of STEs, including pricing advantages, could not be conducted because of a lack of transaction-level data. Without this data and the more transparent system it would create, the United States finds it difficult to assess the trade-distorting effects of, and compliance with, WTO rules governing reporting on STE operations. Our work on STEs raises the following questions with regard to the lack of transparency: What progress has the WTO working party on state trading enterprises made in studying STEs and improving the information available about their activities? What steps, if any, can be taken within the WTO framework, or otherwise, to increase the pricing transparency of import- and export-oriented STEs? U.S. agricultural interests have expressed concern regarding the potential of STEs to distort trade, and USDA officials have said that a focused U.S. effort to address STEs is vitally important. Although, under the WTO, STEs are recognized as legitimate trading entities subject to GATT rules, some U.S. agricultural producers and others are concerned that STEs, through their monopoly powers and government support, may have the ability to manipulate worldwide trade in their respective commodities. For example, some trade experts and some WTO member countries are concerned about STEs’ potential to distort trade due to their role as both market regulator and market participant. Further, the U.S. 
agricultural sector competes with several prominent export STEs in countries such as Canada, Australia, and New Zealand and import STEs in other countries such as Japan. Questions from our work regarding the U.S. effort to address STEs include the following: How are USTR and USDA monitoring STEs worldwide to ensure that member countries are meeting their WTO commitments? Given the limited transparency resulting from STE notifications to the WTO, how can the United States be assured that STEs are not being operated in a way that circumvents other WTO agriculture commitments, such as the prohibition on export subsidies or import targets? Mr. Chairman and members of the Subcommittee, this concludes my statement for the record. Thank you for permitting me to provide you with this information. Agricultural Exports: U.S. Needs a More Integrated Approach to Address Sanitary/Phytosanitary Issues (GAO/NSIAD-98-32, Dec. 11, 1997). Assistance Available to U.S. Agricultural Producers Under U.S. Trade Law (GAO/NSIAD-98-49R, Oct. 20, 1997). North American Free Trade Agreement: Impacts and Implementation (GAO/T-NSIAD-97-256, Sept. 11, 1997). U.S. Agricultural Exports: Strong Growth Likely, but U.S. Export Assistance Programs’ Contribution Uncertain (GAO/NSIAD-97-260, Sept. 30, 1997). World Trade Organization: Observations on the Ministerial Meeting in Singapore (GAO/T-NSIAD-97-92, Feb. 26, 1997). International Trade: The World Trade Organization’s Ministerial Meeting in Singapore (GAO/T-NSIAD-96-243, Sept. 27, 1996). Canada, Australia, and New Zealand: Potential Ability of Agricultural State Trading Enterprises to Distort Trade (GAO/NSIAD-96-94, June 24, 1996). International Trade: Implementation Issues Concerning the World Trade Organization (GAO/T-NSIAD-96-122, Mar. 13, 1996). State Trading Enterprises: Compliance With the General Agreement on Tariffs and Trade (GAO/GGD-95-208, Aug. 30, 1995). Correspondence Regarding State Trading Enterprises (GAO/OGC-95-24, July 28, 1995). 
The General Agreement on Tariffs and Trade: Uruguay Round Final Act Should Produce Overall U.S. Economic Gains (GAO/GGD-94-83A&B, July 29, 1994). General Agreement on Tariffs and Trade: Agriculture Department’s Projected Benefits Are Subject to Some Uncertainty (GAO/GGD/RCED-94-272, July 22, 1994). North American Free Trade Agreement: Assessment of Major Issues (GAO/GGD-93-137, Sept. 9, 1993) (two vols.). CFTA/NAFTA: Agricultural Safeguards (GAO/GGD-93-14R, Mar. 18, 1993). International Trade: Canada and Australia Rely Heavily on Wheat Boards to Market Grains (GAO/NSIAD-92-129, June 10, 1992). | GAO discussed the implementation of certain agricultural provisions of the Uruguay Round of the General Agreement on Tariffs and Trade (GATT) and North American Free Trade Agreement (NAFTA), focusing on: (1) the impact of measures to protect human, animal or plant life or health--referred to as sanitary and phytosanitary (SPS) measures; and (2) state trading enterprises (STEs).
GAO noted that: (1) the Uruguay Round and NAFTA included significant provisions to liberalize agricultural trade; (2) while forecasters have estimated that increases in agricultural trade would account for a sizeable portion of the Uruguay Round and NAFTA agreements' projected benefits to the United States, challenges exist for ensuring their full implementation; (3) the World Trade Organization's (WTO) agreement on the application of sanitary and phytosanitary measures, and chapter 7 of NAFTA, established guidelines regarding the appropriate use of SPS measures in relation to trade; (4) although the United States Trade Representative (USTR) has identified some foreign SPS measures as key barriers to U.S. agricultural exports, GAO's recent report to Congress found several weaknesses in the federal government's approach to identifying and addressing such measures; (5) because of these weaknesses, the federal government cannot be assured that it is adequately monitoring other countries' compliance with the WTO or NAFTA SPS provisions and effectively protecting the interests of U.S.
agricultural exporters; (6) USTR and the Department of Agriculture (USDA) have primary responsibility for addressing agricultural trade issues, and they receive technical support from the Food and Drug Administration (FDA), the Environmental Protection Agency, and the Department of State; (7) absent a coordinated approach for addressing foreign SPS measures, the specific role of USDA regulatory and research agencies in resolving SPS issues has not been clearly defined; (8) WTO and USTR officials suggest that member countries appear to have focused on implementing provisions of the SPS agreement that enable them to resolve SPS disputes as they arise; (9) the agricultural and SPS agreements of the Uruguay Round were intended to move member nations toward establishing a market-oriented agricultural trading system by minimizing government involvement in regulating agricultural markets; (10) as a result of the Uruguay Round, the WTO officially defined STEs and addressed procedural weaknesses of article XVII by improving the process for obtaining and reviewing information; (11) in the absence of complete and transparent information on the activities of STEs, member countries are hindered in determining whether STEs operate in accordance with GATT disciplines and whether they have a trade-distorting effect on the global market; and (12) U.S. agriculture interests have expressed concern regarding the potential of STEs to distort trade, and USDA officials have said that a focused U.S. effort to address STEs is vitally important. |
Congress passed WIA partly in response to concerns about fragmentation and inefficiencies in federal employment and training programs. WIA authorized several employment and training programs—including Job Corps and programs for Native Americans, migrant and seasonal farmworkers, and veterans—as well as the Adult Education and Literacy program. WIA replaced the Job Training Partnership Act (JTPA) programs for economically disadvantaged adults and youths and dislocated workers with three new programs—WIA Adult, WIA Dislocated Worker, and WIA Youth. These programs provide a range of services, including occupational training and job search assistance. Beyond authorizing these programs, WIA also required the establishment of one-stop centers in all local areas and mandated that many federal employment and training programs, including the ES and WIA Adult programs, provide services through the centers. Under WIA, sixteen different categories of programs, administered by four federal agencies, must provide services through the one-stop system, according to Labor officials. Thirteen of these categories include programs that meet our definition of an employment and training program, and three categories do not, but offer other services to jobseekers who need them (see fig. 1). These 13 program categories represent about 40 percent of the federal appropriations for employment and training programs in fiscal year 2010. One-stop centers serve as the key access point for a range of services that help unemployed workers re-enter the workforce—including job search assistance, skill assessment and case management, occupational skills and on-the-job training, basic education and literacy training, as well as access to Unemployment Insurance (UI) benefits and other supportive services—and they also assist employers in finding workers.
Any person visiting a one-stop center may look for a job, receive career development services, and gain access to a range of vocational education programs. In our 2007 study, we found that a typical one-stop center in many states offered services for eight or nine required programs on-site. In addition to required programs, one-stop centers have the flexibility to include other, optional programs in the one-stop system, such as the Temporary Assistance for Needy Families (TANF) Program, the Supplemental Nutrition Assistance Program (SNAP) Employment and Training Program, or other community-based programs, which help them better meet specific state and local workforce development needs. The Dayton, Ohio, one-stop center, for example, boasts over 40 programs on-site at the 8-1/2 acre facility, including an organization that provides free business attire to job seekers who need it, an alternative high school program that assists students in obtaining a diploma, and organizations providing parenting and self-sufficiency classes. Nationwide, services may also be provided at affiliated sites—designated locations that provide access to at least one employment and training program. While WIA requires certain programs to provide services through the one-stop system, it does not provide additional funds to operate one-stop systems and support one-stop infrastructure. As a result, required programs are expected to share the costs of developing and operating one-stop centers. In 2007, we reported that WIA programs and the ES program were the largest funding sources states used to support the infrastructure—or nonpersonnel costs—of their comprehensive one-stop centers. To help cover operational costs and expand services, some one-stop centers that we visited for a study of promising practices raised additional funds to support the infrastructure through fee-based services, grants, or contributions from partner programs and state or local governments.
For example, one-stop operators in Clarksville, Tennessee, reported that they raised $750,000 in one year through a combination of business consulting, drug testing, and drivers’ education services. In addition, the one-stop center in Kansas City, Missouri, had a full-time staff person dedicated to researching and applying for grants. The one-stop generated two-thirds of an entire program year’s operating budget of $21 million through competitive grants available from the federal government as well as from private foundations. One-stop centers required under WIA provide an opportunity for a broad array of federal employment and training programs—both required and optional programs—to coordinate their services and avoid duplication. Although WIA does not require that programs be colocated within the one-stop center, this is one option that programs may use to provide services within the one-stop structure. Labor’s policy is to encourage colocation of all required programs to the extent possible; however, officials acknowledged that colocation is one of multiple means for achieving service integration. We have previously reported that colocating services can result in improved communication among programs, improved delivery of services for clients, and elimination of duplication. While colocating services does not guarantee efficiency improvements, it affords the potential for sharing resources and cross-training staff, and may lead, in some cases, to the consolidation of administrative systems, such as information technology systems. Our early study of promising one-stop practices found that the centers nominated as exemplary did just that—they cross-trained program staff, consolidated case management and intake procedures across multiple programs, and developed shared data systems. More broadly, these promising practices streamline services for job seekers, engage the employer community, and build a solid one-stop infrastructure.
Other types of linkages between programs, such as electronic linkages or referrals, may not result in the same types of efficiency improvements, but they may still present opportunities to streamline services. Although the potential benefits of colocation are recognized, implementation may pose challenges. WIA Adult and the Employment Service are generally colocated in one-stop centers, but TANF employment and training services are colocated in one-stops to a lesser extent. In our 2007 report, we found that 30 states provided the TANF program on site at a typical comprehensive one-stop center. These states accounted for 57 percent of the comprehensive one-stop centers nationwide. Some previous efforts to reauthorize WIA have included proposals to make TANF a mandatory one-stop partner. Increasing colocation, however, could prove difficult due to issues such as limited available office space, differences in client needs and the programs’ client service philosophies, and the need for programs to help fund the operating costs of the one-stop centers. HHS officials noted that when TANF employment and training services are not colocated in one-stop centers, they are typically colocated with other services for low-income families, such as SNAP, formerly known as the Food Stamp Program, and Medicaid. Officials acknowledged that colocating TANF employment and training services in one-stop centers may mean that they are no longer colocated with these other services, although Florida, Texas, and Utah provide SNAP services through one-stops along with TANF services, and Utah also provides Medicaid through one-stops. Officials said that in states where this is not the case, the potential trade-off would need to be considered. Given that the purpose of WIA, in part, was to transform the fragmented employment and training system into a coherent one, our work suggests that greater efficiencies could be achieved.
Three of the largest employment and training programs, the TANF, ES, and WIA Adult programs, provide some of the same employment and training services to low-income individuals, despite differences between the programs (see fig. 2). While the TANF program serves low-income families with children, the ES and WIA Adult programs serve all adults, including low-income individuals. Specifically, the WIA Adult program gives priority for intensive and training services to recipients of public assistance and other low-income individuals when program funds are limited. All three programs share a common goal of helping individuals secure employment, and the TANF and WIA Adult programs also aim to reduce welfare dependency. However, employment is only one aspect of the TANF program, which has other broad social service goals, and as a result, TANF provides a wide range of other services beyond employment and training, including cash assistance. Services the programs provide in common include job search workshops and subsidized employment. The TANF, ES, and WIA Adult programs maintain separate administrative structures to provide some of the same services to low-income individuals. At the federal level, the TANF program is administered by HHS, and the ES and WIA Adult programs are administered by Labor. At the state level, the TANF program is typically administered by the state human services or welfare agency, and the ES and WIA Adult programs are typically administered by the state workforce agency. By regulation, ES services must be provided by state employees. At the local level, WIA regulations require at least one comprehensive one-stop center to be located in every local workforce investment area. These areas may have the same boundaries as counties, may be multicounty, or may be within or across county lines. Similarly, every county typically has a TANF office.
TANF employment and training services may be delivered at TANF offices, in one-stop centers, or through contracts with for-profit or nonprofit organizations, according to HHS officials. In one-stop centers, ES staff provide job search and other services to ES customers, while WIA staff provide job search and other services to WIA Adult customers. Florida, Texas, and Utah have consolidated the state workforce and welfare agencies that administer the TANF, ES, and WIA Adult programs, among other programs. In Utah, the workforce agency administers the TANF program in its entirety. In Florida and Texas, the workforce agencies administer only that part of TANF related to employment and training services. In all three states, the one-stop centers serve as portals to a range of social services, including TANF. Officials from these three states told us that consolidating agencies led to cost savings through the reduction of staff and facilities. For example, a Utah official said that the state reduced the number of buildings in which employment and training services were provided from 104 to 34. According to a Texas official, Texas also privatized 3,000 full-time staff equivalents (FTE) at the local level, which reduced the pension, retirement, and insurance costs that had previously been associated with these state positions. Officials in the three states, however, could not provide a dollar figure for the cost savings that resulted from consolidation. State officials also told us that consolidation improved the quality of services for participants in the WIA Adult and TANF programs. An official in Utah noted the consolidation allowed job seekers to apply for assistance they had not considered in the past, allowed employment counselors to cluster services that made sense for the client, and allowed clients to experience seamless service delivery. 
These benefits reflected what the official said was one of the visions of consolidation: having one employment plan per client, rather than multiple employment plans for clients served by multiple programs. While Florida officials acknowledged that a subset of TANF clients have significant barriers to employment— such as mental health issues—that one-stop centers may not be well equipped to address, officials said that the one-stops in their state are able to address the employment and training needs of the majority of TANF clients. When asked about the quality of the TANF and workforce programs in Florida, Texas, and Utah, Labor officials were not aware of any performance problems in these programs and added that they view all three states as forerunners in program improvement efforts. That said, they noted that Utah may not be representative of other states, due to its relatively small and homogenous population. In addition, officials from the Center for Law and Social Policy (CLASP) said that Texas and Florida may place more of an emphasis on quickly finding work for TANF clients than other states. Even with the benefits identified by state officials, consolidation may have its challenges. An official in Utah noted that the reorganization of state agencies and staff was time-consuming and costly, and it took several years before any cost savings were realized. For example, developing a shared database across programs increased costs temporarily. In addition, when states consolidate their agencies, they must still adhere to separate program requirements for TANF and WIA. A 2004 article on service integration by authors from CLASP and the Hudson Institute concluded that options were available for states to make significant progress in integrating TANF and WIA services, but it also noted the difficulty in administering separate programs with different requirements. 
The article specifically noted differences in work requirements, program performance measures, and reporting requirements, among others. A Utah official said that it was important for program administrators to be knowledgeable about these separate reporting requirements and processes across the multiple federal agencies that oversee these programs. Similarly, this official said that direct service staff needed to be knowledgeable about multiple programs and how to allocate costs across these programs. For states that have not consolidated their workforce and welfare agencies, not knowing what actions are allowable under the law may present a challenge to consolidation. According to the article on service integration, states face some legal barriers to fully integrating TANF and WIA services, but if they do not know what is allowable under the law, they may not always exercise the full range of options available to them. In conclusion, understanding how well the one-stop system is reducing fragmentation through coordinated service delivery would be useful in deciding where efficiencies could be achieved, but no study has been undertaken to evaluate the effectiveness of the one-stop system approach. While a few program impact studies have been done or are underway, these studies largely take a program-by-program approach rather than focusing on understanding which approaches are most effective in streamlining service delivery and improving one-stop efficiency. In addition, Labor’s efforts to collaborate with other agencies to assess the effects of different strategies to integrate job-seeker services have been limited. We previously recommended that Labor collaborate with Education, HHS, and the Department of Housing and Urban Development (HUD) to develop a research agenda that examines the impacts of various approaches to program integration on job seeker and employer satisfaction and outcomes. 
Labor has committed to collaborating with other agencies and has involved them in developing inter-agency initiatives for certain targeted activities, but has not yet evaluated the effectiveness of the one-stop system. While states and localities have undertaken some potentially promising initiatives to achieve greater administrative efficiencies, little information is available about the strategies and results of these initiatives; therefore, it is unclear to what extent practices in these states could serve as models for others. Moreover, little is known about the incentives states and localities have to undertake such initiatives and whether additional incentives may be needed. We recently recommended that the Secretaries of Labor and HHS work together to develop and disseminate information that could inform such efforts, including information on state initiatives to consolidate program administrative structures and state and local efforts to colocate additional programs at one-stop centers. As part of this effort, we recommended that Labor and HHS examine the incentives for states and localities to undertake such initiatives and, as warranted, identify options for increasing them. In their responses, Labor and HHS agreed with our recommendations. However, HHS noted that it lacks legal authority to mandate increased TANF-WIA coordination or to create incentives for such efforts. Increasing efficiencies among federal employment and training programs is clearly challenging. These are difficult issues to address because they may require agencies and Congress to re-examine within and across various mission areas the fundamental structure, operation, funding, and performance of a number of long-standing federal programs and activities. As the nation rises to meet its current fiscal challenges, GAO will continue to assist Congress and federal agencies in identifying actions needed to address these issues.
Chairwoman Foxx, Ranking Member Hinojosa, and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For further information regarding this testimony, please contact me at (202) 512-7215 or [email protected]. Individuals making key contributions to this testimony include Dianne Blank, Pamela Davidson, Patrick Dibattista, Alex Galuten, Jennifer Gregory, Isabella Johnson, and Sheila McCoy.
Employment and Training Programs: Opportunities Exist for Improving Efficiency. GAO-11-506T. Washington, D.C.: April 7, 2011.
Opportunities to Reduce Fragmentation, Overlap, and Potential Duplication in Federal Teacher Quality and Employment and Training Programs. GAO-11-509T. Washington, D.C.: April 6, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-441T. Washington, D.C.: March 3, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Multiple Employment and Training Programs: Providing Information on Colocating Services and Consolidating Administrative Structures Could Promote Efficiencies. GAO-11-92. Washington, D.C.: January 13, 2011.
Workforce Investment Act: Labor Has Made Progress in Addressing Areas of Concern, but More Focus Needed on Understanding What Works and What Doesn’t. GAO-09-396T. Washington, D.C.: February 26, 2009.
Workforce Development: Community Colleges and One-Stop Centers Collaborate to Meet 21st Century Workforce Needs. GAO-08-547. Washington, D.C.: May 15, 2008.
Workforce Investment Act: One-Stop System Infrastructure Continues to Evolve, but Labor Should Take Action to Require That All Employment Service Offices Are Part of the System. GAO-07-1096. Washington, D.C.: September 4, 2007.
Workforce Investment Act: Additional Actions Would Further Improve the Workforce System. GAO-07-1051T. Washington, D.C.: June 28, 2007.
Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005.
Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004.
Workforce Investment Act: Labor Actions Can Help States Improve Quality of Performance Outcome Data and Delivery of Youth Services. GAO-04-308. Washington, D.C.: February 23, 2004.
Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing Is Needed. GAO-03-725. Washington, D.C.: June 18, 2003.
Multiple Employment and Training Programs: Funding and Performance Measures for Major Programs. GAO-03-589. Washington, D.C.: April 18, 2003.
Workforce Investment Act: States’ Spending Is on Track, but Better Guidance Would Improve Financial Reporting. GAO-03-239. Washington, D.C.: November 22, 2002.
Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002.
Multiple Employment and Training Programs: Overlapping Programs Indicate Need for Closer Examination of Structure. GAO-01-71. Washington, D.C.: October 13, 2000.
Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000.
Multiple Employment Training Programs: Information Crosswalk on 163 Employment Training Programs. GAO/HEHS-95-85FS. Washington, D.C.: February 14, 1995.
Multiple Employment Training Programs: Major Overhaul Needed to Reduce Costs, Streamline the Bureaucracy, and Improve Results. GAO/T-HEHS-95-53. Washington, D.C.: January 10, 1995.
Multiple Employment Training Programs: Overlap Among Programs Raises Questions About Efficiency. GAO/HEHS-94-193. Washington, D.C.: July 11, 1994.
Multiple Employment Training Programs: Conflicting Requirements Underscore Need for Change. GAO/T-HEHS-94-120. Washington, D.C.: March 10, 1994.
Multiple Employment and Training Programs: Major Overhaul is Needed. GAO/T-HEHS-94-109. Washington, D.C.: March 3, 1994.
Multiple Employment Training Programs: Overlapping Programs Can Add Unnecessary Administrative Costs. GAO/HEHS-94-80. Washington, D.C.: January 28, 1994.
Multiple Employment Training Programs: Conflicting Requirements Hamper Delivery of Services.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses the findings from our recent work on federal employment and training programs and our prior work on the Workforce Investment Act of 1998 (WIA). GAO has recently identified 47 federally-funded employment and training programs for fiscal year 2009, defining them as programs that are specifically designed to enhance the job skills of individuals in order to increase their employability, identify job opportunities, and/or help job seekers obtain employment. These programs, which are administered by nine separate federal agencies--including the Departments of Labor, Education, and Health and Human Services (HHS)--spent about $18 billion in fiscal year 2009 to provide services such as job search assistance and job counseling to program participants.
Seven programs accounted for about three-fourths of this spending, and two--the Wagner-Peyser funded Employment Service (ES) and WIA Adult--together reported serving over 18 million individuals, or about 77 percent of the total number of participants served across all programs. Forty-four of the 47 programs we identified, including those with broader missions such as multipurpose block grants, overlap with at least one other program in that they provide at least one similar service to a similar population. However, differences may exist in eligibility, objectives, and service delivery. Almost all of the 47 programs tracked multiple outcome measures related to employment and training, and the most frequently tracked outcome measure was "entered employment." However, little is known about the effectiveness of employment and training programs because, since 2004, only 5 programs reported conducting an impact study, and about half of all the remaining programs have not had a performance review of any kind. The multiplicity of employment and training programs combined with the limited information regarding impact raise concerns about the extent to which the federally-funded employment and training system is performing as efficiently and effectively as it should. As early as the 1990s, we issued a series of reports that raised questions about the efficiency and effectiveness of the federally-funded employment and training system, and we concluded that a structural overhaul and consolidation of these programs was needed. Partly in response to such concerns, 13 years ago Congress passed WIA. This testimony focuses on two areas where we have identified opportunities to promote greater efficiencies: colocating services and consolidating administrative structures. Increasing colocation of services at a single site, as well as consolidating state workforce and welfare administrative agencies, could increase efficiencies, and several states and localities have undertaken such initiatives.
However, implementation may pose challenges and little information is available about the strategies and results of these initiatives. To facilitate further progress in increasing administrative efficiencies, we have previously recommended that the Secretaries of Labor and HHS work together to develop and disseminate information about such efforts. Sustained congressional oversight is pivotal in promoting further efficiencies. Specifically, Congress could explore opportunities to foster state and local innovation in integrating services and consolidating administrative structures. |
The Ford class features a number of improvements over existing aircraft carriers that the Navy believes will improve the combat capability of the carrier fleet while simultaneously reducing acquisition and life cycle costs. These improvements include an increased rate of aircraft deploying from the carrier (sorties), reduced manning, significant growth in electrical generating capability, and larger service life margins for weight and stability to support future changes to the ship during its expected 50-year service life. To meet its requirements, the Navy developed over a dozen new technologies for installation on Ford-class ships (see appendix II). For example, advanced weapons elevators, using an electromagnetic field to transport weapons within the ship instead of cables, are expected to increase payload capacity by 229 percent as compared to Nimitz-class carriers, while also facilitating reduced manning and higher sortie generation rates. Other technologies allowed the Navy to implement favorable design features into the ship, including an enlarged flight deck, a smaller, aft-positioned island, and a flexible ship infrastructure to accommodate changes during the ship’s service life. As we have previously reported, of the critical technologies, three have presented some of the greatest challenges during development and construction:
Electromagnetic Aircraft Launch System (EMALS) uses an electrically generated moving magnetic field to propel aircraft, placing less physical stress on aircraft than the legacy steam catapult launchers on Nimitz-class carriers.
Advanced Arresting Gear (AAG) is an electric motor-based aircraft recovery system that rapidly decelerates an aircraft as it lands. AAG replaces legacy hydraulic arresting equipment currently in use on Nimitz-class carriers.
Dual Band Radar (DBR) integrates two component radars—the multifunction radar and the volume search radar—to conduct air traffic control, ship self-defense, and other operations. The multifunction radar includes horizon search, surface search, navigation, and missile communications. The volume search radar includes long-range, above-horizon surveillance and air traffic control capabilities. As is typical in Navy shipbuilding, Ford-class carrier construction occurs in several phases and includes the following key events:
Pre-construction and planning: Long-lead time materials and equipment are procured and the shipbuilder plans for beginning ship construction.
Block fabrication, outfitting, and erection: Metal plates are welded together to form blocks, which are the basic building components of the ship. The blocks are assembled and outfitted with pipes, brackets for machinery or cabling, ladders, and any other equipment that may be available for installation. Groupings of blocks form superlifts, which are then lifted by crane into dry dock and welded into the respective location of the ship.
Launch: After the ship is watertight, it can be launched—floated in the water—then towed into a quay or dock area where remaining construction and outfitting of the ship occurs.
Shipboard testing: Once construction and system installations are largely complete, the builder will test the ship’s hull, mechanical and electrical systems, and key technologies to demonstrate compliance with ship specifications and provide assurance that the items tested operate satisfactorily within permissible design parameters.
Delivery: Once the Navy is satisfied that the ship is seaworthy and the shipbuilder has met requirements, the shipyard transfers custody of the ship to the Navy.
Post-delivery activities: After ship delivery, tests are conducted on the ship’s combat and mission-critical systems; the ship’s air wing—consisting of the assigned fixed and rotary wing aircraft, pilots, and support and maintenance personnel—is brought onto the ship; and the crew begins training and operating the ship while at sea. A period of planned maintenance, modernization, and correction of government-responsible deficiencies follows—referred to as Post Shakedown Availability.
Deployment ready: The last stage of the ship acquisition process occurs when all crew and system operational tests, trainings, and certifications have been obtained and the ship has achieved the necessary level of readiness needed to embark on its first deployment.
During and after construction, DOD acquisition policy requires major defense programs, including shipbuilding programs, to execute and complete several types of testing as the ship progresses toward operational milestones, including the point during the acquisition process when the fleet initially receives and maintains the ship:
Developmental testing is intended to assist in the maturation of products, product elements, or manufacturing or support processes. For ship technologies, developmental testing typically includes land-based testing activities prior to introducing a new technology in a maritime environment and commencing with shipboard testing. Developmental testing does not include testing systems in concert with other systems.
Integration testing is intended to assess, verify, and validate the performance of multiple systems operating together to achieve required ship capabilities. For example, integration testing would include, among other things, testing the operability of the DBR in a realistic environment where multiple antennas and arrays are emitting and receiving transmissions and multiple loads are placed upon the ship’s power and cooling systems simultaneously.
Initial Operational Test and Evaluation (IOT&E) is a major component of post-delivery testing intended to assess a weapon system’s capability in a realistic environment when maintained and operated by sailors, subjected to routine wear-and-tear, and employed in combat conditions against simulated enemies. During this test phase, the ship is exposed to as many actual operational scenarios as possible to reveal the weapon system’s capability under stress. The Navy schedules and plans these test phases and milestones using a test and evaluation master plan (TEMP) that is approved by the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation (DT&E) and the Director for Operational Test and Evaluation (DOT&E). The Deputy Assistant Secretary of Defense for DT&E leads the organization within the Office of the Secretary of Defense that is responsible for providing developmental test and evaluation oversight and support to major acquisition programs. The Director, DOT&E leads the organization within the Office of the Secretary of Defense that is responsible for providing operational test and evaluation oversight and support to major defense acquisition programs. Due to their vast size and complexity, aircraft carriers require funding for design, long-lead materials, and construction over many years. To accomplish these activities on the Ford class, the Navy has awarded contracts for two phases of construction—construction preparation and detail design and construction—which are preceded by the start of advance procurement funding. Since September 2008, Newport News Shipbuilding has been constructing CVN 78 under a cost-reimbursement contract for detail design and construction of CVN 78. This contract type places significant cost risk on the government, which may pay more than budgeted should costs be more than expected. 
The Navy now expects to largely repeat the lead ship design for CVN 79, with some modifications, and construct that ship under a fixed-price incentive contract, which generally places more risk on the contractor. To ensure the Navy adheres to its cost estimates, Congress, in the National Defense Authorization Act for Fiscal Year 2007, established a $10.5 billion procurement cost cap for CVN 78, and an $8.1 billion cost cap for each subsequent carrier. If the Navy determines adjustments to the cost cap are necessary, it must first obtain statutory authority from Congress, which means it would be required to submit a proposal to Congress increasing the cost cap. The 2007 legislation also contains six provisions that allow the Navy to make adjustments to the cost cap (increasing or decreasing) without seeking statutory authority: cost changes due to economic inflation; costs attributable to shipbuilder compliance with changes in Federal, State, or local laws; outfitting and post-delivery costs; insertion of new technologies onto the ships; cost changes due to nonrecurring design and engineering; and costs associated with correction of deficiencies that would otherwise preclude safe operation and crew certification. The National Defense Authorization Act for Fiscal Year 2014 further expanded the list of allowable adjustments, solely for CVN 78, to include cost changes due to urgent and unforeseen requirements identified during shipboard testing. Since 2007, the Navy has sought and been granted adjustments to CVN 78’s cost cap to the current amount of $12.9 billion, which were attributed to construction cost overruns and economic inflation. In 2013, the Navy increased CVN 79’s cost cap to $11.5 billion, citing inflation and additional non-recurring design and engineering work. Subsequently, the National Defense Authorization Act for Fiscal Year 2014 increased the legislated cost cap for any follow-on ship in the Ford-class to $11.5 billion.
In addition, the Navy delayed CVN 79’s delivery by 6 months, from September 2022 to March 2023, to reflect changes in the ship’s budget. Figure 1 outlines the Navy’s acquisition timeline for the Ford class, along with adjustments made to the legislated cost cap throughout the course of the shipbuilding program. In August 2007 and September 2013, we reported on the programmatic challenges associated with technology development, design, construction, and testing of the lead ship (CVN 78). In our 2007 report, we noted that delays in Ford-class technology development and overly optimistic cost estimates would likely result in higher lead ship costs than what the Navy allotted in its budget. We recommended actions to improve the realism of the CVN 78 budget estimate and the Navy’s cost surveillance capacity, as well as to develop carrier-specific tests of the DBR to ensure the radar meets carrier-specific requirements. The Navy addressed some, but not all, of our recommendations. Our 2013 report found delays in technology development, material shortfalls, and construction inefficiencies were contributing to increased lead ship construction costs and potential delays to ship delivery. We also found the Navy’s ability to demonstrate CVN 78’s capabilities after delivery was hampered by test plan deficiencies, and reliability shortfalls of key technologies could lead to the ship deploying without those capabilities. Lastly, we concluded that ongoing uncertainty in CVN 78’s construction could undermine the Navy’s ability to realize additional cost savings during construction of CVN 79—the follow-on ship.
These findings led to several recommendations to DOD:
conduct a cost-benefit analysis on required CVN 78 capabilities, namely reduced manning and the increased sortie generation rate, in light of known and projected reliability shortfalls for critical systems;
update the Ford-class program’s test and evaluation master plan to allot sufficient time after ship delivery to complete developmental test activities prior to beginning integration testing;
adjust the planned post-delivery test schedule to ensure that system integration testing is completed before IOT&E;
defer the CVN 79 detail design and construction contract award until land-based testing for critical systems is complete; and
update the CVN 79 cost estimate on the basis of actual costs and labor hours needed to construct CVN 78.
While DOD agreed with some of our recommendations, it did not agree with our recommendation to defer the award of CVN 79’s detail design and construction contract until certain testing of critical technology systems were completed, noting that deferring contract award would lead to cost increases resulting from the required re-contracting effort, among other things. Shortly after we issued our report, however, the Navy postponed awarding the construction contract until the first quarter of fiscal year 2015, citing the need for additional time to negotiate more favorable pricing with the shipbuilder as well as for the shipbuilder to continue to implement and demonstrate cost savings. The extent to which CVN 78 will be delivered within the Navy’s revised schedule and cost goals is dependent on deferring work and costs to the ship’s post-delivery period. Meeting CVN 78’s current schedule and cost goals will require the shipbuilder to overcome lags in the construction schedule. Successful tests of the equipment and systems now installed on the ship (referred to as shipboard testing) will also be necessary.
However, challenges with certain key technologies are likely to further exacerbate an already compressed test schedule. With the shipbuilder embarking on one of the most complex phases of construction with the greatest likelihood for cost growth, cost increases beyond the current $12.9 billion cost cap appear likely. In response, the Navy is deferring work until after ship delivery to create a reserve to help ensure that funds are available to pay for any additional cost growth stemming from remaining construction risks. In essence, the Navy will have a ship that is less complete than initially planned at ship delivery, but at a greater cost. The strategy of deferring work will result in the need for additional funding later, which the Navy plans to request through its post-delivery and outfitting budget account—Navy officials view this plan as an approach to managing the cost cap. However, increases to the post-delivery and outfitting budget account are not captured in the total end cost of the ship, thereby obscuring the true costs of the ship. The shipbuilder appears to have resolved many of the engineering and material challenges that we reported in September 2013. These challenges resulted in inefficient and out-of-sequence work that led to a revision of the construction and shipboard test schedules and contributed to an increase to the ship’s legislated cost cap from $10.5 billion to the current $12.9 billion. Nevertheless, with about 20 percent of work remaining to complete construction and the shipboard test program under way, the lagging effect of these issues is creating a backlog of construction activities that further threaten the ship’s revised delivery date and may lead to further increased costs. As we have found in our previous work, additional cost increases are likely to occur because the remaining work on CVN 78 is generally more complex than much of the work occurring in the earlier stages of construction.
As shown in table 1, the shipbuilder continues to face a backlog of construction activities, including completing work packages, which are sets of defined tasks and activities during ship construction and are how the shipbuilder manages and monitors construction progress through the construction master schedule; outfitting of individual compartments on the ship; and transferring custody of completed compartments and hull, mechanical, and electrical systems to the Navy, referred to as “compartment and system turnover.” As the shipbuilder completes construction and compartment outfitting activities, the shipboard testing phase of the project commences. Testing of the ship’s hull, mechanical, and electrical systems is scheduled to be completed by early February 2016, about 2 months before the ship’s anticipated delivery date at the end of March 2016. The shipboard test program is meant to ensure correct installation and operation of the equipment and systems in a maritime environment. This is a complex and iterative process that requires sufficient time for discovering problems inherent with the start-up and initial operation of a system, performing corrective work, and retesting to ensure that the issues have been resolved. However, as a result of previous schedule delays, the shipbuilder compressed the shipboard test plan, resulting in a schedule that leaves insufficient time for discovery and correction should problems arise. Further, the construction delays discussed above directly affect the builder’s ability to test the ship’s hull, mechanical, and electrical systems, thus increasing the likelihood of additional testing delays. For example, testing of the ship’s fire sprinklers was delayed because construction of the sprinkling system was not completed on time. In other instances, delays stemming from construction can have a cascading effect on the test program.
As another example, testing of the ship’s plumbing fixtures was delayed until testing of the potable water distribution system was completed and the system activated. Another integral part of the shipboard test program is testing the ship’s key technologies, many of which are being operated for the first time in a maritime environment, and ensuring that these technologies function as intended. Four of these technologies are instrumental in executing CVN 78’s mission—AAG, EMALS, DBR, and the advanced weapons elevators. Although these technologies are, for the most part, already installed on the ship, certain technologies are still undergoing developmental land-based testing. Except for the advanced weapons elevators, which are managed by the shipbuilder, the other technologies are being developed by separate contractors, with the government providing the completed system to the shipbuilder for installation and testing. The shipboard test programs for EMALS and the advanced weapons elevators are currently under way, while AAG and DBR testing is scheduled to commence in fiscal year 2015. However, developmental testing for AAG, EMALS, and DBR is taking place concurrently at separate land-based facilities (as well as aboard the ship). This situation presents the potential for modifications to be required for the shipboard systems that are already installed if land-based testing reveals problems. Three of these systems—AAG, EMALS, and DBR—have experienced additional developmental test delays since our September 2013 report (as shown in figure 2). Following is more information on the status of testing of these key technologies.
Shipboard testing for AAG is scheduled to begin in March 2015, but according to the CVN 78 program office, the AAG contractor is redesigning the system’s hydraulic braking equipment to add filtration, and the shipbuilder is replacing associated piping, which will likely delay the start of system testing. In addition, the AAG contractor has to complete over 50 modifications to the system before shipboard testing can begin; these modifications are needed to address issues identified during developmental testing at the land-based test site. As we previously found, AAG experienced several failures during land-based testing, which led to redesign and modification of several subsystems, most notably the water twisters—a device used to absorb energy during an aircraft arrestment. CVN 78 program officials expressed concerns that the rework cannot be completed on time to support the current shipboard test schedule, and they attribute the delays to the immaturity of AAG when it was installed on the ship. The shipboard test program is further at risk because additional design changes and modifications to the shipboard AAG units remain likely. This is because the Navy will now be conducting land-based testing of AAG even as shipboard testing is under way. As a result of the issues discussed above, the Navy further delayed the schedule for land-based testing (as shown in figure 2) and changed the test strategy to better ensure that it could meet the schedule for testing live aircraft aboard the ship. AAG’s previous land-based test plan was to sequentially test each aircraft type planned for CVN 78 as a simulated load on a jet car track. After completing jet car track testing for all aircraft types, the actual aircraft were to be tested with the AAG system on a runway. This strategy allowed for discovery of issues with each aircraft type prior to advancing to the next stage of testing.
However, earlier this year the AAG program office changed its strategy so that each aircraft type will be tested sequentially at the jet car track and runway sites. Once an aircraft completes both types of testing, testers will re-configure the sites to test the next type of aircraft, according to AAG program officials. Figure 3 shows the difference in AAG test strategies along with the overall ship test schedule. The program office plans to complete this revised testing approach with the F/A-18 E/F Super Hornet fighter first, as this aircraft will be most in use aboard the carrier. While the Navy stated that this change was necessary to ensure that at least one aircraft type would be available to certify the system for shipboard testing, it further increases the potential for discovering issues well past shipboard testing and even ship delivery. The shipbuilder began EMALS activation and shipboard testing activities in August 2014, as planned. This is the first time EMALS is being operated and tested in a maritime environment, in a multiple catapult configuration, using a shared power source, with multiple electromagnetic fields. Any additional delays with the EMALS shipboard test schedule will directly affect CVN 78’s delivery date. Specifically, a key aspect of the test program is testing the system’s launch capabilities by using weighted loads that simulate an aircraft—referred to as dead-loads—off the flight deck of the carrier. This test must be completed by November 2015, the point at which the shipbuilder is scheduled to turn the front of the ship toward the dock to begin testing the ship’s propulsion system in preparation for subsequent sea trials. At the same time, land-based testing for EMALS is still ongoing, and the Navy now anticipates testing will be completed during the third quarter of fiscal year 2016.
Shipboard testing is scheduled to begin in January 2015, but according to the CVN 78 program office, the DBR contractor must first make 5 modifications to the installed radar system prior to its initial activation. In particular, the power regulating system needs to be modified, which requires removal, modification, and re-installation of certain power control modules. Shipbuilder officials told us that any delay to the installation of these items will likely affect the DBR shipboard test schedule, but according to the DBR program office, software and hardware modifications to correct this issue are complete and the ship-set units are in production. Program officials do not anticipate additional changes to the system’s hardware prior to commencing shipboard testing, but they do expect further software modifications as land-based development testing progresses. As a result, there is the risk that additional modifications to the shipboard DBR system will be required. In addition, land-based testing of the DBR is based on a conglomeration of engineering design models that is not representative of the version of the radar installed on the ship, which further increases the likelihood that shipboard testing will require more time and resources than planned. Shipboard testing of components to the advanced weapons elevators began in February 2012, but testing has not proceeded as planned. As of August 2014, the shipbuilder had operated 4 of the ship’s 11 weapons elevators, but testing delays have occurred due to faulty components and software integration challenges, and premature corrosion of electrical parts. The shipbuilder has increased the amount of construction labor allocated to the weapons elevators in an effort to recover from these schedule delays. CVN 78’s schedule has limited ability to absorb the additional delays that appear likely, given the remaining construction and testing risks. 
A delay in the ship’s planned March 2016 delivery could result in a breach of DOD’s acquisition policy. Among other things, a breach would require the CVN 78 program manager to seek approval from the Navy and DOD to further revise the schedule. Shipbuilder officials maintain that they can meet the ship’s revised delivery date, but acknowledge that the revised shipboard test plan is proving challenging because of delays associated with construction and concurrent developmental testing of key technologies discussed above. To regain lost schedule, the shipbuilder may choose to expend additional labor hours by paying workers overtime or hiring subcontracted labor; however, these actions would result in additional and unanticipated costs. The CVN 78 program’s costs are approaching the legislated cost cap of $12.9 billion, but further cost growth is likely based on performance to date as well as ongoing construction, shipboard testing, and technology development risks. To improve the likelihood of meeting the March 2016 delivery date and to compensate for potential cost growth, the Navy is (1) removing work from the scope of the construction contract and (2) deferring purchase and installation of some mission-related systems provided by the government to the shipbuilder until after ship delivery. Consequently, completion of CVN 78 may not occur until years later than initially planned. According to the CVN 78 program office, this approach creates a funding reserve to cover cost growth due to unknowns in the shipboard test program, particularly given that many of the ship’s systems are being operated and tested for the first time in a maritime environment. However, the value of the deferred work may not be adequate to fully fund all remaining costs needed to produce an operational ship. Table 2 shows the type of work being deferred from the current plan to post-delivery, and the program office’s estimated value of the work.
As of September 2014, program officials said they are still negotiating with the shipbuilder on the dollar value of construction labor that it plans to descope from CVN 78’s construction contract. The program office plans to use this approximately $96 million reserve in the likely event there is additional cost growth above the $12.9 billion budgeted cost cap. However, given the ongoing construction and testing risks previously discussed, this cash reserve is unlikely to be adequate to cover the entire expected cost growth of the ship. As shown in table 3, the shipbuilder, the CVN 78 program office, and the Naval Sea Systems Command Cost Engineering Office (the Navy’s cost estimators) are all forecasting a cost overrun at ship completion ranging from $780 million to $988 million. According to shipbuilder and CVN 78 program office estimates, the program will meet the $12.9 billion legislated cost cap and has sufficient funds to cover the anticipated cost overruns. If, however, costs increase according to the Naval Sea Systems Command Cost Engineering Office’s estimate or higher, additional funding will be needed above the cash reserve amount. Further, cost analysis offices within the Office of the Secretary of Defense have tracked the ship’s costs for several years and report that, without significant improvements in the program’s overall cost performance, CVN 78’s total costs will likely exceed the program’s $12.9 billion cost cap by approximately $300 million to $800 million. If costs fall within this range, the Navy will need to either defer additional work to post-delivery or request funding under the ship’s procurement budget line above the $12.9 billion cap. Under the cost cap legislation, such an action would require prior congressional approval. To fund work deferred to the post-delivery period in the event of unbudgeted cost growth, the CVN 78 program office is considering using funding from the Outfitting and Post-Delivery budget account.
Program officials noted that other Navy shipbuilding programs have also used funds from the outfitting and post-delivery accounts to complete deferred construction work. Navy officials view this as an approach to managing the cost cap. At the same time, however, because the Navy considers post-delivery and outfitting activities as “non end-cost” items—meaning that funds from this account are not included when calculating the total construction cost of the ship—visibility into the ship’s true construction cost is obscured. CVN 78 will not demonstrate its required capabilities prior to deployment because it cannot achieve certain key requirements according to its current test schedule. Specifically, the ship will not have demonstrated its increased sortie generation rate (SGR), due to low reliability levels of key aircraft launch and recovery systems, and required reductions in personnel remain at risk. The Navy expected both of these requirements to contribute to greater capability and lower costs than Nimitz-class carriers. Further, the ship is likely to face operational shortfalls resulting from a ship design that restricts accommodations. Finally, tight time frames for post-delivery testing of key systems due to aforementioned technology development delays could result in the ship deploying without fully tested systems if deployment dates remain unchanged. The Navy’s business case for acquiring the Ford-class depended on significantly improved capabilities over legacy Nimitz-class carriers, specifically an increased SGR and reduced manning profile. The Navy anticipated that these capabilities would reduce total ownership costs for the ship. Our September 2013 report found several shortfalls in the Navy’s projections for meeting the SGR and reduced manning requirements, and our current work found continuing problems in these areas. 
The Navy used the SGR requirement to help guide ship design, but CVN 78 will not be able to fully demonstrate this capability before the ship is deployment ready. As shown in table 4, CVN 78’s SGR requirements are higher than the demonstrated performance of the Nimitz-class. The increased SGR requirement for the Ford-class reflected earlier DOD operational plans to mount campaigns in two theaters simultaneously. Under this scenario, a high SGR was essential to quickly achieving warfighting objectives, but according to Navy officials, this requirement is no longer reflective of current operational plans. The Navy plans to demonstrate CVN 78’s SGR requirement using a modeling and simulation program in 2019, near the end of CVN 78’s IOT&E period. As the modeling and simulation program continues to mature and develop, the Navy, according to the TEMP, plans to collect data from a sustained and surge flight operation and then incorporate these data into the model. Once this is completed and the model is accredited, the Navy will subsequently run a simulation of the full SGR mission. Current runs of the model indicate the ship can meet the required sustained and surge sortie rates, which Navy and shipbuilding officials involved with the modeling and simulation effort explained is primarily due to flight deck redesign and not the ship’s new aircraft launch and recovery technologies. However, ongoing issues with the development of EMALS and AAG are resulting in low levels of system reliability that will be a barrier to achieving required SGR rates once the model is populated with actual data from these technologies. System reliability is critical to the carrier’s ability to meet the SGR requirement and is measured in terms of mean cycles between critical failures, or the average number of times each system launches or recovers aircraft before experiencing a failure. 
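The reliability metric described above lends itself to a simple computation. As an illustrative sketch (the cycle counts, failure counts, and threshold below are hypothetical assumptions for illustration, not the Navy's actual test data), mean cycles between critical failures is simply total launch-and-recovery cycles divided by the number of critical failures observed:

```python
def mean_cycles_between_critical_failures(total_cycles: int, critical_failures: int) -> float:
    """Average number of launch/recovery cycles completed per critical failure.

    If no critical failures occurred, the demonstrated value is at least the
    total number of cycles run, so that total is returned as a floor.
    """
    if critical_failures == 0:
        return float(total_cycles)
    return total_cycles / critical_failures


# Hypothetical test tallies, for illustration only.
demonstrated = mean_cycles_between_critical_failures(total_cycles=2000, critical_failures=10)
required = 1250  # hypothetical reliability threshold needed to sustain the sortie rate

print(f"demonstrated: {demonstrated:.0f} cycles per critical failure")
print(f"meets hypothetical requirement: {demonstrated >= required}")
```

A system well below such a threshold, as the report describes for EMALS and AAG, would need its demonstrated ratio to improve severalfold before it could support the required sortie rates.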
As shown in table 5, the most recent available metrics, from January 2014, show EMALS and AAG reliability to be so low that these systems are unlikely to achieve the reliability rates needed to support SGR requirements before the demonstration event in 2019, or for years after the ship is deployment ready. As a result of these systems’ low reliability, we questioned the Navy’s sortie generation requirement in our September 2013 report and recommended that the Navy re-examine whether it should maintain this requirement or modify it—seeking requirements relief from the Joint Requirements Oversight Council if the Navy found it was not operationally necessary. DOT&E has also raised questions about the need for increased sortie generation. DOT&E analyzed past aircraft carrier operations in major conflicts and reported that the CVN 78 SGR requirement is well above historical levels. In its January 2014 annual report, DOT&E cited the poor reliability of critical systems, such as EMALS and AAG, noting that performance of these systems could cause a series of delays during flight operations that could make the ship more vulnerable to attack. DOT&E plans to assess CVN 78 performance during IOT&E by comparing its demonstrated SGR to the demonstrated performance of the Nimitz-class carriers. Although the carrier would not meet its required capability, DOT&E stated that a demonstrated SGR less than the CVN 78 requirement, but equal to or greater than the performance of the Nimitz class, could potentially be acceptable. However, the Navy would still be required to obtain approval from the Joint Requirements Oversight Council to lower the requirement. Another CVN 78 key performance requirement is a reduced ship’s force, relative to the Nimitz class, with the goal of lowering total operational costs. “Ship’s force” refers to all personnel aboard a carrier except those designated as part of the air wing and in certain support or other assigned roles.
The Navy’s reduced manning requirement for CVN 78 is a ship’s force with 500 to 900 fewer personnel than Nimitz-class carriers. Table 6 compares manning totals for the Nimitz class with Ford-class manning projections. As of September 2014, the Navy projects a 663-sailor reduction in the ship’s force, which represents a 163-person margin over the minimum required reduction of 500 personnel. But our analysis found that the carrier is not likely to achieve this level of reduction and still meet its intended capabilities. Key factors contributing to the difficulties in meeting the reduced manning requirement include the following:
- Poor reliability of key systems, including EMALS and AAG, and sailors’ limited experience in operating these systems in a maritime environment may require additional personnel. For example, AAG will require more maintenance than planned due to changes to the system’s hydraulic braking system, according to Navy officials.
- Additional ship’s force personnel will be needed to meet the surge SGR of 270 sorties per day, based on the Navy’s most recent operational test and evaluation force assessment.
- Additional operational personnel, particularly in the supply department, will likely be needed on the ship, according to the CVN 78 pre-commissioning unit—the crew assigned to the ship while it is under construction.
These factors are likely to increase the total number of personnel on CVN 78. As a reflection of the Navy’s confidence in reducing manning on the Ford class, the ships were designed with significantly fewer berths (4,660) as compared to the Nimitz class to accommodate the ship’s force, air wing, and all other embarked personnel. However, the number of berths is now fixed, and the ship cannot accommodate additional manpower without significant design changes.
Further, the Navy requires new ship designs, including CVN 78, to provide a habitability margin—a percentage of extra berths above the projected ship’s force to accommodate potential personnel growth throughout the service life of the ship. This margin includes berths as well as support services for personnel aboard the ship, such as food and sanitation facilities. Given current manning projections and available accommodations, as shown in table 7, the Navy recognizes that CVN 78 falls well short of meeting its required habitability margin. This required margin is equivalent to 10 percent of the ship’s force or 263 berths. As a result, the CVN 78 program office plans to request a waiver for this requirement from the Chief of Naval Operations. In fact, the carrier currently has so few extra berths that it can only accommodate a slight increase in personnel. And the Navy’s estimated accommodation needs do not take into account the likelihood that additional personnel will be needed above and beyond the Navy’s current projected ship’s force (2,628 sailors). In addition, spare berthing is also used for personnel temporarily assigned to the ship, such as inspectors, trainers, or visitors. If CVN 78 must enlarge its ship’s force as well as accommodate personnel temporarily assigned to the ship, it is likely that no actual accommodations would be available. Consequently, CVN 78 must be “manning neutral,” so that personnel coming aboard must be matched by personnel debarking, in accordance with the ship’s operational needs and personnel specialties. This situation is further exacerbated because the Navy will need to operate CVN 78 with a greater percentage of its crew than the Nimitz class. 
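The arithmetic behind the habitability shortfall can be checked directly from the figures cited in this section: 4,660 total berths, a projected total embarked force of 4,533, and a projected ship's force of 2,628. A quick sketch (our own illustration of the report's numbers, not an official Navy calculation):

```python
total_berths = 4660           # design accommodation for ship's force, air wing, and other embarked personnel
projected_total_force = 4533  # current projected total embarked personnel
projected_ships_force = 2628  # current projected ship's force

# Required habitability margin: 10 percent of the ship's force.
required_margin = round(0.10 * projected_ships_force)  # 263 berths

# Spare berths actually available above the projected total force.
spare_berths = total_berths - projected_total_force    # 127 berths

shortfall = required_margin - spare_berths
print(f"required margin: {required_margin}, spare berths: {spare_berths}, shortfall: {shortfall}")
```

The 127 spare berths fall well short of the 263-berth margin the requirement calls for, which is why the program office plans to seek a waiver.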
According to the Navy’s most recent (2011) analysis of manning options for CVN 78, staffing the ship at less than 100 percent (that is, with fewer personnel than the current projected total force of 4,533) had an adverse effect on quality of life at sea because the crew had to perform additional duties or remain on duty for longer periods. This manning analysis also found that reducing staffing to 85 percent—which is typical for a Nimitz-class ship—compromised ship operations. The analysis concluded that careful management of personnel specializations will be needed and recommended cross-training personnel in key departments to minimize the risk to ship operations. Future costs for the ship could also increase if the Navy must eventually convert spaces to accommodate additional berthing. The Navy has further compressed post-delivery plans to test CVN 78’s capabilities and increased concurrency between test phases since our last report in September 2013. This means that there will be less time for operational testing, which is the Navy’s opportunity to test and evaluate the ship against realistic conditions before its first deployment. As we reported in September 2013, the Navy added in 2012 an additional integration test period to the CVN 78 TEMP, as recommended by the Deputy Assistant Secretary of Defense for DT&E and the Director, DOT&E. This integration testing is important because it allows ship systems still in development—such as EMALS and AAG—to be tested together in their shipboard configuration. In our report, we recommended that the Navy adjust its planned post-delivery test schedule to complete this integration testing before commencing IOT&E. The Navy did not agree, and the overlap between integration testing and IOT&E remains and is now longer.
This situation constrains the Navy’s ability to discover and resolve problems during the integration testing phase before beginning IOT&E, and it increases the risk of additional discovery during IOT&E itself. In addition, the Navy and DOD still have not resolved whether CVN 78 will be required to conduct the Full Ship Shock Trial for the Ford class. As we reported last year, the program office deferred this testing to the follow-on ship, CVN 79, a strategy that did not receive DOT&E approval. According to program officials, final determination of whether the trial will be conducted on CVN 78 or CVN 79 will be made by the Under Secretary of Defense for Acquisition, Technology, and Logistics near the end of 2014. Since our last report, the Navy doubled the length of the new integration testing period, but clarified that this testing also includes ongoing developmental testing of key systems, assessment of prior test results, and repairs or changes to fix deficiencies identified in earlier test periods. In fact, the Navy plans to conduct well over a dozen certifications and major ship test events during this period. For example, it plans to conduct a total ship survivability trial—testing CVN 78’s capability to recover from a casualty situation and the extent of mission degradation in a realistic operational combat environment. If the Navy discovers significant issues during testing, or events cause additional delays to testing, it will have to choose whether to deploy a ship without having fully tested systems or to delay deployment until testing is complete. To help manage this risk, the Navy plans to divide operational testing into two phases.
According to program officials, this approach will allow developmental testing, deficiency correction, and integration testing to continue on the mission-related systems installed after ship delivery and on those systems that are not required to support the first phase of operational testing. The first phase of operational testing will focus on testing the ship’s ability to accomplish basic tasks by stressing the ship’s crew, aviation facilities, and the combat and mission-related systems installed prior to delivery under realistic peacetime operating conditions. The second phase of operational testing incorporates embarked strike groups and other detachments that support operations and tests CVN 78’s ability to conduct major combat operations, particularly the tactical employment of the air wing in simulated joint, allied, coalition, and strike group environments. The goal is to stress CVN 78’s aviation, combat, and mission-related systems, particularly those systems installed after ship delivery. Figure 4 shows these changes to the CVN 78 post-delivery test schedule. The current test schedule is optimistic, with little room for delays that may occur as a result of issues identified during the integration and operational test phases. Even if the Navy meets the current schedule, it will not complete all necessary testing in the time remaining before the ship is deployment ready. This issue will be further exacerbated if land-based or shipboard testing discussed earlier reveals significant problems with the ship’s systems, as the time needed to address such issues may interfere with the ship’s integration and operational test phases. Navy officials responsible for operational testing stated that they will only conduct operational testing when shipboard systems are deemed ready.
However, neither the CVN 78 program office nor the Navy’s operational test personnel know how often system testing can be deferred before affecting the schedule for operational testing on other systems, particularly given the interoperation of systems on a carrier. For example, the DBR supports ship combat systems and simultaneously conducts air traffic control. If it is not ready to support flight operations in the first segment of IOT&E, combat operations in the second segment that also rely on the radar are likely to be affected. To meet the $11.5 billion legislated cost cap for CVN 79, the Navy is assuming the shipbuilder will make efficiency gains in construction that are unprecedented for aircraft carriers and has proposed a revised acquisition strategy for the ship. With shipbuilder prices for CVN 79 growing beyond the Navy’s expectations, the Navy extended the construction preparation (CP) contract to allow additional time for the shipbuilder to reduce cost risks prior to awarding a construction contract. In addition, the Navy’s proposed revision to the ship’s acquisition strategy would defer a significant amount of work needed to make the ship fully operational until after ship delivery. While this strategy may enable the Navy to initially achieve the cost cap and is allowed under the cost cap provision without the need for congressional approval, it also results in transferring the costs of planned capability upgrades—previously included in the CVN 79 baseline—to future maintenance periods to be paid through other (non-CVN 79 shipbuilding) accounts. The Navy’s $11.5 billion cost estimate for CVN 79 is underpinned by the assumption that the shipbuilder will significantly lower construction costs through realizing efficiency gains. While performance to date has been better than that of CVN 78, early indicators suggest that the Navy is unlikely to realize anticipated efficiencies at the level necessary to meet cost and labor hour reduction goals.
In its May 2013 report to Congress on CVN 79 program management and cost control measures, the Navy stated that 15-25 percent fewer labor hours (about 7 million to 12 million hours) will be needed to construct CVN 79 as compared to CVN 78. Although the Navy and shipbuilder continue to look for labor hour reduction opportunities, thus far, shipbuilder representatives have identified improvements that they stated will save about 800,000 labor hours. As we identified in September 2013, many of the proposed labor hour reductions are attributed to lessons learned during construction of CVN 78 and revising CVN 79’s build plan to perform pre-outfitting work earlier in the build process. This is because work completed earlier in the build process, such as in a shop environment, is more efficient and less costly than work done later on the ship where spaces are more difficult to maneuver within. In addition, the shipbuilder’s revised build plan consolidates and increases the size of superlifts—fabricated units and block assemblies that are grouped together and lifted into the dry dock—to form larger sections of the ship. Other notable labor hour savings initiatives involve increased use of new welding technologies and improved cable installation techniques. Construction of CVN 79 is still in the initial stages, and most of the projected cost savings and labor hour reduction opportunities are in structural units and parts of the ship that are not yet under construction. However, there are indications that achieving the anticipated 7 million to 12 million hour reduction goal will be challenging. As of the end of March 2014, the shipbuilder had completed fabrication of 205 structural units—about 18 percent of the ship’s total—with over a hundred more in various stages of fabrication.
Although the ship is still in the early stages of construction, the cumulative labor hour reductions for the completed units fell short of the Navy and shipbuilder’s expected reduction by about 3.5 percent, as shown in figure 5. Program officials stated that while the cumulative reduction has not yielded the expected results, a number of the structural units were completed prior to the shipbuilder’s implementation of labor saving initiatives. They further added that completed units, more representative of remaining work, have yielded approximately a 16 percent reduction in labor hours for fitters and welders. In addition, the shipbuilder’s scheduling processes may further limit insight into the effectiveness of these initiatives. We evaluated the shipbuilder’s processes and tools used to plan and schedule work against GAO’s best practices in scheduling. We identified scheduling practices that may interfere with the shipbuilder’s and Navy’s ability to accurately manage and monitor the construction schedule and the way in which the shipbuilder allocates labor, equipment, and material resources. In particular, the shipbuilder’s enterprise resource management system (which tracks use of labor and materials) and master construction schedule (which tracks the time required to complete work packages) are stand-alone, independent systems, which means that changes in one system are not automatically updated in the other. Consequently, the shipbuilder—and subsequently the government—lacks real-time insight into whether resources are being used according to schedule. This lack of insight limits management’s ability to effectively respond to delays, thus driving inefficiencies into the build process, and also limits the shipbuilder’s ability to take advantage of opportunities when work is completed ahead of schedule.
Although the shipyard is transitioning to a new scheduling software program, the shipbuilder does not plan to revise its existing scheduling and resource management process to enable better insight for CVN 79. The legacy scheduling system the shipbuilder employed did not allow for data to be exported to the government. The new scheduling system has the ability to allow for increased Navy oversight since the data are exportable, thus allowing, among other things, the ability to independently examine the effects of schedule slippage or realism of the shipbuilder’s estimated labor needs. According to program officials, the Navy intends to incorporate this data as a deliverable item in the CVN 79 construction contract. Even with the shipbuilder’s improvements, reducing construction of CVN 79 by approximately 7 million to 12 million labor hours as compared to CVN 78 would be unprecedented in aircraft carrier construction. As shown in table 8, with each successive aircraft carrier build, the number of labor hours needed to complete construction has, at most, decreased by 9.3 percent as compared to the previous ship (with CVN 69 compared to CVN 68 accounting for the largest percentage decrease). Although CVN 78 and CVN 79 are similar to CVN 68 and CVN 69 in that there is a first-to-second ship of a class transition, in most instances sizeable labor hour reductions only occurred as a result of constructing two aircraft carriers through a single contract, rather than acquiring the ships individually through separate construction contracts as is the case with the Ford class. The Navy planned to award the CVN 79 detail design and construction contract in late fiscal year 2013, but subsequently delayed the award and extended the construction preparation contract because negotiations with the shipbuilder were taking longer than the Navy anticipated.
As a result, the Navy now intends to award the detail design and construction contract at the end of the first quarter of fiscal year 2015, which program officials stated allows sufficient time to negotiate prices and demonstrate cost reductions and process improvements that will lead to lowering CVN 79’s construction costs. In the meantime, more work is now being completed under the construction preparation contract, with almost 60 percent of the ship’s total structural units under the CP contract, as shown in table 9 below. According to program officials, this work accounts for about 20 percent of the ship’s overall construction effort. By extending the CP contract, the program office expects that it will reduce material costs by 10-20 percent from CVN 78 and prevent late deliveries of items, such as valves, that led to significant material shortfalls and out-of-sequence construction work and contributed to that ship’s cost growth, as we noted in our September 2013 report. Under the Navy’s material procurement strategy, approximately 95 percent of CVN 79’s material to be procured by the shipbuilder was under contract as of September 2014. In addition, the Navy recently completed an affordability and capability review of CVN 79 in an effort to further reduce construction costs and shipbuilding requirements to ensure that it could meet the $11.5 billion cost cap—which Navy officials stated was otherwise unachievable. In response, the Navy plans to (1) institute cost savings measures by reducing some work and equipment; (2) revise the acquisition strategy to shift more work to post-delivery—including installation of mission systems—while still meeting statutory requirements for deploying CVN 79; and (3) deliver the ship with the same baseline capability as CVN 78—postponing a number of planned mission system upgrades and modernizations until future maintenance periods.
Program officials told us they plan to seek approval to initiate these changes at CVN 79’s upcoming program review with the Office of the Secretary of Defense, which is now scheduled for December 2014, in advance of the detail design and construction contract award. Most notably, the Navy plans to depart from its planned installation of DBR on CVN 79, in favor of an alternative radar system, which it expects to provide a better technological solution at a lower cost. By seeking competitively awarded offers, Navy officials anticipate realizing savings of about $180 million for CVN 79. Final determination of CVN 79’s radar solution is not scheduled to occur until after March 2015, at least 3 months after the estimated detail design and construction contract award. It is around this time that the program office anticipates it will solicit proposals from prospective bidders. Program officials told us that they intend to work within the current design parameters of the ship’s island, which they say would limit extensive redesign and reconfiguration work to accommodate the new radar. While the extent of redesign work is unknown, such a change will still result in additional ship construction costs, which could offset the Navy’s estimate of DBR savings. Other cost savings measures are wide-ranging and include eliminating one of the four AAG units planned for the ship (Nimitz-class carriers have 3 operational arresting units); eliminating redundant equipment requirements such as the ship’s emergency power unit for the steering gear and spare low pressure air compressors; and modifying test requirements for certain mechanical systems. In addition to these cost savings measures, the CVN 79 program office is proposing a two-phased approach for ship construction and delivery.
Although the details of the Navy’s revised acquisition strategy continue to evolve, the basic premise is that delivery by the shipbuilder will consist of only the hull, mechanical and electrical aspects of the ship (referred to as phase I), followed by completion of remaining construction work and installation of the warfare and communications systems during the post-delivery period (referred to as phase II). At ship delivery, CVN 79 will have its full propulsion capability, as well as the core systems for safe navigation and crew safety; and necessary equipment to demonstrate flight deck operations, such as EMALS and AAG. All remaining construction work, primarily consisting of the procurement and installation of several warfare and communications systems, will be completed post-delivery. The program office currently plans to maintain the ship’s 2023 delivery date, but as shown in figure 6, the revised strategy extends the acquisition schedule and the ship’s deployment ready date by about 15 months. Program officials stated that despite this delay in the schedule it would still meet the statutorily required minimum number of operational aircraft carriers because CVN 79 would still be deployment ready shortly after USS Nimitz (CVN 68) retires, which is currently slated for fiscal year 2025. As currently planned, the revised strategy, by design, will result in a less capable and less complete ship at delivery. According to CVN 79 program officials, reducing the shipbuilder’s scope of work, along with a reduction in some construction requirements, will lead to negotiating more favorable pricing of the detail design and construction contract. In addition, they noted that maintaining the current delivery schedule will deliberately allow for a slower pace of construction, thus potentially requiring less use of overtime or leased labor.
Further, program officials state that delaying installation of warfare and communications systems—such as those systems with high obsolescence risk—can potentially limit procuring equipment that has been surpassed by technology advances by the time the ship begins phase II of the Navy’s revised strategy. Finally, Navy officials believe that adopting this approach will enable the program to reduce costs by introducing additional competition for the ship’s systems and installation work after delivery. While the two-phased strategy may enable the program to initially stay within the legislated cost cap, it will transfer the costs of a number of known capability upgrades previously included in the CVN 79 baseline to other (non-CVN 79 shipbuilding) accounts. As shown in table 10, the program office plans to defer installation of a number of systems to future maintenance periods. Based on current estimates, the value of the deferred systems is about $200 million - $250 million. Moreover, this strategy will result in deferring installation of systems and equipment needed to accommodate the carrier variant of the Joint Strike Fighter aircraft (F-35C) until fiscal year 2027 at the earliest. Further, should construction costs grow above estimates, the Navy may subsequently choose to use funding intended for phase II work to pay for construction cost increases without increasing the cost cap. The Navy would have this option because additional funding through post-delivery budget accounts is not included in calculating the ship’s end cost, similar to the aforementioned situation with CVN 78. According to Navy officials, this approach allows the program to manage the cost cap without seeking statutory authority. Constructing and delivering an aircraft carrier is a complex undertaking. The Ford-class program, in particular, has faced a steep challenge due to the development, installation, and integration of numerous technologies—coupled with an optimistic budget and schedule.
The Ford class is intended to provide significant operational advantages over the Nimitz class. However, with about 80 percent of the lead ship constructed, the program continues to struggle with construction inefficiencies, development issues, testing delays, and reliability shortfalls. These are issues that have been mounting for a number of years. Now, as the program embarks on its most challenging phase—shipboard testing—additional cost increases in excess of the $2.3 billion since 2009 appear likely. To manage this risk, the Navy is creating a cost buffer by deferring construction work and installation of mission-related systems to the post-delivery period. This strategy may provide a funding cushion in the near term, but it may not be sufficient to cover all potential cost increases. After raising the cost cap several times, the Navy is now managing the cost cap by reducing the scope of the delivered ship and is considering paying for the deferred scope through a budget account normally used for post-delivery activities. This contradicts the purpose of the congressional cost cap, which is to hold the Navy accountable for the total cost estimate for buying a deployable ship. Further, after an investment of at least $12.9 billion, CVN 78 may not achieve improved operational performance over the Nimitz class of aircraft carriers as promised for some time to come. Reliability shortfalls and development uncertainties in key Ford-class systems will prevent the ship from demonstrating its required sortie generation rate before initial deployments. Personnel accommodation restrictions resulting from the ship’s design have the potential to cause operational limitations that the Navy will have to manage closely—a constraint that does not exist in the Nimitz class. We previously recommended re-assessing these requirements; the Navy agreed that such an analysis is appropriate, but one that it would not pursue until the conclusion of operational testing.
As we previously concluded, waiting until this point would be too late to make effective tradeoffs among cost, schedule, and performance for follow-on ships. As the Navy prepares to award the detail design and construction contract for the next Ford-class ship, CVN 79, it is clear that achieving the cost cap will be challenging. While the Navy and the shipbuilder are working to reduce costs, the Navy’s ability to achieve the congressional cost cap relies, in part, on deferring planned capability improvements until later maintenance periods. From an accountability and oversight standpoint, it would be preferable to keep the scope of the delivered ship constant—an essential component of a baseline—and raise the cost cap accordingly. The legislated cost cap for Ford-class aircraft carrier construction provides a limit on procurement funds. However, the legislation also provides for adjustments to the cost cap. To understand the true cost of each Ford-class ship, Congress should consider revising the cost cap legislation to ensure that all work included in the initial ship cost estimate that is deferred to the post-delivery and outfitting account is counted against the cost cap. If warranted, the Navy would be required to seek statutory authority to increase the cap. We are not making any new recommendations, but our recommendations from our September 2013 report remain valid. We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix III, DOD agreed with much of the report but disagreed with our position on cost cap compliance. In particular, DOD disagreed that a change in cost cap legislation is necessary because it believes all procurement funds are counted toward the cost cap. While it is true that the current cost cap legislation does require the inclusion of all procurement funds, up to this point the Navy has not included funding for outfitting and post-delivery costs in its end cost estimates.
Further, the current legislation allows the Navy to make changes to the ships’ outfitting and post-delivery budget accounts without first seeking statutory authority. In the event that costs increase above the Navy’s current estimates, the Navy is considering deferring work until the post-delivery period and funding it through the outfitting and post-delivery accounts, which would limit visibility into the ship’s true end cost. Our intention is not necessarily, as DOD states, to keep the post-delivery and procurement accounts separate, but rather to create a stable cost baseline for accountability and oversight purposes. DOD also disagreed with our conclusion that constructing CVN 79 within the current cost cap might not be achievable, but agreed that it will be challenging. DOD stated that the cost cap for CVN 79 is achievable largely due to the Navy’s two-phased acquisition approach, which is now intended to deliver the next carrier with the same capabilities as CVN 78. We agree that reducing the scope of CVN 79 prior to ship delivery should also reduce the cost estimate in the near term. As we noted in our report, however, the Navy initially included planned capability improvements in CVN 79’s baseline estimate. These improvements will now occur during a later maintenance period, the costs of which are to be shifted to other (non-CVN 79 shipbuilding) accounts at a later date. While the Navy’s approach to CVN 79’s cost estimate may initially appear to meet the cost cap, it serves to obscure the ship’s true cost. As we concluded in the report, from an accountability standpoint, it would be preferable to keep the scope of CVN 79 constant and raise the cost cap accordingly, if needed. In addition, DOD provided technical comments that were incorporated as appropriate. These comments included, among others, additional information on CVN 78’s shipboard test program and the Navy’s two-phased approach to constructing and delivering CVN 79.
We are sending copies of this report to interested congressional committees, the Secretary of Defense, and the Secretary of the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report examines remaining risks in the CVN 78 program since September 2013 by assessing: (1) the extent to which CVN 78 will be delivered to the Navy within its revised cost and schedule goals; (2) if, after delivery, CVN 78 will demonstrate its required capabilities through testing before the ship is deployment ready; and (3) the steps the Navy is taking to achieve CVN 79 cost goals. To identify challenges in delivering the lead ship within current budget and schedule estimates, we reviewed Department of Defense (DOD) and contractor documents that address technology development efforts including test reports and program schedules and briefings. We also visited the lead ship of the Ford-class carriers, USS Gerald R. Ford (CVN 78), to observe construction progress and improve our understanding of the installation progress of the critical technologies aboard CVN 78. We evaluated Navy and contractor documents outlining cost and schedule parameters for CVN 78 Navy budget submissions, contract performance reports, quarterly performance reports, and program schedules and briefings. In addition, we reviewed the shipbuilder’s Earned Value Management data and developed our own cost and labor hour estimates at ship completion and compared this with data provided by the Navy and shipbuilder. We also relied on our prior work evaluating the Ford-class program and shipbuilding best practices to supplement the above analyses. 
To further corroborate documentary evidence and gather additional information in support of our review, we conducted interviews with relevant Navy and contractor officials responsible for managing the technology development and construction of CVN 78, such as the Program Executive Office, Aircraft Carriers; CVN 78 program office; Newport News Shipbuilding (a division of Huntington Ingalls Industries); Supervisor of Shipbuilding, Conversion, and Repair Newport News Command; Aircraft Launch and Recovery program office; and the Program Executive Office, Integrated Warfare Systems. We also held discussions with the Naval Sea Systems Command’s Cost Engineering and Industrial Analysis Division; the Defense Contract Management Agency; and the Defense Contract Audit Agency. To evaluate whether CVN 78 will demonstrate its required capabilities, we identified requirements criteria in the Future Aircraft Carrier Operational Requirements Document and compared requirements with reliability data and reliability growth projections for key systems. We also examined the CVN 78 preliminary ship’s manning document and wargame analysis of planned manning, as well as the Commander, Operational Test and Evaluation Force’s most recent operational assessment for the ship to identify potential manpower shortfalls. To evaluate whether the Navy’s post-delivery test and evaluation strategy will provide timely demonstration of required capabilities, we analyzed (1) development schedules and test reports for CVN 78 critical technologies; (2) testing reports and operational assessments for CVN 78; and (3) the Navy’s November 2013 revised test and evaluation master plan to identify concurrency among development, integration, and operational test plans. 
We corroborated documentary evidence by meeting with Navy and contractor officials responsible for developing key systems, managing ship testing, and conducting operational testing, including the Program Executive Office-Aircraft Carriers, the CVN 78 program office, Newport News Shipbuilding, the Aircraft Launch and Recovery program office, the Navy’s land-based test site for EMALS and AAG in Lakehurst, N.J., the Program Executive Office for Integrated Warfare Systems, Office of the Director, Operational Test and Evaluation, Office of the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation, the Office of the Commander, Navy Operational Test and Evaluation Force, and the Office of the Chief of Naval Operations Air Warfare. To assess the steps the Navy is taking to achieve CVN 79 cost goals, we reviewed our prior work on Ford-class carriers; shipbuilder data identifying cost savings and labor hour reduction opportunities as well as lessons learned from constructing CVN 78; the CVN 79 construction preparation contract and contract extensions; CVN 78 and CVN 79 labor hour data for completing advanced construction work; as well as CVN 79 construction plans and reports, program briefings, and Navy budget submissions. We also conducted an analysis of the shipbuilder’s scheduling systems and processes that are used for constructing CVN 78 and assessed this against GAO’s scheduling best practices. We attempted to conduct a similar analysis of CVN 79’s schedule. However, the integrated master schedule used for construction—which is maintained by the shipbuilder—was not up to date and did not reflect the status of advanced construction work at the time of our analysis. As a result, we only reviewed the scheduling processes that the shipbuilder plans to use for CVN 79.
To supplement our analysis and gain additional visibility into the Navy’s actions for ensuring CVN 79 is built within the constraints of the cost cap legislation, we reviewed several years of defense authorization acts and interviewed officials from the Program Executive Office-Aircraft Carriers; CVN 78 program office; CVN 79 and CVN 80 program office; Huntington Ingalls Industries, Newport News Shipbuilding; Supervisor of Shipbuilding, Conversion, and Repair Newport News Command; Program Executive Office, Integrated Warfare Systems; the Office of the Chief of Naval Operations Air Warfare Division; and Naval Sea Systems Command’s Cost Engineering and Industrial Analysis Division. We conducted this performance audit from December 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. A number of new technologies are being installed on Ford-class aircraft carriers that are designed to increase the ship’s capability and lower life cycle costs. Below is an overview of these key technologies along with the approximate placement on the ship. In addition to the contact named above, key contributors to this report were Diana Moldafsky, Assistant Director; Christopher E. Kunitz; Brian P. Bothwell; Juana S. Collymore; Burns C. Eckert; Laura Greifner; John A. Krump; Jean L. McSween; Karen Richey; Jenny Shinn; and Oziel Trevino. | Ford-class aircraft carriers will feature new technologies designed to reduce life-cycle costs. The lead ship, CVN 78, has been under construction since 2008, and early construction on CVN 79 is underway.
In 2007, Congress established a cap for procurement costs—which has been adjusted over time. In September 2013, GAO reported on a $2.3 billion increase in CVN 78 construction costs. GAO was mandated to examine risks in the CVN 78 program since its September 2013 report. This report assesses (1) the extent to which CVN 78 will be delivered within revised cost and schedule goals; (2) if CVN 78 will demonstrate its required capabilities before ship deployment; and (3) the steps the Navy is taking to achieve CVN 79 cost goals. To perform this work, GAO analyzed Navy and contractor data and scheduling best practices. The extent to which the lead Ford-class ship, CVN 78, will be delivered by its current March 2016 delivery date and within the Navy's $12.9 billion estimate is dependent on the Navy's plan to defer work and costs to the post-delivery period. Lagging construction progress as well as ongoing issues with key technologies further exacerbate an already compressed schedule and create further cost and schedule risks. With the shipbuilder embarking on one of the most complex phases of construction with the greatest likelihood for cost growth, cost increases beyond the current $12.9 billion cost cap appear likely. In response, the Navy is deferring some work until after ship delivery to create a funding reserve to pay for any additional cost growth stemming from remaining construction risks. This strategy will result in the need for additional funding later, which the Navy plans to request through its post-delivery and outfitting budget account. However, this approach obscures visibility into the true cost of the ship and results in delivering a ship that is less complete than initially planned. CVN 78 will deploy without demonstrating full operational capabilities because it cannot achieve certain key requirements according to its current test schedule.
Key requirements—such as increasing aircraft launch and recovery rates—will likely not be met before the ship is deployment ready and could limit ship operations. Further, CVN 78 will not meet a requirement that allows for increases to the size of the crew over the service life of the ship. In fact, the ship may not even be able to accommodate the likely need for additional crew to operate the ship without operational tradeoffs. Since GAO's last report in September 2013, post-delivery plans to test CVN 78's capabilities have become more compressed, further increasing the likelihood that CVN 78 will not deploy as scheduled or will deploy without fully tested systems. The Navy is implementing steps to achieve the $11.5 billion congressional cost cap for the second ship, CVN 79, but these are largely based on ambitious efficiency gains and reducing a significant amount of construction, installation, and testing—work traditionally completed prior to ship delivery. Since GAO last reported in September 2013, the Navy extended CVN 79's construction preparation contract to allow additional time for the shipbuilder to reduce cost risks and incorporate lessons learned from construction of CVN 78. At the same time, the Navy continues to revise its acquisition strategy for CVN 79 in an effort to ensure that costs do not exceed the cost cap, by postponing installation of some systems until after ship delivery, and deferring an estimated $200 million - $250 million in previously planned capability upgrades of the ship's combat systems to be completed well after the ship is operational. Further, if CVN 79 construction costs should grow above the legislated cost cap, the Navy may choose to use funding intended for work to complete the ship after delivery to cover construction cost increases. As with CVN 78, the Navy could choose to request additional funding through post-delivery budget accounts not included in calculating the ship's end cost. 
Navy officials view this as an approach to managing the cost cap. However, doing so impairs accountability for actual ship costs. Congress should consider revising the cost cap legislation to improve accountability of Ford-class construction costs, by requiring that all work included in the initial ship cost estimate is counted against the cost cap. If warranted, the Navy would be required to seek statutory authority to increase the cap. GAO is not making new recommendations, but believes previous recommendations, including a re-examination of requirements and improvements to the test plan, remain valid. DOD agreed with much of the report, but disagreed with GAO's position on the cost caps. GAO believes that changes to the legislation are warranted to improve cost accountability. |
DOD is one of the largest and most complex organizations in the world. In support of its military operations, the department performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. As we have previously reported, the DOD systems environment that supports these business functions is complex and error prone, and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be manually entered. For fiscal year 2015, the department requested about $10 billion for its business system investments. According to the department, as of April 2015, its environment includes approximately 2,179 business systems. Of these systems, DOD reports that, for fiscal year 2015, the department approved certification requests for 1,182 business systems covered by the fiscal year 2005 NDAA’s certification and approval requirements. Figure 1 shows how many of these 1,182 covered systems are associated with each functional area. DOD currently bears responsibility, in whole or in part, for about half (17 of 32) of the programs across the federal government that we have designated as high risk. Seven of these areas are specific to the department, and 10 other high-risk areas are shared with other federal agencies. Collectively, these high-risk areas in major business operations are linked to the department’s ability to perform its overall mission and affect the readiness and capabilities of U.S. military forces. As such, DOD’s business systems modernization is one of the department’s specific high-risk areas and is essential for addressing many of the department’s other high-risk areas. 
For example, modernized business systems are integral to the department’s efforts to address its financial, supply chain, and information security management high-risk areas. Congress included provisions in the fiscal year 2005 NDAA, as amended, that are aimed at ensuring DOD’s development of a well-defined business enterprise architecture and associated enterprise transition plan, as well as the establishment and implementation of effective investment management structures and processes. The act requires DOD to, among other things, establish an investment approval and accountability structure along with an investment review process; not obligate funds for a defense business system program with a total cost in excess of $1 million over the period of the current future-years defense program unless the approval authority certifies that the business system program meets specified conditions, including complying with the business enterprise architecture and having appropriate business process reengineering conducted; develop a business enterprise architecture that covers all defense business systems; develop an enterprise transition plan for implementing the architecture; and identify systems information in DOD’s annual budget submissions. The fiscal year 2005 NDAA also requires that the Secretary of Defense submit an annual report to the congressional defense committees on the department’s compliance with these provisions. DOD submitted its most recent annual report to Congress on April 6, 2015, describing steps taken, under way, and planned to address the act’s requirements. DOD’s approach to business systems modernization includes reviewing systems annually to ensure that they comply with the fiscal year 2005 NDAA’s business enterprise architecture and business process reengineering requirements. This effort includes both a certification of compliance by lower-level department authorities and an approval of this certification by higher-level department authorities.
According to the act, this certification and approval is to occur before systems are granted permission to obligate funds for a given fiscal year. These efforts are to be guided by DOD’s Chief Management Officer (CMO) and Deputy Chief Management Officer (DCMO). Specifically, the CMO’s responsibilities include developing and maintaining a departmentwide strategic plan for business reform; establishing performance goals and measures for improving and evaluating overall economy, efficiency, and effectiveness; and monitoring and measuring the progress of the department. The DCMO’s responsibilities include recommending to the CMO methodologies and measurement criteria to better synchronize, integrate, and coordinate the business operations to ensure alignment in support of their warfighting mission and developing and maintaining the department’s enterprise architecture for its business mission area. Table 1 describes selected roles and responsibilities and the composition of key governance entities and positions related to business systems modernization as they were documented for the fiscal year 2015 business system certification and approval cycle. Within the military departments, the entities described in table 1 are supported by portfolio managers who oversee groups of business system investments within specific functional areas. For example, the Department of the Navy’s financial management portfolio manager is responsible for overseeing the Navy’s portfolio of financial management systems. In order to manage and oversee the department’s business operations and approximately 1,180 covered defense business systems, the Office of the DCMO developed the Integrated Business Framework. According to officials from the office, this framework is used to align the department’s strategic objectives—laid out in the National Security Strategy, Quadrennial Defense Review, and Strategic Management Plan—with its defense business system investments. 
Using the overarching goals of the Strategic Management Plan, principal staff assistants developed six functional strategies that cover nine functional areas. These functional strategies are to define business outcomes, priorities, measures, and standards for a given functional area within DOD. The business objectives and compliance requirements laid out in each functional strategy are to be integrated into the business enterprise architecture. The precertification authorities in the Air Force, Navy, Army, and other departmental organizations use the functional strategies to guide the development of organizational execution plans, which are to summarize each component’s business strategy for each functional area. Each plan includes a description of how the component’s goals and objectives align with those in the functional strategies and the Strategic Management Plan. In addition, each organizational execution plan includes a portfolio of defense business system investments organized by functional area. The components submit each of these portfolios to the Defense Business Council for certification on an annual basis. According to the department’s 2015 Congressional Report on Defense Business Operations, for the fiscal year 2015 certification and review cycle, the department empowered the military department chief management officers to manage their business systems portfolios and conduct portfolio reviews. Results were presented to the Defense Business Council and were to address topics such as major improvements and cost reductions, return on investment, risks and challenges, deviations from prior plans, and future goals. 
According to DOD’s investment management guidance, for the fiscal year 2015 certification and approval cycle, the Defense Business Council was to review the organizational execution plans and associated portfolios based on four investment criteria—compliance, strategic alignment, utility, and cost—to determine whether or not to recommend the portfolio for certification of funding. The Vice Chairman of the Deputy’s Management Action Group/Defense Business Systems Management Committee was to approve certification decisions and then document the decision in an investment decision memorandum. These memoranda were to indicate whether an individual organizational execution plan has been certified; conditionally certified (i.e., obligation of funds has been certified and approved but may be subject to conditions that restrict the use of funds, a time line for obligation of funds, or mandatory changes to the portfolio of business systems); or not certified (i.e., certification is not approved due to misalignment with strategic direction, mission needs, or other deficiencies). DOD’s business enterprise architecture is intended to serve as a blueprint for the department’s business transformation efforts. In particular, the architecture is to guide and constrain implementation of interoperable defense business systems by, among other things, documenting the department’s business functions and activities and the business rules, laws, regulations, and policies associated with them. According to DOD, its architecture is being developed using an incremental approach, where each new version of the architecture addresses business mission area gaps or weaknesses based on priorities identified by the department. The department’s business enterprise architecture focuses on documenting information associated with its end-to-end business process areas (e.g., hire-to-retire and procure-to-pay). 
These end-to-end business process areas may occur across the department’s nine functional areas. For example, hire-to-retire occurs within the human resources management functional area, while the cost management business process area occurs across the acquisition, financial management, human resources management, installations and environment, and logistics and materiel readiness functional areas. According to DOD officials, the current approach to developing the business enterprise architecture is both a “top down” and “bottom-up” approach. Specifically, the architecture focuses on developing content to support investment management and strategic decision making and oversight (top down) while also responding to department needs associated with supporting system implementation, system integration, and software development (bottom up). Consistent with DOD’s tiered approach to business systems management, the department’s approach to developing its business enterprise architecture involves the development of a federated enterprise architecture, where member architectures (e.g., Air Force, Army, and Navy) conform to an overarching corporate or parent architecture and use a common vocabulary. This approach is to provide governance across all business systems, functions, and activities within the department and improve visibility across the respective efforts. DOD defines business process reengineering as a logical methodology for assessing process weaknesses, identifying gaps, and implementing opportunities to streamline and improve the processes to create a solid foundation for success in changes to the full spectrum of operations. DOD’s reengineering efforts are intended to help the department rationalize its covered business system portfolio, improve its use of performance management, control scope changes, and reduce the cost of fielding business capability. 
According to DOD officials, the department has taken a holistic approach to business process reengineering, which includes a portfolio and end-to-end perspective. It has also issued business process reengineering guidance that calls for alignment of defense business systems within the Organizational Execution Plan to its functional strategy’s strategic goals. An important component of the department’s business process reengineering efforts is the problem statement development and review process. A problem statement is developed when a defense business system is seeking certification for a development or modernization effort. The statement is to include, among other things, a description of the problem that the system intends to address and a discussion of the costs, benefits, and risks of various alternatives that were considered. As part of the annual certification and approval process, problem statements are to be reviewed to confirm that appropriate business process reengineering has been conducted on investments seeking certification. The department has implemented 5 of the 16 recommendations that GAO has made since June 2011 to address each of the overarching provisions for improving business systems management in the fiscal year 2005 NDAA. The fiscal year 2005 NDAA, as amended, includes provisions associated with developing a business enterprise architecture and enterprise transition plan, improving the department’s investment management structures and processes, improving its efforts to certify defense business systems, and mandated budgetary reporting. Since 2011, we have issued four reports in response to the act’s requirement that we assess the actions taken by the department to comply with the act’s provisions. In those reports, we have made recommendations to address each of the act’s overarching provisions for improving business systems management. Table 2 identifies the recommendations we have made since 2011 associated with the fiscal year 2005 NDAA.
Table 3 presents a summary of the current status of these recommendations. Appendix II provides additional information about the status of each recommendation. As of April 2015, the department had implemented 5 of the 16 recommendations that we have made since June 2011. For example, the department has implemented the recommendation to improve its reporting of business system data in its annual budget request. In particular, the department has established common elements in its three primary repositories used for tracking information about business systems, which allows information about individual business systems to be matched across systems. In addition, the Office of the CIO demonstrated that it conducts periodic data quality assessments. As a result, the department is better positioned to report more reliable information in its annual budget request and to maintain more accurate information about business systems to support its efforts to manage them. In addition, the department has improved the alignment of its Planning, Programming, Budgeting, and Execution process with its business systems certification and approval process. For example, according to the department’s February 2015 certification and approval guidance, Organization Execution Plans are to include information about certification requests for the upcoming fiscal year as well as over the course of the Future Years Defense Program. As a result, the department’s business system certification and approval process can support better informed decisions about system certifications and inform recommendations on the resources provided to defense business systems as part of the Planning, Programming, Budgeting, and Execution process. The department has partially implemented the remaining 11 recommendations. 
For example, the department’s February 2015 investment management guidance, which describes DOD’s business system certification and approval process, identifies four criteria and specifies the associated assessments that are to be conducted when reviewing and evaluating component-level organizational execution plans in order to make a portfolio-based investment decision. The guidance also states that return on investment should be considered when evaluating program cost. However, it does not call for the use of actual-versus-expected performance data and predetermined thresholds. Further, the Office of the DCMO has developed a draft resource allocation plan for each of its directorates and their respective divisions. This draft plan includes staffing profiles that describe each division’s needed staff competencies and qualifications. However, the Office of the DCMO did not demonstrate that it has addressed other important aspects of strategic human capital planning. For example, the office did not demonstrate that it has developed a skills inventory, needs assessment, gap analysis, and plan to address identified gaps, as called for by our recommendation. Appendix II provides additional information about the recommendations that DOD has fully and partially implemented. Implementing the remaining 11 recommendations will improve DOD’s modernization management controls and help fulfill the department’s execution of the requirements of the act. DOD’s business enterprise architecture and process reengineering efforts are not fully achieving the intended outcomes described in statute. More specifically, with respect to the architecture, portfolio managers (managers) we surveyed reported that it was generally not effective in achieving its intended outcomes and that its usefulness in achieving benefits, such as reducing the number of applications, was limited.
With respect to process reengineering, managers reported these efforts were moderately effective at streamlining business processes, but less so in limiting the need to tailor commercial off-the-shelf systems. Portfolio managers cited a number of challenges impeding the usefulness and effectiveness of these two initiatives, such as the availability of training, lack of skilled staff, parochialism, and cultural resistance to change. DOD has various improvement efforts under way to address some of these challenges; however, additional work is needed and the managers provided some suggestions for closing the gap. More fully addressing the cited challenges would help increase the utility and effectiveness of these initiatives in driving greater operational efficiencies and savings. Appendix I provides additional details about our survey methodology. The fiscal year 2005 NDAA, as amended, requires DOD to develop a business enterprise architecture that covers all defense business systems and will be used as a guide for these systems. According to the act, the architecture is intended to help achieve the following outcomes: Enable DOD to comply with all applicable laws, including federal accounting, financial management, and reporting requirements. Guide, permit, and constrain the implementation of interoperable defense business systems. Enable DOD to routinely produce timely, accurate, and reliable business and financial information for management purposes. Facilitate the integration of budget, accounting, and program information and systems. Provide for the systematic measurement of performance, including the ability to produce timely, relevant, and reliable cost information. 
The act also specifies that the department is not to obligate funds for defense business system programs that have a total cost in excess of $1 million unless the system’s approval authority certifies that the program complies with the business enterprise architecture and the certification is subsequently approved by the department’s Investment Review Board. Achieving the act’s intended outcomes would contribute to the department’s ability to use the architecture to realize important benefits that we and others have previously identified, such as cost savings or avoidance. For example, if the architecture effectively guides, permits, and constrains the implementation of interoperable systems, that would contribute to increased information sharing and improved system interoperability. As another example, using the architecture to produce timely and reliable business and financial information would contribute to improving management decisions associated with enhanced productivity and improved business and IT alignment, among other things. The majority of DOD portfolio managers we surveyed reported that the business enterprise architecture has not been effective in meeting its intended outcomes. More specifically, half of the managers surveyed reported that the business enterprise architecture was effective in enabling compliance with all applicable laws. However, fewer than 40 percent reported that the architecture was effective in helping to achieve the other outcomes called for by the fiscal year 2005 NDAA. Table 4 provides additional information on survey responses regarding the act’s specific requirements. Portfolio managers provided additional details to further explain their survey responses. Their comments included the following: The architecture is a standalone effort that does not drive comprehensive portfolio and business management through the various DOD components.
The architecture is overwhelming to review and is not integrated with other activities that occur throughout the remainder of the year. The compliance requirements are not sufficiently defined to enable system interoperability. Portfolio managers also reported that the usefulness of DOD’s business enterprise architecture in achieving various potential benefits is limited. For example, 75 percent reported limited achievement of improved change management and 74 percent reported limited achievement of streamlined end-to-end business processes. In addition, 71 percent reported limited achievement of benefits such as a reduced number of applications, improved business and IT alignment, enhanced productivity, and achieving financial benefits such as cost savings or cost avoidance. Table 5 summarizes the portfolio managers’ survey responses. Although managers reported limited achievement of benefits, two provided specific examples of individual benefits associated with the business enterprise architecture. More specifically, one cited saving $10 million annually due to the establishment of a DOD-wide military housing system that has replaced a number of individual systems. A second reported $11.5 million in architecture-related savings through the retirement of 48 real property and financial management systems. In addition, officials from the Office of the DCMO provided specific examples of benefits that they stated can be attributed, at least in part, to the department’s business architecture. For example, according to these officials, two proposed new defense business system investments were not approved by DOD due, in part, to architecture reviews that revealed the requested capabilities were already available in existing systems. The surveyed DOD portfolio managers reported that their functional areas face many challenges in achieving the outcomes described in the NDAA for fiscal year 2005. 
The most frequently cited challenges reported were the usability of the compliance tool (79 percent), frequent changes to the architecture (75 percent), the availability of training (71 percent), the availability of skilled staff (71 percent), parochialism (67 percent), and cultural resistance to change (63 percent). Table 6 summarizes the survey responses regarding challenges to achieving the architecture’s intended outcomes. Officials from the Office of the DCMO, including the Lead Architect for the business enterprise architecture and the Chief of Portfolio Management, described various efforts under way to address selected challenges identified in our survey results. With regard to the top-ranked challenge (usability of DOD’s architecture compliance tool), the office has been working on a more robust replacement tool. As of April 2015, the office had moved architecture content and associated compliance information from its previous tool into its Integrated Business Framework-Data Alignment Portal. Further, the department plans to require all fiscal year 2016 compliance assessments to be completed in this portal environment. According to officials from the Office of the DCMO, this change will help ensure that architecture-related information is available in the same place, which will help support more sophisticated analysis of information about business systems. For example, by combining information about the architecture, compliance information, functional strategies, and organizational execution plans, the department could more easily conduct analyses that will help support portfolio management. According to these officials, examples of such analyses include the ability to identify the funds certified and approved for various business activities and the ability to identify systems that conduct similar system functions.
With regard to the challenge associated with limited alignment between corporate and component architectures, the officials from the Office of the DCMO stated that they intend to develop an overarching (or federated) architecture that will capture content from, and allow governance across, the department (e.g., Army, Navy, and Air Force). In 2013, we recommended that DOD establish a plan for how it would address business enterprise architecture federation. The department’s improvement efforts only address selected reported challenges. However, portfolio managers offered a number of suggestions that relate to other identified challenges that may help close gaps in these efforts. Key suggestions included: Improve tools: Four of 24 managers offered suggestions that relate to compliance tool usability. For example, one portfolio manager stated that functionality should be added to the architecture compliance tool to automatically create and build the architecture artifacts mentioned in compliance guidance using the information already included in the tool for each system. Another portfolio manager stated that there are no tools available that portfolio managers can use to analyze their portfolios relative to the architecture. Provide additional training: Two managers offered suggestions associated with additional training. For example, one manager reported that the compliance tool is not user friendly and little to no training was offered when programs were required to use it to assert compliance. As a result, this manager added that more training should be made available for using the compliance tool. Start the process earlier in a system’s life cycle: One manager suggested the architecture be addressed earlier in the acquisition life cycle, such as in the analysis of alternatives phase, in order to help assess whether existing solutions are already employed in other areas of the enterprise.
If the architecture compliance process uncovers potential duplication or overlap, it might be easier to stop development of a duplicative system earlier in its life cycle rather than waiting until a business process is more reliant on a planned system that is closer to becoming operational. Establish priorities: One portfolio manager suggested that the department develop departmental business improvement and integration priorities and develop clearly understandable and verifiable compliance standards that will guide and constrain systems development to help achieve those priorities. Improve guidance: Two managers suggested that the department improve its guidance to clarify the documentation that systems developed prior to the existence of the business enterprise architecture are required to prepare to address the business enterprise architecture compliance requirement. Improve content: Seven managers offered suggestions associated with improving content. For example, one manager stated that the business enterprise architecture is large and cumbersome and incomplete in many areas. Addressing the challenges cited by the portfolio managers could help increase the utility and effectiveness of the department’s business enterprise architecture in driving greater operational efficiencies and cost savings. The fiscal year 2005 NDAA, as amended, establishes expected outcomes for the department’s business process reengineering efforts. 
The act states that funds for covered business system programs cannot be certified and approved unless each program’s pre-certification authority has determined that, among other things, appropriate business process reengineering efforts have been undertaken to ensure that the business process supported by the program is, or will be, as streamlined and efficient as practicable and the need to tailor commercial off-the-shelf systems to (a) meet unique requirements, (b) incorporate unique requirements, or (c) incorporate unique interfaces has been eliminated or reduced to the maximum extent practicable. As we have previously reported, modifications to commercial off-the-shelf systems should be avoided to the extent practicable as they can be costly to implement. Achieving the intended outcomes of the fiscal year 2005 NDAA would increase the department’s ability to realize key benefits to business systems modernization. For example, reengineering business processes to be as streamlined as possible can result in increased efficiencies, a reduced number of interfaces, and decreased program costs. The department’s business process reengineering efforts have had mixed success in achieving their intended outcomes. Specifically, 63 percent of the portfolio managers we surveyed reported that the efforts were effective in helping to ensure that the business processes supported by the defense business systems they manage are (or will be) as streamlined and efficient as practicable. As an example, one manager reported this effort highlighted the strengths and weaknesses of the systems within their specific portfolio. Another reported that their portfolio has been reduced from 147 systems to 13 due, in part, to the business process reengineering efforts. However, the general consensus among surveyed portfolio managers was that the department’s efforts were less effective in helping to limit tailoring of commercial off-the-shelf systems.
Only 29 percent reported that DOD’s business process reengineering efforts were effective in eliminating or reducing the need to tailor commercial off-the-shelf systems. Tailoring might be required, for example, because existing policy and guidance might limit a system’s ability to conform to a specific approach for executing a business process that is already built into an individual commercial off-the-shelf system. Another reason given was that managers have limited knowledge about the commercial off-the-shelf products that are available via established enterprise licenses and this limited knowledge makes it difficult to conduct effective business process reengineering. Table 7 provides additional information on portfolio managers’ responses regarding the effectiveness of DOD’s business process reengineering efforts. Portfolio managers reported that business process reengineering has been useful in helping to achieve selected benefits. In particular, 70 percent reported that efforts have resulted in streamlined business processes. Sixty-seven percent reported that efforts have resulted in improved documentation of business needs, which is consistent with DOD’s focus on developing problem statements for new capabilities. Such problem statements reflect analysis of a perceived business problem, capability gap, or opportunity. According to officials from the Office of the DCMO, they help ensure that programs are aligned with DOD’s strategic needs, and also assist the department’s efforts in identifying redundancies and duplication. However, only 29 percent of the portfolio managers surveyed reported that efforts to reduce program costs have been effective. Table 8 summarizes the portfolio managers’ survey responses. The surveyed DOD portfolio managers identified a range of challenges to fully achieving the business process reengineering outcomes described in the fiscal year 2005 NDAA. 
In particular, cultural resistance to change was the most frequently cited challenge (71 percent), followed by parochialism (i.e., focusing on one’s own sub-organization rather than having an enterprise-wide view), availability of skilled staff, and availability of training (all at 67 percent). The quality of business process reengineering compliance guidance, the compliance review process, and the timing of the reengineering relative to system development work were also reported as important challenges (all at 63 percent). Table 9 summarizes survey responses to questions about the challenges to business process reengineering. DOD has taken steps to improve its reengineering efforts that may, in part, address some of the challenges identified in our survey results. With regard to parochialism, the department is developing online tools that provide additional information to program managers, portfolio managers, pre-certification authorities, and the Defense Business Council. For example, the department’s problem statement portal is to be a repository for problem statement submissions and is to be available departmentwide. In addition, the department has developed its Integrated Business Framework-Data Alignment Portal, which is to provide, among other things, additional information about individual business systems, such as information about which systems execute specific business activities and system functions. Further, with respect to addressing the challenge associated with the business process reengineering compliance review process, the department has taken steps to help ensure improved accountability for a portion of certification and approval requests.
In particular, according to officials from the Office of the DCMO, the DCMO allowed the military departments more autonomy and responsibility for reviewing their system portfolios during fiscal year 2015 certification and approval reviews. Nevertheless, as we have previously reported, and as discussed in appendix II, this process is not guided by specific criteria for elevating to the Defense Business Council certain systems that might require additional oversight. Notwithstanding these improvement efforts, as reported in feedback by the military department portfolio managers, additional work is needed. These managers provided a number of suggestions to help address the identified challenges. Suggestions included: Improve business process reengineering training: Two portfolio managers offered suggestions that relate to improved training. For example, one manager stated that the department should establish minimum training standards. Improve business process reengineering guidance: Two managers offered suggestions associated with improved guidance. For example, one portfolio manager stated that sufficient guidance does not exist to describe meaningful business process models or how such models should be analyzed. Align business process reengineering with system development activities: One portfolio manager stated that the reengineering process should be more closely tied to acquisition milestones instead of being assessed on an annual basis. According to GAO’s standards for internal control, agencies should ensure that there are adequate means of obtaining information from stakeholders that may have a significant impact on the agency achieving its goals. While we did not evaluate the effectiveness of these suggestions, they may be valuable for the Office of the DCMO to consider in its ongoing and future business process reengineering improvement efforts.
More fully addressing the challenges cited by the portfolio managers would help the department achieve better outcomes, including limiting the tailoring of commercial off-the-shelf systems. DOD has made progress in improving its compliance with section 332 of the NDAA for fiscal year 2005, as amended. Specifically, the department has implemented 5 of the 16 recommendations that we have made since 2011 that are consistent with the requirements of the act and has partially implemented the remaining 11 recommendations. The recommendations not fully implemented relate to improving the department’s investment management processes and efforts to certify defense business systems, among other things. Fully implementing them will help improve DOD’s modernization management controls and fulfill the department’s execution of the act’s requirements. Collectively, DOD’s business enterprise architecture and business process reengineering efforts show mixed results in their effectiveness and usefulness in achieving the intended outcomes and benefits. Among other things, portfolio managers reported that the architecture does not enable DOD to produce reliable and timely information for decision-making purposes. Additionally, DOD’s reengineering efforts are effective in streamlining business processes, but not in reducing the tailoring of commercial software products. Portfolio managers reported that various challenges exist in achieving intended outcomes and benefits, including cultural resistance, parochialism, and a lack of skilled staff. DOD has various improvement efforts under way; however, gaps exist and portfolio managers provided suggestions on how to close some of them. Until these gaps are addressed, the department’s ability to achieve important outcomes and benefits will continue to be limited.
To help ensure that the department can better achieve business process reengineering and enterprise architecture outcomes and benefits, we recommend that the Secretary of Defense utilize the results of our portfolio manager survey to determine additional actions that can improve the department’s management of its business process reengineering and enterprise architecture activities. We received written comments on a draft of this report from DOD’s Deputy Chief Management Officer (DCMO). The comments are reprinted in appendix III. In the comments, the DCMO concurred with our recommendation and stated that the department will use the results of our portfolio manager survey to help make improvements. The DCMO also described associated improvement efforts. For example, the DCMO stated that the department plans to restructure the Business Enterprise Architecture to focus more explicitly on the business processes being executed within the functional domains, which span all levels of the department. DOD officials also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions on matters discussed in this report, please contact me at (202) 512-4456 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Our objectives were to (1) assess the actions by the Department of Defense to comply with section 332 of the National Defense Authorization Act (NDAA) for Fiscal Year 2005, as amended and (2) determine the usefulness and effectiveness of DOD’s business enterprise architecture and business process reengineering processes. To address the first objective, we identified recommendations related to DOD’s business systems modernization efforts that we made in our annual reports from 2011 to 2014 (16 recommendations total) in response to the fiscal year 2005 NDAA’s requirements. Though we have made recommendations in this area prior to 2011, those recommendations have since been closed. We evaluated the department’s written responses and related documentation on steps completed to implement or partially implement the recommendations. Documentation we analyzed included guidance on business enterprise architecture and business process reengineering compliance, guidance on certifying and approving defense business systems, and documentation about the department’s problem statement development and review process. In addition, we interviewed officials from the Office of the Deputy Chief Management Officer and the Office of the Chief Information Officer, and observed a demonstration of the Office of the Deputy Chief Management Officer’s Integrated Business Framework-Data Alignment Portal tool to better understand the actions taken to address our recommendations. We also reviewed the department’s annual report to Congress, which was submitted on April 6, 2015, to identify gaps or inconsistencies with the implementation of the 16 recommendations. To address our second objective, we determined the intended outcomes of the business enterprise architecture and business process reengineering processes by analyzing the fiscal year 2005 NDAA. We also determined potential benefits associated with the processes by reviewing department guidance on the processes and related documentation. 
This includes DOD's business enterprise architecture and business process reengineering guidance, Defense Business System Investment Management Process Guidance, the Business Case Analysis Template, DOD's Business Enterprise Architecture 10.0 AV-1 Overview and Summary Information, the department's Strategic Management Plan, and the Information Resource Management Strategic Plan for fiscal years 2014 and 2015. We also reviewed relevant GAO reports on business enterprise architecture and business process reengineering. We then developed a structured data collection instrument (survey) to gather information on the usefulness of the two specified IT modernization management controls at DOD in achieving their intended outcomes and their effectiveness in achieving associated benefits. As part of this survey, we also developed questions to help us determine (1) challenges related to complying with the processes and (2) suggestions for achieving business enterprise architecture and business process reengineering outcomes, including suggestions for achieving these outcomes in a more cost-effective manner. Selected questions contained a ratings scale for managers to choose a response that was consistent with the aforementioned topic areas. For example, we asked managers to rate the effectiveness of the business enterprise architecture and business process reengineering efforts using a scale with choices such as neither effective nor ineffective and not applicable/no basis to judge. We also asked managers to identify the extent to which their portfolios had achieved benefits associated with business enterprise architecture and business process reengineering efforts using a scale with choices such as little or no extent and not applicable/no basis to judge.
We pre-tested the questions with various DOD officials including officials from the Office of the Deputy Chief Management Officer, and with portfolio and program-level officials within the military departments. As a result, we determined that the military department portfolio managers were in the best position to answer our questions because they manage and have a perspective across an entire portfolio of defense business systems. Officials from DCMO’s Management, Policy, and Analysis Directorate provided us with a list of portfolio managers for the three military departments. We did not include portfolio managers for DOD entities outside of the military departments. We obtained responses from all surveyed portfolio managers (24 in total). Accordingly, these results are generalizable. We analyzed and summarized the survey results to help determine the usefulness and effectiveness of DOD’s business process reengineering and enterprise architecture efforts, as well as related challenges and suggestions for improvement. In addition, though we collected examples of cost savings estimates from managers, and cite them in the report, we did not assess the cited cost savings estimates. We also met with managers of selected DOD business system programs and other knowledgeable DOD officials to discuss their perspectives on DOD’s business enterprise architecture and business process reengineering efforts. This included interviewing officials associated with defense business programs from each of the military departments and from across various business functions, including program managers, enterprise architects, and other technical and program operations officials. Further, when available, we reviewed documentation provided by DOD program managers to substantiate answers provided as part of our interviews. 
We also discussed the survey results with officials from the Office of the DCMO to obtain their perspectives on the results and discussed with these officials ongoing efforts to improve the department's business process reengineering and enterprise architecture efforts. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the survey data are analyzed can all introduce unwanted variability into survey results. To minimize such nonsampling errors, a social science survey specialist designed the questionnaire in collaboration with GAO staff with subject matter expertise. As stated earlier, the questionnaire was pre-tested to ensure that the questions were relevant, clearly stated, and easy to comprehend. When data from the survey were analyzed, an independent analyst reviewed the computer program used for the analysis of the survey data. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire, thereby eliminating the need to have the data keyed into a database and avoiding data entry errors. We conducted this performance audit from October 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 10 describes the status of open GAO recommendations associated with the fiscal year 2005 NDAA's requirement that we annually assess the actions taken by the department to comply with its provisions.
Since June 2011, we have made 16 recommendations to DOD regarding defense business systems. As of April 2015, the department had implemented 5 and partially implemented 11 recommendations. The table also identifies the category that we assigned to the recommendation to demonstrate its relationship to the requirements outlined in the act. In addition to the contact above, individuals making contributions to this report include Michael Holland (assistant director), Camille Chaires, Carl Barden, Susan Baker, Nabajyoti Barkakati, Wayne Emilien, Nancy Glover, James Houtz, Monica Perez-Nelson, Stuart Kaufman, Adam Vodraska, and Shawn Ward.

GAO designated DOD's multibillion-dollar business systems modernization program as high risk in 1995, and since then has provided a series of recommendations aimed at strengthening its institutional approach to modernizing its business systems investments. Section 332 of the NDAA for fiscal year 2005, as amended, requires the department to take specific actions consistent with GAO's prior recommendations and included a provision for GAO to review DOD's efforts. In addition, the Senate Armed Services Committee Report for the NDAA for fiscal year 2015 included a provision for GAO to evaluate the usefulness and effectiveness of DOD's business enterprise architecture and business process reengineering processes. This report addresses both of those provisions. In evaluating the department's compliance, GAO analyzed DOD's efforts to address open recommendations made in previous reviews. To evaluate the usefulness and effectiveness of the department's business enterprise architecture and business process reengineering processes, GAO surveyed the military department portfolio managers (24 in total) and interviewed officials. The response rate for the survey was 100 percent, making the results of the survey generalizable.
The Department of Defense (DOD) has implemented 5 of the 16 recommendations made by GAO since June 2011 to address each of the overarching provisions for improving business systems management in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, as amended (NDAA) (10 U.S.C. § 2222) (see table). For example, it has implemented the recommendation to improve the data available for its business systems by making improvements to its repositories used for tracking information about the systems. Based on GAO's analysis, the department has partially implemented the remaining 11 recommendations. Implementing all recommended actions will improve DOD's modernization management controls and help fulfill the department's execution of the act's requirements. (Source: GAO analysis of DOD documentation. GAO-15-627.)

DOD's business enterprise architecture and process reengineering efforts are not fully achieving the intended outcomes described in statute. More specifically, portfolio managers reported through GAO's survey that the architecture was not effective in constraining system investments or enabling DOD to produce reliable and timely information for decision-making purposes, among other things. As a result, the architecture has produced limited value. Portfolio managers reported that the department's business process reengineering efforts were moderately effective in streamlining business processes, but much less so in limiting the tailoring of commercial off-the-shelf systems. They also reported that these efforts have been useful in realizing selected benefits, such as improved documentation of business needs. Managers GAO surveyed reported various challenges that impede the department's ability to fully achieve intended outcomes, such as cultural resistance to change and the lack of skilled staff.
The department has work under way to address some of these challenges; however, gaps exist and the portfolio managers provided suggestions on how to close some of them. More fully addressing the challenges cited by the portfolio managers would help the department achieve better outcomes, including greater operational efficiencies and cost savings. GAO recommends that DOD utilize the results of the survey to determine additional actions that can improve management of its business process reengineering and enterprise architecture activities. DOD concurred with the recommendation.
The federal government plans to invest more than $89 billion on IT in fiscal year 2017. However, as we have previously reported, investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages while contributing little to mission-related outcomes. For example:

The Department of Defense's Expeditionary Combat Support System was canceled in December 2012 after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds.

The Department of Homeland Security's Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion to the program, because it did not meet cost-effectiveness and viability standards.

The Department of Veterans Affairs' Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program.

The Office of Personnel Management's Retirement Systems Modernization program was canceled in February 2011, after spending approximately $231 million on the agency's third attempt to automate the processing of federal employee retirement claims.

The tri-agency National Polar-orbiting Operational Environmental Satellite System was stopped in February 2010 by the White House's Office of Science and Technology Policy after the program spent 16 years and almost $5 billion.

The Department of Veterans Affairs' Scheduling Replacement Project was terminated in September 2009 after spending an estimated $127 million over 9 years.

These and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT investments.
Federal IT projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government have often been ineffective, specifically from chief information officers (CIOs). For example, we have reported that not all CIOs had the authority to review and approve the entire agency IT portfolio and that CIOs' authority was limited. Recognizing the severity of issues related to government-wide management of IT, FITARA was enacted in December 2014. The law holds promise for improving agencies' acquisition of IT and enabling Congress to monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes specific requirements related to seven areas:

Federal data center consolidation initiative (FDCCI). Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing the data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved.

Enhanced transparency and improved risk management. OMB and agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their IT investments by risk. Additionally, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment's program manager conduct a review aimed at identifying and addressing the causes of the risk.

Agency CIO authority enhancements. Agency CIOs are required to (1) approve the IT budget requests of their respective agencies, (2) certify that IT investments are adequately implementing OMB's incremental development guidance, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO.

Portfolio review.
Agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness, and identify potential waste and duplication. In developing the associated process, the law requires OMB to develop standardized performance metrics, to include cost savings, and to submit quarterly reports to Congress on cost savings.

Expansion of training and use of IT acquisition cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres.

Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all Executive Branch agencies as a single user.

Maximizing the benefit of the federal strategic sourcing initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing initiative. OMB is also required to issue related regulations.

In June 2015, OMB released guidance describing how agencies are to implement the law.
OMB's guidance states that it is intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls that will meet the law's requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; clarify the CIO's role and strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT cost, schedule, performance, and security. The guidance includes several actions agencies are to take to establish a basic set of roles and responsibilities (referred to as the "common baseline") for CIOs and other senior agency officials that are needed to implement the authorities described in the law. For example, agencies were required to conduct a self-assessment and submit a plan describing the changes they will make to ensure that common baseline responsibilities are implemented. Agencies were to submit their plans to OMB's Office of E-Government and Information Technology by August 15, 2015, and make portions of the plans publicly available on agency websites no later than 30 days after OMB approval. As of May 2016, 22 of the 24 Chief Financial Officers Act agencies had made their plans publicly available. In addition, OMB recently released proposed guidance for public comment on the optimization of federal data centers and implementation of FITARA's data center consolidation and optimization provisions. Among other things, the proposed guidance instructs agencies to maintain complete inventories of all data center facilities owned, operated, or maintained by or on behalf of the agency; develop cost savings targets due to consolidation and optimization for fiscal years 2016 through 2018 and report any actual realized cost savings; and measure progress toward defined performance metrics (including server utilization) on a quarterly basis as part of their data center inventory submissions.
The proposed guidance also directs agencies to develop a data center consolidation and optimization strategic plan that defines the agency's data center strategy for the subsequent 3 years. This strategy is to include a timeline for agency consolidation and optimization activities with an emphasis on cost savings and optimization performance benchmarks the agency can achieve between fiscal years 2016 and 2018. Finally, the proposed guidance indicates that OMB will maintain a public dashboard that will display consolidation-related cost savings and optimization performance information for the agencies. In February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlights several critical IT initiatives in need of additional congressional oversight, including reviews of troubled projects, an emphasis on incremental development, a key transparency website, reviews of agencies' operational investments, data center consolidation, and efforts to streamline agencies' portfolios of IT investments. We noted that implementation of these initiatives has been inconsistent and more work remains to demonstrate progress in achieving IT acquisition outcomes. Further, in our February 2015 high-risk report, we identified actions that OMB and the agencies need to take to make progress in this area. These include implementing FITARA, as well as implementing our previous recommendations, such as developing comprehensive inventories of federal agencies' software licenses. As noted in that report, we have made multiple recommendations to improve agencies' management of IT acquisitions and operations, many of which are discussed later in this statement. Between fiscal years 2010 and 2015, we made approximately 800 such recommendations to OMB and federal agencies. As of May 2016, about 33 percent of these recommendations had been implemented.
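The implementation figures just cited reduce to simple arithmetic. The sketch below is purely illustrative: the counts come from the approximations in the text ("approximately 800" recommendations, "about 33 percent" implemented), and the variable names are my own.

```python
# Sketch of the recommendation-implementation arithmetic cited above.
# Figures are the approximations reported in the text, not exact counts.
recommendations_made = 800   # approximately 800, fiscal years 2010-2015
implemented_share = 0.33     # about 33 percent implemented as of May 2016

implemented = round(recommendations_made * implemented_share)
remaining = recommendations_made - implemented
print(f"Roughly {implemented} implemented, {remaining} still open")
```

Running this prints roughly 264 implemented and 536 still open, which frames the 80 percent implementation benchmark discussed next.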
Also in our high-risk report, we stated that OMB and agencies will need to demonstrate measurable government-wide progress in the following key areas:

implement at least 80 percent of GAO's recommendations related to the management of IT acquisitions and operations within 4 years;

ensure that a minimum of 80 percent of the government's major acquisitions deliver functionality every 12 months; and

achieve no less than 80 percent of the planned PortfolioStat savings and 80 percent of the planned savings for data center consolidation.

One of the key initiatives to implement FITARA is data center consolidation. OMB established FDCCI in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities. In a series of reports over the past 5 years, we determined that while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas including agencies' data center consolidation plans and OMB's tracking and reporting on cost savings. In total, we have made 111 recommendations to OMB and agencies to improve the execution and oversight of the initiative. Most agencies agreed with our recommendations or had no comment. Most recently, in March 2016, we reported that the 24 departments and agencies participating in FDCCI collectively made progress on their data center closure efforts. Specifically, as of November 2015, agencies had identified a total of 10,584 data centers, of which they reported closing 3,125 through fiscal year 2015. Notably, the Departments of Agriculture, Defense, the Interior, and the Treasury accounted for 84 percent of these total closures. Agencies are also planning to close an additional 2,078 data centers—for a total of 5,203—by the end of fiscal year 2019. See figure 1 for a summary of agencies' total data centers and reported and planned closures.
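The closure figures above can be tallied in a few lines. This is only an illustrative sketch using the numbers cited in the text (as of November 2015); the bookkeeping and names are assumptions, not GAO's methodology.

```python
# Illustrative tally of reported and planned data center closures,
# using the figures cited in the text (as of November 2015).
identified = 10584                     # total data centers identified
closed_through_fy2015 = 3125           # reported closed through FY 2015
planned_additional_by_fy2019 = 2078    # additional planned closures

total_planned = closed_through_fy2015 + planned_additional_by_fy2019
remaining = identified - total_planned
print(f"Total closures planned by fiscal year 2019: {total_planned}")
print(f"Data centers remaining if plans hold: {remaining}")
```

The first line of output reproduces the 5,203 total cited in the text; the second (5,381 remaining) is a derived figure, not one the report states.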
In addition, we reported that 19 of the 24 agencies reported achieving an estimated $2.8 billion in cost savings and avoidances from their data center consolidation and optimization efforts from fiscal years 2011 to 2015. Notably, the Departments of Commerce, Defense, Homeland Security, and the Treasury accounted for about $2.4 billion (or about 86 percent) of the total. Further, 21 agencies collectively reported planning an additional $5.4 billion in cost savings and avoidances, for a total of approximately $8.2 billion, through fiscal year 2019. See figure 2 for a summary of agencies' reported achieved and planned cost savings and avoidances from fiscal years 2011 through 2019. However, we noted that planned savings may be higher because 10 of the 21 agencies that reported planned closures from fiscal years 2016 through 2018 have not fully developed their cost savings and avoidance goals for these fiscal years. Agencies provided varied reasons for not having this information, including that they were in the process of re-evaluating their data center consolidation strategies, as well as facing other challenges in determining such information. We noted that the reporting of planned savings goals is increasingly important considering the enactment of FITARA, which requires agencies to develop yearly calculations of cost savings as part of their multi-year strategies to consolidate and optimize their data centers. We concluded that, until agencies address their challenges and complete and report such information, the $8.2 billion in total savings and avoidances may be understated and agencies will not be able to satisfy the data center consolidation strategy provisions of FITARA. Finally, we reported that agencies made limited progress against OMB's fiscal year 2015 core data center optimization performance metrics. In total, 22 of the 24 agencies reported data center optimization information to OMB.
However, of the nine metrics with targets, only one—full-time equivalent ratio (a measure of data center labor efficiency)—was met by half of the 24 agencies, while the remaining eight were each met by less than half of the agencies. See figure 3 for a summary of agencies' progress against OMB's data center optimization metric targets. Agencies reported a variety of challenges in meeting OMB's data center optimization targets, such as the decentralized nature of their agencies making consolidation and optimization efforts more difficult. We noted that addressing this challenge and others is increasingly important in light of the enactment of FITARA, which requires agencies to measure and report progress in meeting data center optimization performance metrics. We concluded that, until agencies take action to improve progress against OMB's data center optimization metrics, including addressing any challenges identified, they could be hindered in the implementation of the data center consolidation provisions of FITARA and in making initiative-wide progress against OMB's optimization targets. To better ensure that federal data center consolidation and optimization efforts improve governmental efficiency and achieve cost savings, we recommended that 10 agencies take action to complete their planned data center cost savings and avoidance targets for fiscal years 2016 through 2018. We also recommended that 22 agencies take action to improve optimization progress, including addressing any identified challenges. Fourteen agencies agreed with our recommendations, 4 did not state whether they agreed or disagreed, and 6 stated that they had no comments.
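The savings and avoidance figures reported earlier in this section sum the same way. A minimal sketch, using only the two aggregate amounts the text cites (the breakdown by agency is not reproduced here):

```python
# Illustrative sum of reported data center cost savings and avoidances,
# in billions of dollars; aggregate figures come from the text.
achieved_fy2011_2015 = 2.8   # reported by 19 of 24 agencies
planned_fy2016_2019 = 5.4    # additional planned, reported by 21 agencies

total = achieved_fy2011_2015 + planned_fy2016_2019
print(f"Total reported savings and avoidances: ${total:.1f} billion")
```

This reproduces the approximately $8.2 billion total, which, as the text notes, may be understated because 10 of the 21 agencies had not fully developed savings goals for fiscal years 2016 through 2018.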
Among other things, agencies are to submit ratings from their CIOs, which, according to OMB's instructions, should reflect the level of risk facing an investment relative to that investment's ability to accomplish its goals. In this regard, FITARA includes a requirement for CIOs to categorize their major IT investment risks in accordance with OMB guidance. Over the past 6 years, we have issued a series of reports about the IT Dashboard that noted significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT investments by creating its IT Dashboard, as well as issues with the accuracy and reliability of data. In total, we have made 22 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the IT Dashboard and to increase its availability. Most agencies agreed with our recommendations or had no comment. Most recently, as part of our ongoing work, we determined that agencies had not fully considered risks when rating their major investments on the IT Dashboard. Specifically, our assessment of 95 investments at 15 agencies matched the CIO ratings posted on the Dashboard 22 times, showed more risk 60 times, and showed less risk 13 times. Figure 4 summarizes how our assessments compared to the selected investments' CIO ratings. Aside from the inherently judgmental nature of risk ratings, we identified three factors which contributed to differences between our assessments and CIO ratings:

Forty-one of the 95 CIO ratings were not updated during the month we reviewed, which led to more differences between our assessments and the CIOs' ratings. This underscores the importance of frequent rating updates, which help to ensure that the information on the Dashboard is timely and accurately reflects recent changes to investment status.

Three agencies' rating processes span longer than 1 month.
Longer processes mean that CIO ratings are based upon older data, and may not reflect the current level of investment risk.

Seven agencies' rating processes did not focus on active risks. According to OMB's guidance, CIO ratings should reflect the CIO's assessment of the risk and the investment's ability to accomplish its goals. CIO ratings that do not incorporate active risks increase the chance that ratings overstate the likelihood of investment success.

As a result, we concluded that the associated risk rating processes used by the agencies were generally understating the level of an investment's risk, raising the likelihood that critical federal investments in IT are not receiving the appropriate levels of oversight. To better ensure that the Dashboard ratings more accurately reflect risk, we are recommending in our draft report, which is with the applicable agencies for comment, that 15 agencies take actions to improve the quality and frequency of their CIO ratings. OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies' major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that agency CIOs certify that IT investments are adequately implementing OMB's incremental development guidance. In May 2014, we reported that almost three-quarters of selected investments at five major agencies did not plan to deliver capabilities in 6-month cycles, and less than half planned to deliver functionality in 12-month cycles. We also reported that most of the five agencies reviewed had incomplete incremental development policies. Accordingly, we recommended that OMB develop and issue clearer guidance on incremental development and that selected agencies update and implement their associated policies.
Most agencies agreed with our recommendations or had no comment. More recently, as part of our ongoing work, we determined that agencies had not fully implemented incremental development practices for their software development projects. Specifically, as of August 31, 2015, on the IT Dashboard, 22 federal agencies reported that 300 of 469 active software development projects (approximately 64 percent) were planning to deliver usable functionality every 6 months for fiscal year 2016, as required by OMB guidance. Regarding the remaining 169 projects (or 36 percent) that were reported as not planning to deliver functionality every 6 months, agencies provided a variety of explanations for not achieving that goal, including project complexity, the lack of an established project release schedule, or that the project was not a software development project. Table 1 lists, from highest to lowest, the total number and percentage of software development projects that agencies reported as planning to deliver functionality every 6 months. In reviewing seven selected agencies’ software development projects, we determined that the percentage delivering functionality every 6 months was reported at 45 percent for fiscal year 2015 and planned for 54 percent in fiscal year 2016. However, significant differences existed between the delivery rates that the agencies reported to us and what they reported on the IT Dashboard. For example, the percentage of software projects delivering every 6 months that was reported to us by the Department of Commerce decreased by about 42 percentage points from what was reported on the IT Dashboard. In contrast, the Department of Defense reported a 55 percentage point increase from what was reported on the IT Dashboard. Figure 5 compares what the seven agencies reported on the IT Dashboard and the numbers they reported to us. 
We determined that the significant differences in delivery rates were due, in part, to agencies having different interpretations of OMB’s guidance on reporting software development projects and to the information reported to us being generally more current than the information reported on the IT Dashboard. We concluded that, until the inconsistencies between the information reported to us and the information provided on the IT Dashboard are addressed, the seven agencies we reviewed are at risk that OMB and key stakeholders may make decisions regarding agency investments without the most current and accurate information. Finally, nearly all of the seven agencies we reviewed had not yet implemented the FITARA requirement related to certifying that major IT investments are adequately implementing OMB’s incremental development guidance. Specifically, only one agency—the Department of Homeland Security—had processes and policies to ensure that the CIO will certify that major IT investments are adequately implementing incremental development, while the remaining six agencies had not established such processes and policies. Officials from most of these six agencies reported they were in the process of updating their existing incremental development policies to address certification. To improve the use of incremental development, we are recommending in our draft report, which is with the applicable agencies for comment, that agencies take action to update their policies for incremental development and IT Dashboard project information. We are also recommending that OMB provide clarifying guidance on what IT investments are required to use incremental development and for reporting on projects that are not subject to these requirements. 
In summary, with the recent enactment of FITARA, the federal government has an opportunity to improve the transparency and management of IT acquisition and operations, and strengthen the authority of CIOs to provide needed direction and oversight. However, improvements are needed in several critical IT initiatives, including data center consolidation, efforts to increase transparency via OMB’s IT Dashboard, and incremental development—all of which are related to provisions of FITARA. Accordingly, OMB and federal agencies should expeditiously implement the requirements of the new IT reform law and continue to implement our previous recommendations. To help ensure that these improvements are achieved, continued congressional oversight of OMB’s and agencies’ implementation efforts is essential. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at [email protected]. Individuals who made key contributions to this testimony are Dave Hinchman (Assistant Director), Justin Booth, Chris Businsky, Rebecca Eyler, Linda Kochersberger, and Jon Ticehurst. Data Center Consolidation: Agencies Making Progress, but Planned Savings Goals Need to Be Established. GAO-16-323. Washington, D.C.: March 3, 2016. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Data Center Consolidation: Reporting Can Be Improved to Reflect Substantial Planned Savings. GAO-14-713. Washington, D.C.: September 25, 2014. Information Technology: Agencies Need to Establish and Implement Incremental Development Policies. GAO-14-361. Washington, D.C.: May 1, 2014. IT Dashboard: Agencies Are Managing Investment Risk, but Related Ratings Need to Be More Accurate and Available. GAO-14-64. Washington, D.C.: December 12, 2013. 
Data Center Consolidation: Strengthened Oversight Needed to Achieve Cost Savings Goal. GAO-13-378. Washington, D.C.: April 23, 2013. Information Technology Dashboard: Opportunities Exist to Improve Transparency and Oversight of Investment Risk at Select Agencies. GAO-13-98. Washington, D.C.: October 16, 2012. Data Center Consolidation: Agencies Making Progress on Efforts, but Inventories and Plans Need to Be Completed. GAO-12-742. Washington, D.C.: July 19, 2012. IT Dashboard: Accuracy Has Improved, and Additional Efforts Are Under Way to Better Inform Decision Making. GAO-12-210. Washington, D.C.: November 7, 2011. Data Center Consolidation: Agencies Need to Complete Inventories and Plans to Achieve Expected Savings. GAO-11-565. Washington, D.C.: July 19, 2011. Federal Chief Information Officers: Opportunities Exist to Improve Role in Information Technology Management. GAO-11-634. Washington, D.C.: September 15, 2011. Information Technology: OMB Has Made Improvements to Its Dashboard, but Further Work Is Needed by Agencies and OMB to Ensure Data Accuracy. GAO-11-262. Washington, D.C.: March 15, 2011. Information Technology: OMB’s Dashboard Has Increased Transparency and Oversight, but Improvements Needed. GAO-10-701. Washington, D.C.: July 16, 2010. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The federal government plans to invest more than $89 billion on IT in fiscal year 2017. Historically, these investments have frequently failed, incurred cost overruns and schedule slippages, or contributed little to mission-related outcomes. 
Accordingly, in December 2014, IT reform legislation was enacted into law, aimed at improving agencies' acquisition of IT. Further, in February 2015, GAO added improving the management of IT acquisitions and operations to its high-risk list—a list of agencies and program areas that are high risk due to their vulnerabilities to fraud, waste, abuse, and mismanagement, or are most in need of transformation. Between fiscal years 2010 and 2015, GAO made about 800 recommendations related to this high-risk area to OMB and agencies. As of May 2016, about 33 percent of these had been implemented. This statement primarily summarizes: (1) GAO's published work on data center consolidation, and (2) GAO's draft reports on the risk of major investments as reported on the IT Dashboard and the implementation of incremental development practices. These draft reports with recommendations are currently with applicable agencies for comment. The Office of Management and Budget (OMB) and agencies have taken steps to improve federal information technology (IT) through a series of initiatives; however, additional actions are needed. Consolidating data centers. In an effort to reduce the growing number of data centers, OMB launched a consolidation initiative in 2010. GAO recently reported that agencies had closed 3,125 of the 10,584 total data centers and achieved $2.8 billion in cost savings and avoidances through fiscal year 2015. Agencies are planning a total of about $8.2 billion in savings and avoidances through fiscal year 2019. However, these planned savings may be higher because 10 agencies had not fully developed their planned savings goals. In addition, agencies made limited progress against OMB's fiscal year 2015 data center optimization performance targets, such as the utilization of data center facilities. GAO recommended that the agencies take action to complete their cost savings targets and improve optimization progress. 
Most agencies agreed with the recommendations or had no comment. Enhancing transparency. OMB's IT Dashboard provides detailed information on major investments at federal agencies, including ratings from Chief Information Officers (CIO) that should reflect the level of risk facing an investment. In a draft report, GAO's assessments of the risk ratings showed more risk than the associated CIO ratings. In particular, of the 95 investments reviewed, GAO's assessments matched the CIO ratings 22 times, showed more risk 60 times, and showed less risk 13 times. Several issues contributed to these differences, such as ratings not being updated frequently. In its draft report, GAO is recommending that agencies improve the quality and frequency of their CIO ratings. Implementing incremental development. An additional key reform initiated by OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk and deliver capabilities more quickly. Since 2012, OMB has required investments to deliver functionality every 6 months. In a draft report, GAO determined that 22 agencies reported that 64 percent of 469 active software development projects had plans to deliver usable functionality every 6 months for fiscal year 2016. Further, for seven selected agencies, GAO identified significant differences in the percentage of software projects delivering every 6 months reported to GAO compared to what was reported on the IT Dashboard. For example, the percentage of software projects reported to GAO by the Department of Commerce decreased by about 42 percentage points from what was reported on the IT Dashboard. These differences were due, in part, to agencies having different interpretations of OMB's guidance on reporting software development projects. In its draft report, GAO is recommending that OMB and agencies improve the use of incremental development. 
GAO has previously made numerous recommendations to OMB and federal agencies to improve the oversight and execution of the data center consolidation initiative, the accuracy and reliability of the IT Dashboard, and incremental development policies. Most agencies agreed with GAO's recommendations or had no comment. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Afghanistan is a mountainous, arid, land-locked Central Asian country with limited natural resources. At 647,500 square kilometers, it is slightly smaller than the state of Texas. Afghanistan is bordered by Pakistan to the east and south; Tajikistan, Turkmenistan, Uzbekistan, and China to the north; and Iran to the west (see fig. 1). Its population, currently estimated at 26 million, is ethnically diverse, largely rural, and mostly uneducated. Life expectancy in Afghanistan is among the lowest in the world, with some of the highest rates of infant and child mortality. Political conflicts have ravaged Afghanistan for years, limiting development within the country. Conflict broke out in 1978 when a communist-backed coup led to a change in government. One year later, the Soviet Union began its occupation of Afghanistan, initiating more than two decades of conflict. Over the course of the 10-year occupation, various countries, including the United States, backed Afghan resistance efforts. The protracted conflict led to the flight of a large number of refugees into Pakistan and Iran. In 1989, the Soviet forces withdrew, and in 1992, the communist regime fell to the Afghan resistance. Unrest continued, however, fueled by factions and warlords fighting for control. The Taliban movement emerged in the mid 1990s, and by 1998 it controlled approximately 90 percent of the country. Although it provided some political stability, the Taliban regime did not make significant improvements to the country’s food security. Furthermore, the Taliban’s continuing war with the Northern Alliance and the Taliban’s destructive policies, highlighted in its treatment of women, further impeded aid and development. Coalition forces removed the regime in late 2001, responding to its protection of al Qaeda terrorists who attacked the United States. In December 2001, an international summit in Bonn, Germany, established a framework for the new Afghan government, known as the Bonn Agreement. 
Agriculture is essential to Afghanistan. Despite the fact that only 11.5 percent (7.5 million hectares) of Afghanistan’s total area is cultivable, 85 percent of the population depends on agriculture for its livelihood, and 80 percent of export earnings and more than 50 percent of the gross domestic product have historically come from agriculture. However, Afghanistan’s agricultural sector continues to suffer from the effects of prolonged drought, war, and neglect. It lacks high-quality seed, draft animals, and fertilizer, as well as adequate veterinary services, modern technology, advanced farming methods, and a credit system for farmers. Further, Afghanistan’s Ministry of Agriculture and Animal Husbandry and its Ministry of Irrigation and Water Resources lack the infrastructure and resources to assist farmers. Because Afghanistan experiences limited rainfall, its agricultural sector is highly dependent on irrigation—85 percent of its agricultural products derives from irrigated areas. Thus, the conservation and efficient use of water is the foundation of the agricultural sector. The severe drought that has gripped the country since 1998 has resulted in drastic decreases in domestic production of livestock and agricultural supplies including seed, fertilizer, and feed (see fig. 2). Several earthquakes and the worst locust infestation in 30 years exacerbated this crisis in 2002. Without adequate supplies and repairs to irrigation systems, even if the drought breaks, farmers will be unable to produce the food that the country needs to feed itself. Since 1965, the WFP has been the major provider of food assistance to Afghanistan. Partnering with nongovernmental organizations, it delivers assistance through emergency operations that provide short-term relief to populations affected by a specific crisis such as war or drought. It also conducts protracted relief and recovery operations designed to shift assistance toward longer-term reconstruction efforts. 
Because of its policy to target assistance at specific populations, WFP does not attempt to provide food for all of the vulnerable people within a country or affected area. Instead, it focuses on specific vulnerable populations such as internally displaced people or widows (see fig. 3). Further, it does not try to meet all of the daily requirements of the targeted populations. WFP’s 2002 emergency operation in Afghanistan targeted internally displaced people, people affected by drought, and children, among others. The assistance programs designed to assist these populations provide between 46 and 79 percent, or 970 to 1671 kilocalories, of the recommended minimum daily requirement of 2100 kilocalories. WFP assumes that beneficiaries will obtain the remainder of their food through subsistence farming or the market. FAO has provided much of the agricultural assistance to Afghanistan. FAO has been involved in agricultural development and natural resource management in Afghanistan for more than 50 years. FAO was founded in 1945 with a mandate to raise levels of nutrition and standards of living, to improve agricultural productivity, and to better the condition of rural populations. Today, FAO is one of the largest specialized agencies in the UN system and the lead agency for agriculture, forestry, fisheries, and rural development. An intergovernmental organization, FAO has 183 member countries plus one member organization, the European Community. FAO has traditionally carried out reconstruction efforts in relatively stable environments. Although FAO is increasingly implementing its programs in unstable postconflict situations such as Afghanistan, the agency and its staff are still adjusting to operating in such environments. FAO's regular program budget provides funding for the organization's normative work and, to a limited extent, for advice to member states on policy and planning in the agricultural sector. 
FAO's regular budget can also fund limited technical assistance projects through its Technical Cooperation Program. Apart from this, extrabudgetary resources, through trust funds provided by donors or other funding arrangements, fund all emergency and development assistance provided by FAO. Thus, extrabudgetary resources fund FAO’s field program, the major part of its assistance to member countries. The emergency food assistance provided to Afghanistan by the United States and the international community from January 1999 through December 2002 benefited millions and was well managed, but donor support was inadequate. WFP delivered food to millions of people in each of the 4 years, helping avert widespread famine. In addition, WFP managed the distribution of U.S. and international food assistance effectively, overcoming significant obstacles and using its logistics system and a variety of monitoring mechanisms to ensure that food reached the intended beneficiaries. However, inadequate and untimely donor support in 2002 disrupted some WFP assistance efforts and could cause further disruptions in 2003. Further, WFP could have provided assistance to an additional 685,000 people and reduced its delivery times if the United States had donated cash or regionally purchased commodities instead of shipping U.S.-produced commodities. Additionally, if the United States had donated the $50.9 million that it spent on approximately 2.5 million daily rations air- dropped by the Department of Defense, WFP could have purchased enough regionally produced commodities to provide food assistance for an estimated 1.0 million people for a year. The emergency food assistance that the United States and other bilateral donors provided in Afghanistan through WFP from 1999 through 2002 met a portion of the food needs of millions of vulnerable Afghans. 
Over the 4-year period, WFP delivered approximately 1.6 million metric tons of food that helped avert famine and stabilize the Afghan people, both in Afghanistan and in refugee camps in neighboring countries. The food assistance also furthered the country’s reconstruction through projects, among others, that exchanged food for work. WFP delivered the assistance as part of seven protracted relief–recovery and emergency operations (see table 1). The types of operations and their duration and objectives varied in response to changing conditions within Afghanistan. These objectives included, but were not limited to, providing relief to the most severely affected populations in Afghanistan and Afghan refugees in neighboring countries and preventing mass movements of populations. WFP implemented a number of different types of food assistance projects, including free food distribution; institutional feeding programs; bakeries; food-for-work, -seed, -education, -training, and -asset-creation projects; and projects targeted at refugees, internally displaced people, and civil servants. (See app. II for a list and description of WFP’s projects.) Food-for- work and food-for-asset-creation projects provided essential food assistance to the most vulnerable members of Afghanistan’s population while enabling the beneficiaries to help rehabilitate local infrastructure and rebuild productive assets such as roads and schools. Between July and September 2002, these projects employed 1 million laborers per month, paying them in food commodities. U.S. food assistance to Afghanistan, provided by USAID and USDA, accounted for approximately 68 percent of the cash contributions and 67 percent of the commodities delivered by WFP from 1999 through 2002 (see table 1). The U.S. provides cash to WFP to cover transportation and administrative costs associated with its in-kind contributions of commodities. 
USAID’s authority to donate to WFP operations derives from Title II of the Agricultural Trade Development and Assistance Act of 1954 (P.L. 480). Title II authorizes the agency to donate agricultural commodities to meet international emergency relief requirements and carry out nonemergency feeding programs overseas. USDA also provides surplus commodities to WFP under section 416(b) of the Agricultural Act of 1949. U.S. contributions consisted of in-kind donations of commodities such as white wheat and cash donations to cover the cost of transporting the commodities from the United States to Afghanistan. WFP managed the distribution of U.S. and international food assistance to Afghanistan effectively despite significant obstacles, including harsh weather and a lack of infrastructure to deliver food to beneficiaries. To accomplish this, WFP appointed a special envoy to direct operations and employed a dedicated staff of local nationals. It also used various monitoring and reporting mechanisms to track the delivery of food. In distributing the food assistance, WFP faced significant obstacles related to political and security disturbances in Afghanistan as well as physical and environmental conditions. These obstacles included limited mobility due to continued fighting between the Taliban and the Northern Alliance and coalition forces; religious edicts issued by the Taliban limiting the employment of women by international organizations; difficult transport routes created by geography, climate, and lack of infrastructure (see fig. 4); and attempts by Afghan trucking cartels to dramatically increase trucking fees. To overcome these obstacles, WFP negotiated with the Taliban to allow the movement of food to areas occupied by the Northern Alliance; it also threatened to cancel certain projects unless women were allowed to continue to work for WFP. Further, WFP found ways to deliver food to remote areas, including airlifting food and hiring donkeys (see fig. 5). 
In addition, it purchased trucks to supplement a fleet of contracted trucks. Using these trucks as leverage against the Afghan trucking cartel, WFP forced the cartel to negotiate when the cartel attempted to dramatically increase transport fees. WFP created the position of Special Envoy of the Executive Director for the Afghan Region to lead and direct all WFP operations in Afghanistan and neighboring countries during the winter of 2001–2002, when it was believed that the combination of winter weather and conflict would increase the need for food assistance. WFP was thus able to consolidate the control of all resources in the region, streamline its operations, and accelerate the movement of assistance. WFP points to the creation of the position as one of the main reasons it was able to move record amounts of food into Afghanistan from November 2001 through January 2002. In December 2001 alone, WFP delivered 116,000 metric tons of food, the single largest monthly food delivery within a complex emergency operation in WFP’s history. WFP also credits its quick response to its national staff and the Afghan truck drivers it contracted. WFP employed approximately 400 full-time national staff during 1999–2002. These staff established and operated an extensive logistics system and continued operations throughout Afghanistan, including areas that international staff could not reach owing to security concerns, and during periods when international staff were evacuated from the country. The truckers who moved the food around the country continued working even during the harshest weather and in areas that were unsafe because of ongoing fighting and banditry. WFP uses a number of real-time monitoring mechanisms to track the distribution of commodities in Afghanistan, and the data we reviewed suggested that food distributions have been effective and losses minimal. (For a description of WFP’s monitoring procedures, see app. III.) 
During our visits to project and warehouse sites in Afghanistan, we observed orderly and efficient storage, handling, and distribution of food assistance. WFP’s internal auditor reviewed WFP Afghanistan’s monitoring operations in August 2002 and found no material weaknesses. USAID has also conducted periodic monitoring of WFP activities without finding any major flaws in WFP’s operations. In addition, most of the implementing partners we contacted were familiar with WFP reporting requirements. However, 10 of the 14 implementing partners we contacted commented unfavorably on WFP’s project monitoring efforts, stating that monitoring visits were too infrequent. Finally, WFP’s loss reporting data indicated that only 0.4 percent of the commodities were lost owing to theft, spoilage, mishandling, or other causes. Inadequate and untimely donor support disrupted WFP’s food assistance efforts in 2002 and could disrupt efforts in 2003; in addition, U.S. assistance to Afghanistan, both through WFP and the Department of Defense, was costly. In 2002, interruptions in support forced WFP to delay payments of food, curtail the implementation of new projects, and reduce the level of rations provided to repatriating refugees. WFP expressed concern that donor support in 2003 may be similarly affected, as a growing number of international emergencies and budgetary constraints could reduce the total funding available for food assistance to Afghanistan. In addition, WFP could have delivered more food and reduced delivery times if the United States had provided either cash or regionally purchased commodities instead of shipping U.S.-produced commodities and airdropping humanitarian daily rations. Obtaining donor support for the emergency food assistance operation for the April 2002 through December 2002 period was difficult owing to the donor community’s inadequate response to WFP’s appeal for contributions. 
WFP made its initial appeal for the operation in February 2002 and issued subsequent appeals for donor support throughout the operation. The operation was designed to benefit 9,885,000 Afghans over a 9-month period, through the provision of 543,837 metric tons of food at a cost of over $295 million. It was also intended to allow WFP to begin to shift from emergency to recovery operations with particular emphasis on education, health, and the agricultural sector. When the operation began in April 2002, WFP’s Kabul office warned that it might have to stop or slow projects if donors did not provide more support. At that time, WFP had received only $63.9 million, or 22 percent of the required resources. The United States provided most of this funding. (See app. IV for a list of donors and their contributions for the operation.) From April through June—the preharvest period when Afghan food supplies are traditionally at their lowest point—WFP was able to meet only 51 percent of the planned requirement for assistance. WFP’s actual deliveries were, on average, 33 percent below actual requirements for the 10-month period April 2002–January 2003. Figure 6 illustrates the gaps in the operation’s resources for the 10-month period. Lack of timely donor contributions and an increase in the number of returning refugees forced WFP and its implementing partner, the UN High Commissioner for Refugees, to reduce from 150 to 50 kilograms the rations provided to help returning refugees and internally displaced persons reestablish themselves in their places of origin. The rations are intended to enable these groups to sustain themselves long enough to reestablish their lives; reducing the rations may have compromised efforts to stabilize population movements within Afghanistan. 
The lack of donor support also forced WFP and its implementing partners to delay, in some cases for up to 10 weeks, the compensation promised to Afghans who participated in the food-for-work and food-for-asset-creation projects, resulting in a loss of credibility in the eyes of the Afghans and nongovernmental organizations. Similarly, because of resource shortages, WFP had to delay for up to 8 weeks the in-kind payments of food in its civil service support program, intended to help the new government establish itself, and it never received enough contributions to provide civil servants with the allocation of tea they were to be given as part of their support package. In addition, WFP was forced to reduce the number of new projects it initiated, thus limiting the level of reconstruction efforts it completed. In January 2003, WFP expressed concern that the problems it encountered with donor support in 2002 could recur in 2003. Despite the expansion of agricultural production in 2002 because of increased rainfall, 6 million Afghans will require food assistance in 2003. Although the United States was the largest donor of food assistance to Afghanistan in 2002, the U.S. contribution may be smaller in 2003 than in previous years owing to reduced surpluses of commodities, higher commodity prices, and competing crises in Africa, North Korea, and Iraq. The UN forecasts Afghan cereal production for July 2002 through June 2003 at 3.59 million metric tons, a cereal import requirement of 1.38 million metric tons, and Afghan commercial food imports at 911,000 metric tons. Thus, an estimated total deficit of 469,000 metric tons remains to be covered in the 12-month period by international food assistance. The U.S.-produced commodities and humanitarian daily rations provided by the United States to Afghanistan resulted in lower volumes of food than if the United States had provided regionally purchased commodities or cash donations. 
If it had provided WFP with cash or commodities from countries in the Central Asia region, the United States could have eliminated ocean freight costs. We estimated that the savings in freight costs would have enabled WFP to provide food assistance to approximately 685,000 additional people for 1 year. In addition, we estimated that if the United States had donated cash or regionally purchased commodities instead of air-dropping rations, WFP could have provided food assistance for another 1.0 million people for a year.

U.S.-Produced Commodities Raised Costs and Slowed Delivery

Most of the food assistance that the United States donated to Afghanistan in 1999–2002 was provided through WFP as in-kind donations of U.S. agricultural products as well as cash to cover shipping and freight costs. Since the commodities were purchased in the United States, much of the cost of the assistance represented shipping and freight costs rather than the price of the commodities. Figure 7 provides a breakdown of the costs associated with U.S. food assistance to Afghanistan from 1999 through 2002. (See app. V for additional cost data.) We estimated that if the United States had provided cash or regionally purchased commodities instead of U.S.-produced commodities in 2002, WFP could have purchased approximately 103,000 additional metric tons of commodities and saved 120 days in delivery time. WFP officials in Rome and Cairo stated that cash was greatly preferable to in-kind donations because it allows for flexibility and for local and regional purchases. Other contributors to WFP efforts in Afghanistan have provided cash, allowing WFP to make the purchases it deemed most expedient, including purchases from Central Asian countries that produced large surpluses in 2002. Ninety-three percent of the commodities WFP purchased for the emergency operation that began in April 2002 (157,128 metric tons) were from Kazakhstan and Pakistan. 
WFP also stated that it could have saved approximately 120 days in delivery time if it had received U.S. contributions in cash that it could have used for regional purchases. Although the commodity costs and some of the freight costs for regional purchases are lower, the largest portion of the savings from regional purchases comes from eliminating ocean freight costs. In 2002, USDA spent $5.6 million on ocean freight, or 31 percent of the value of the aid it provided to Afghanistan. USAID spent $29.4 million on ocean freight, or 18.3 percent of the value of the aid it provided to Afghanistan. Overall, USDA and USAID spent approximately $35.0 million on ocean freight and commissions, or 19.6 percent of the total value ($178,068,786) of the food aid they provided through WFP to Afghanistan. Had this money been spent on regional purchases instead of on ocean freight, it could have paid for 103,000 additional metric tons of commodities, or enough to provide food assistance for approximately 685,000 people for 1 year. However, the laws governing the main food assistance programs under which most of the U.S. assistance was provided to Afghanistan through WFP do not provide for USAID and USDA to purchase food assistance commodities regionally or provide cash to WFP to make regional purchases. All of the assistance must be provided in the form of U.S. commodities, and 75 percent of the commodities by weight must be shipped on U.S.-flag vessels. According to USDA, this requirement, referred to as “cargo preference,” accounts for 9 percent of the cost of U.S. food assistance shipments worldwide. In this case, it accounted for approximately $16 million of the $35 million in ocean freight. In prior reports, we found that the most significant impact of the cargo preference requirement on U.S. food assistance programs is the additional costs incurred.
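The freight figures above can be sanity-checked with simple arithmetic. The sketch below uses only dollar and tonnage amounts stated in the report; the implied per-person annual ration is derived here for illustration and is not a number the report states.

```python
# Checking the ocean freight share of total U.S. food aid value.
# All dollar and tonnage inputs are figures stated in the report.
freight_and_commissions = 35_000_000   # USDA + USAID ocean freight and commissions, USD
total_food_aid_value = 178_068_786     # total value of food aid provided through WFP, USD

freight_share = freight_and_commissions / total_food_aid_value
print(f"freight share of aid value: {freight_share:.1%}")  # ~19.6-19.7%, matching the report's rounding

# Implied ration behind "103,000 metric tons feeds ~685,000 people for 1 year"
# (derived for illustration; the report does not state this rate)
extra_metric_tons = 103_000
people_fed_one_year = 685_000
kg_per_person_per_year = extra_metric_tons * 1_000 / people_fed_one_year
grams_per_day = kg_per_person_per_year * 1_000 / 365
print(f"implied ration: {kg_per_person_per_year:.0f} kg/person/year, ~{grams_per_day:.0f} g/day")
```

The implied rate of roughly 150 kilograms per person per year (about 412 grams per day) is on the order of a full general food ration, which suggests the report's two figures are mutually consistent.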
Using U.S.-flag vessels reduces the funds available for purchasing commodities and thus decreases the amount of food delivered to vulnerable populations. In its 2002 annual assessment of management performance, the Office of Management and Budget concluded that U.S. food assistance programs would be more cost effective and flexible if the requirement to ship U.S. food assistance on U.S.-flag vessels were eliminated. In commenting on a draft of this report, USDA stated that consideration should be given to waiving cargo preference requirements in specific food aid situations. In February 2003, the President announced a new $200 million humanitarian Famine Fund. Use of the fund will be subject to presidential decision and will draw upon the broad disaster assistance authorities in the Foreign Assistance Act. According to USAID, these authorities allow the U.S. government to purchase commodities overseas to meet emergency food assistance needs. However, this authority does not extend to the $2.6 billion in U.S. food assistance programs for fiscal year 2003 under existing food assistance legislation.

Humanitarian Daily Rations Were Expensive and Inefficient

The U.S. Department of Defense’s humanitarian daily ration program was a largely ineffective and expensive component of the U.S. food assistance effort. The program was initiated to alleviate suffering and convey that the United States was waging war against the Taliban, not the Afghan people. However, the program’s public relations and military impact have not been formally evaluated. Airdrops of the humanitarian daily rations were intended to disperse the packets over a wide area, avoiding the dangers of heavy pallet drops or having concentrations of food fall into the hands of a few. On October 8, 2001, U.S. Air Force C-17s began dropping rations on various areas within Afghanistan. Drops averaged 35,000 packets per night (two planeloads) and ended on December 21, 2001.
In 198 missions over 74 days, the Air Force dropped 2,489,880 rations (see fig. 8). According to WFP, one of the major problems with the ration program was the lack of any assessment to identify the needs of the target populations or their locations. WFP representatives were part of the coordination team located at Central Command in late 2001 when the airdrops were made. These representatives provided the Defense Department with general information on drought-affected areas but were not asked to provide information on specific areas to target. According to Department of Defense officials, the drop areas were selected based on consultations with USAID staff familiar with the situation in Afghanistan. Defense officials told us that the rations are an expensive and inefficient means of delivering food assistance and were designed to relieve temporary food shortages resulting from manmade or natural disasters, not, as in Afghanistan, to feed a large number of people affected by a long-term food shortage. Defense officials responsible for the ration program stated that the humanitarian, public relations, and military impact of the effort in Afghanistan had not been evaluated. According to these officials, anecdotal reports from Special Forces soldiers indicated that vulnerable populations did receive the food and that the rations helped to generate goodwill among the Afghan people. However, reports from nongovernmental organizations in Afghanistan indicated that the rations often went to the healthiest, who were able to access the drop zones most quickly, and were hoarded by a few rather than distributed among the population. The cost of the rations was $4.25 per unit, or $10,581,990 for the approximately 2.5 million dropped. The total cost of the program was $50,897,769, or $20.44 per daily ration. The delivery cost is thus estimated at $16.19 per unit, the difference between the total cost per ration and the unit cost of the ration itself.
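The per-ration cost breakdown above follows directly from the unit cost and the total program cost. A quick check, using only figures stated in the report:

```python
# Reproducing the humanitarian daily ration cost arithmetic from the report.
rations_dropped = 2_489_880          # rations dropped over 198 missions
unit_cost = 4.25                     # USD per ration packet
total_program_cost = 50_897_769      # USD, total Department of Defense expenditure

commodity_cost = rations_dropped * unit_cost           # cost of the packets alone
cost_per_ration = total_program_cost / rations_dropped
delivery_cost_per_ration = cost_per_ration - unit_cost

print(f"packet cost:         ${commodity_cost:,.0f}")            # $10,581,990
print(f"total per ration:    ${cost_per_ration:.2f}")            # $20.44
print(f"delivery per ration: ${delivery_cost_per_ration:.2f}")   # $16.19
```

All three derived values match the report's stated figures, so the $16.19 delivery estimate is simply the residual of the total per-ration cost after subtracting the packet's purchase price.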
The rations accounted for only 2,835 metric tons out of the total of 365,170 metric tons, or 0.78 percent of the total weight of food aid delivered in fiscal year 2002. However, the cost of the rations equals 28.6 percent of the $178,068,786 that USAID and USDA spent on emergency assistance to Afghanistan from October 2001 through September 2002. If the United States had bought traditional food assistance commodities regionally instead of dropping the 2,835 metric tons of rations, it could have purchased approximately 118,000 metric tons of food, enough to provide food assistance to 1.0 million people for 1 year.

The U.S. and international community’s agricultural reconstruction efforts in Afghanistan have had limited impact, coordination of the assistance has been fragmented, and significant obstacles jeopardize Afghanistan’s long-term food security and political stability. Because of drought and adverse political conditions, agricultural assistance provided by the international community has not measurably improved Afghanistan’s long-term food security. In 2002, collective efforts to coordinate reconstruction assistance, especially with the Afghan government, were ineffectual and, as a result, no single operational strategy has been developed to manage and integrate international agricultural assistance projects. Finally, the inadequacy of proposed agricultural assistance, the increase in domestic terrorism, warlords’ control of much of the country, and opium production all present obstacles to the international community’s goal of achieving food security and political stability in Afghanistan. For most of the period 1999–2002, because of war and drought, FAO, bilateral donors, and more than 50 nongovernmental organizations in Afghanistan focused resources primarily on short-term, humanitarian relief; consequently, the impact of this effort on the agricultural sector’s long-term rehabilitation was limited.
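The contrast above between the ration program's share of delivered weight and its share of cost can be verified from the report's figures:

```python
# Weight share vs. cost share of the humanitarian daily ration program, FY2002.
# All inputs are figures stated in the report.
ration_metric_tons = 2_835               # weight of rations dropped
total_metric_tons = 365_170              # total food aid delivered in FY2002
ration_program_cost = 50_897_769         # USD, Defense Department ration program
usaid_usda_emergency_cost = 178_068_786  # USD, USAID/USDA emergency aid, Oct 2001 - Sep 2002

weight_share = ration_metric_tons / total_metric_tons
cost_share = ration_program_cost / usaid_usda_emergency_cost

print(f"weight share: {weight_share:.2%}")  # 0.78%
print(f"cost share:   {cost_share:.1%}")    # 28.6%
```

The roughly 37-to-1 gap between cost share and weight share is the quantitative core of the report's conclusion that the ration program was an expensive and inefficient delivery method.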
The assistance was provided in an effort to increase short-term food security and decrease Afghanistan’s dependence on emergency food assistance. During most of the 4-year period, FAO provided $28 million in assistance to Afghanistan, partly under the UN Development Program’s (UNDP) Poverty Eradication and Community Empowerment program and partly as a donor-funded response to the drought. The poverty eradication program ended in 2002, but FAO continues its projects in Afghanistan. FAO’s short-term activities focus on efforts to enable war- and drought-affected populations to resume food production activities. These activities include providing agricultural inputs such as tools, seed, and fertilizer; controlling locusts; and making repairs to small-scale irrigation systems (see fig. 9). Its longer-term activities include, among other things, the establishment of veterinary clinics, assistance in the production of high-quality seed through 5,000 contracted Afghan farmers, and horticulture development. From 1999 to 2002, bilateral efforts focused on the distribution of agricultural inputs and the repair of irrigation systems. USAID activities currently include developing a market-based distribution system for agricultural inputs as well as distributing high-quality seed. As of March 2002, at least 50 of the approximately 400 national and international nongovernmental organizations working in Afghanistan were involved in agriculture-related assistance, including providing agricultural inputs, farmer training, microcredit, and the construction of wells. For most of the 4-year period, the rise of the Taliban, the continuing conflict with the Northern Alliance, and the ongoing drought prevented the international community from shifting from short-term relief projects to longer-term agricultural rehabilitation projects and reversed earlier advancements in agricultural production.
For example, by 1997, agriculture in some areas had returned to prewar levels, and Afghanistan as a whole had reached 70 percent self-sufficiency in the production of cereals. At the time, assistance agencies were planning to implement longer-term assistance activities but were unable to do so owing to drought and conflict. These same factors resulted in decreases in cereal production and livestock herds of 48 percent and 60 percent, respectively, from 1998 through 2001. In 2002, a number of longer-term agricultural rehabilitation efforts were started, including efforts by USAID to reestablish agricultural input and product markets. However, these efforts have not been evaluated, and it is too early to determine their sustainability after donor assistance ends or their long-term impact. International assistance, including agricultural assistance, was not well coordinated in 2002, and, as a result, the Afghan government was not substantively integrated into the agricultural recovery effort and lacks an effective operational strategy. In December 2002, the Afghan government and the international community instituted a new mechanism, the Consultative Group, to improve coordination. However, the Consultative Group is similar in purpose and structure to a mechanism used earlier in 2002, the Implementation Group, and does not surmount the obstacles that prevented the Implementation Group’s success. Because of the lack of coordination, the Afghan government and the international community have not developed a single operational strategy to direct the agricultural rehabilitation effort; instead, all of the major assistance organizations have independent strategies. Although documents prepared by the Afghan government and others to manage assistance efforts contain some of the components of an effective operational strategy, these components have not been combined in a coherent strategy. 
The lack of an operational strategy hinders efforts to integrate projects, focus resources, empower Afghan government ministries, and make the international community more accountable. Despite efforts to synchronize multiple donors’ initiatives in a complex and changing environment, coordination of international assistance in general, and agricultural assistance in particular, was weak in 2002. According to the UN, assistance coordination refers to a recipient government’s integration of donor assistance into national development goals and strategies. From the beginning of the assistance effort in 2002, donors were urged to defer responsibility for assistance coordination to the Afghan government as stipulated in the Bonn Agreement. According to the UN, coordination rests with the Afghan government, efforts by the aid community should reinforce national authorities, and the international community should operate, and relate to the Afghan government, in a coherent manner rather than through a series of disparate relationships. The Security Council resolution that established the UN Assistance Mission in Afghanistan goes further; it states that reconstruction assistance should be provided through the Afghan government and urges the international community to coordinate closely with the government. In April 2002, the Afghan government attempted to exert leadership over the highly fragmented reconstruction process. To accomplish this task, the government published its National Development Framework. The framework provides a vision for a reconstructed Afghanistan and broadly establishes national goals and policy directions. The framework is not intended to serve as a detailed operational plan with specific objectives and tasks that must be pursued to accomplish national goals. Also, in 2002, the Afghan government established a government-led coordination mechanism, the Implementation Group (see app. 
VI for detailed descriptions and a comparison of the coordinating mechanisms). The intent of the Implementation Group was to bring coherence to the international community’s independent efforts and broad political objectives, such as ensuring Afghanistan does not become a harbor for terrorists. The mechanism’s structure was based on the National Development Framework. Individual coordination groups, led by Afghan ministers and composed of assistance organizations, were established for each of the 12 programs contained in the framework. The Implementation Group mechanism proved to be largely ineffective. Officials from the Afghan government, the UN, the Department of State, and USAID, as well as a number of nongovernmental bodies, expressed concern over the lack of meaningful and effective coordination of assistance in Afghanistan in 2002. For example, a high-ranking WFP official in Afghanistan said that coordination efforts since September 11, 2001, paid only “lip service” to collaboration, integration, and consensus. In August 2002, the Ministers of Foreign Affairs, Rural Reconstruction and Development, Irrigation, and Agriculture stated that the donor community’s effort to coordinate with the government was poor to nonexistent. A USAID official characterized the coordination of reconstruction in 2002 as an “ugly evolution” and “the most complex post-conflict management system” he had ever seen. The ineffectiveness of the Implementation Group mechanism resulted from its inability to overcome several impediments. First, each bilateral, multilateral, and nongovernmental assistance agency has its own mandate, established by implementing legislation or charter, and its own sources of funding, and each agency pursues development efforts in Afghanistan independently.
Second, the international community asserts that the Afghan government lacks the capacity and resources to effectively assume the role of coordinator and, hence, these responsibilities cannot be delegated to the government. Third, no single entity within the international community has the authority and mandate to direct the efforts of the myriad bilateral, multilateral, and nongovernmental organizations providing agricultural assistance to Afghanistan. Finally, efforts to coordinate agricultural assistance were further complicated because the Ministries of Agriculture, Irrigation, and Rehabilitation and Rural Development share responsibility for agriculture development. In December 2002, the Afghan government instituted a new coordination system, the Consultative Group mechanism. The overall objective of the Consultative Group in Afghanistan is to increase the effectiveness and efficiency of assistance coordination in support of goals and objectives contained in the National Development Framework. According to the Afghan government, the program-level consultative groups established under this mechanism provide a means by which the government can engage donors, UN agencies, and nongovernmental organizations to promote specific national programs and objectives presented in the government’s National Development Framework and the projects articulated in the Afghan National Development Budget. According to advisors to the Afghan government, the Consultative Group mechanism provides a real opportunity for donors to provide focused support for policy development, project preparation, implementation, monitoring, and evaluation. The Consultative Group mechanism in Afghanistan evolved out of the Implementation Group and is similar in its National Development Framework–based hierarchical structure, the role of the Afghan government, the membership and leadership of sector-specific groups, and stated goals (see app. VI).
One difference between the Implementation and Consultative Group mechanisms is that, since the establishment of the latter, the Afghan government has asked donor governments and assistance organizations to categorize their assistance projects under the subprograms in the National Development Framework and to direct funding toward the projects in the Afghan National Development Budget. Despite the effort to develop a more effective coordination mechanism, the Consultative Group mechanism has not surmounted the conditions that prevented the Implementation Group from effectively coordinating assistance. For example, in 2003, donor governments and assistance agencies have continued to develop their own strategies, as well as fund and implement projects outside the Afghan government’s national budget. In addition, agricultural assistance is divided among several consultative groups, including the groups for natural resources management and livelihoods. Further, unlike food assistance, for which donors primarily use one agency, WFP, to channel resources, donors continue to use a variety of channels for their agricultural assistance. Although the Afghan government asserts that it is assuming a greater level of leadership over the coordination effort, as of May 2003, we could not determine whether the new coordination mechanism would be more successful than earlier efforts. Because of the inadequate coordination of agricultural assistance, the Afghan government and the international community have not developed an operational agricultural sector strategy. Each assistance agency has published its own development strategy that addresses agriculture and numerous other sectors.
The Consultative Group mechanism and the National Development Framework, as well as other documents prepared by the Afghan government and others to manage assistance efforts, contain some of the components of an effective operational strategy, such as measurable goals and impediments to their achievement. However, these components have not been incorporated in a single strategy. Without an integrated operational strategy, jointly developed by the Afghan government and the international community, the Afghan government lacks a mechanism to manage the agricultural rehabilitation effort, focus limited resources, assert its leadership, and hold the international donor community accountable.

Assistance Agencies Have Developed Separate Strategies

No donor has taken the lead in the agricultural sector; consequently, multilateral, bilateral, and nongovernmental organizations, including the UN, FAO, the Asian Development Bank, the World Bank, USAID, and others, have prepared individual strategies that address, to varying degrees, agricultural reconstruction and food security. However, these strategies lack measurable national goals for the sector and have not been developed jointly with the Afghan government. For example, in August 2002, the Minister of Agriculture stated, “The ministry does not know the priorities of the international community for the agricultural sector, how much money will be spent, and where the projects will be implemented.” FAO claimed that the Ministry of Agriculture had endorsed FAO’s agricultural rehabilitation strategy. However, no letter of agreement or memorandum of understanding between FAO and the ministry documents the acceptance of the strategy. The Minister of Agriculture told us, in December 2002, that the ministry had not endorsed FAO’s latest strategy. Further, the Ministry of Agriculture presented a list of more than 100 prioritized rehabilitation projects to the international community.
As of late December 2002, the international community had not responded regarding the ministry’s proposed projects.

Components of an Operational Strategy Have Not Been Integrated into a Single Document

Although Consultative Group mechanism–related documents, the Afghan National Development Framework, and other documents prepared by the Afghan government and others to manage assistance efforts contain some of the components of an effective operational strategy, these components have not been incorporated in a single strategy. For an operational agricultural strategy to be effective, all relevant stakeholders must participate in its formulation. In this case, stakeholders include the Afghan Ministries of Agriculture and Irrigation and key nongovernmental, multilateral, and bilateral development organizations. Further, such strategies must establish measurable goals, set specific time frames, determine resource levels, and delineate responsibilities. For example, in Afghanistan, one such goal might be to increase the percentage of irrigated land by 25 percent by 2004 through the implementation of $100 million in FAO-led irrigation projects in specific provinces. In addition, an operational strategy should identify external factors that could significantly affect the achievement of goals and include a schedule for future program evaluations. Stakeholders should implement the strategy through projects that support the measurable goals of the strategy and broader policy objectives, such as those contained in the Afghan Government’s National Development Framework (see fig. 10). The Implementation Group and its successor, the Consultative Group, as well as the National Development Framework and other documents, contain some of the essential elements of an operational strategy. These elements include the involvement of key stakeholders, the development of some measurable objectives, and the identification of external factors that could affect the achievement of goals.
However, since the National Development Framework is a general national strategy and not a detailed operational strategy, it is sufficiently broad that any assistance to the agricultural sector could be considered supportive of the framework, even if the assistance were not well targeted or made no significant impact. In addition, the various elements of an effective operational strategy that are contained in the National Development Framework and other documents have not been effectively applied, nor has a single agricultural sector strategy incorporating all of these elements been developed. The UN Assistance Mission for Afghanistan’s management plan endorses the formulation of joint strategies for reconstruction. In late December 2002, Afghanistan’s Minister of Agriculture told us that he would welcome the development of a joint Afghan–international agricultural sector strategy containing clear objectives, measurable goals, concrete funding levels, and clearly delineated responsibilities. In January 2003, FAO’s Assistant Director-General of Technical Cooperation stated that FAO would welcome the opportunity to assist the Ministry of Agriculture in preparing a strategy. The Consultative Group mechanism could serve as a vehicle to support the development of such a strategy. In March 2003, Afghan government advisors told us that consultative groups could develop strategies based on the subprograms contained in the National Development Framework and National Development Budget. Proposals for the development of strategies pertaining to natural resources management, including agriculture, have been drafted, and support for these proposals is being sought from the international community. 
Lack of Operational Agricultural Sector Strategy Limits Integration and Oversight

The lack of an operational agricultural sector strategy hinders efforts to integrate disparate projects, focus limited assistance resources, place Afghan government ministries in a leadership role, and make the international community more accountable to the Afghan government. In its October 2002 National Development Budget, the Afghan government cited the lack of a strategic framework for the natural resources management sector, including agriculture, as an impediment to rehabilitation. Absent an operational strategy, the Afghan government lacks a mechanism to integrate disparate projects into an effective agricultural rehabilitation effort; to manage finite resources so as to ensure the greatest return on investment; and to guide the efforts of the international community and assert the Afghan government’s leadership in agricultural reconstruction. Finally, an operational agricultural sector strategy that includes measurable goals and the means to assess progress against those goals could increase accountability. Because no comprehensive integrated strategy exists, the Afghan government lacks the means to hold the international assistance community accountable for implementing the agricultural sector reconstruction effort and achieving measurable results.

Major obstacles to the goal of a food-secure and politically stable Afghan state include inadequate assistance funding, as well as a volatile security situation, long-standing power struggles among warlords, and the rapid increase in opium production. Donor support has not met Afghanistan’s recovery and reconstruction needs, and future funding levels for agricultural assistance may be inadequate to achieve the goal of food security and political stability, primarily because assistance levels are based on what the international community is willing to provide rather than on Afghanistan’s needs.
Meanwhile, the continued deterioration of the security situation, exacerbated by a rising incidence of terrorism, the resurgence of warlords, and near-record levels of opium production, is impeding reconstruction and threatens to destabilize the nascent Afghan government. Total assistance levels, including those for agricultural reconstruction, proposed at the Tokyo donors’ conference in January 2002 do not provide Afghanistan with enough assistance to meet its estimated needs. The preliminary needs assessment prepared for the January 2002 donors’ conference in Tokyo estimated that, in addition to humanitarian assistance such as food and shelter assistance, between $11.4 billion and $18.1 billion over 10 years would be needed to reconstruct Afghanistan (see table 2). Others have estimated that much more is required. For example, the Afghan government estimated that it would need $15 billion for reconstruction from 2003 through 2007. In January 2002, donors pledged $5.2 billion for the reconstruction of Afghanistan for 2002–2006, or slightly more than half of the base-case estimate for 5 years. For the period January 2002–March 2003, the donors pledged $2.1 billion (see app. VII for donor pledges and donations). As of March 2003, approximately 88 percent of the 2002 grant funding had been disbursed. However, only 27 percent, or $499 million, was spent on major reconstruction projects such as roads and bridges, which are essential for the export of Afghan agricultural commodities and the import of foreign agricultural supplies. Despite the importance that the United States and the international community attach to the Afghan reconstruction effort, Afghanistan is receiving less assistance than was provided for other recent postconflict, complex emergencies. For example, per capita assistance levels have ranged from $193 in Rwanda to $326 in Bosnia, compared with $57 for Afghanistan.
Given that the livelihood of 22 million Afghans depends on agriculture, we estimated that if all of the assistance had been provided only to people engaged in agriculture, each person would have received $67 annually, or about 18 cents per day, for daily subsistence and agricultural production efforts in 2002. If Afghanistan were to receive per capita aid consistent with the average amounts provided for other recent postconflict reconstruction efforts, it would have received $6 billion in international assistance in 2002 and would receive $30 billion from 2002 to 2006, or nearly three times the base-case estimate. The funding proposed by donors for food security–related issues is limited and may be insufficient to achieve the long-term goals of the Afghan government and the international community. Despite the Afghan government’s estimated annual need of $500 million for agricultural rehabilitation, agricultural assistance for Afghanistan in 2003 may total approximately $230 million. Afghanistan’s President has emphasized that the goal of food security and political stability is the Afghan government’s overarching priority, and the United States and other donor governments recognize the strong link between stability and food security. According to the U.S. Department of State, reconstruction is an integral part of the campaign against terrorism: the U.S. policy goal in Afghanistan is to create a stable Afghan society that is not a threat to itself or others and is not a base for terrorism. Because the agricultural sector forms the core of the Afghan economy, the pace of the sector’s recovery will largely determine the rate of overall economic recovery. Sustained investment in the agricultural sector, particularly the rehabilitation, upgrading, and maintenance of the nation’s irrigation infrastructure, is essential for the recovery of the Afghan economy and the country’s long-term food security.
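The per capita comparisons above reduce to simple arithmetic. The following sketch restates them using only figures from the report:

```python
# Per capita assistance arithmetic for Afghanistan, from the report's figures.
aid_per_agricultural_person = 67              # USD per year if all aid went to people in agriculture
cents_per_day = aid_per_agricultural_person / 365 * 100
print(f"~{cents_per_day:.0f} cents per day")  # ~18 cents, as stated in the report

# If Afghanistan were funded at the average per capita level of other
# recent postconflict efforts ($6 billion per year, per the report):
annual_at_comparable_rate = 6_000_000_000     # USD per year
years_2002_2006 = 5
total_at_comparable_rate = annual_at_comparable_rate * years_2002_2006
print(f"${total_at_comparable_rate / 1e9:.0f} billion over 2002-2006")  # $30 billion
```

The $30 billion figure is simply the $6 billion annual rate held constant over the 5-year period, which is roughly three times the 5-year base-case needs estimate of about $10 billion implied by the Tokyo pledges.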
Despite improvements in agricultural production in 2002, owing primarily to increased precipitation, the fundamental weakness of Afghanistan’s agricultural infrastructure continues to threaten overall recovery efforts. The Ministry of Agriculture estimates that it needs $5 billion over 10 years to complete 117 key projects and other efforts important for the recovery of the sector. Despite these costs, the 2003 Afghan development budget for natural resource management, including agriculture, is only $155 million. Since the budget is funded almost entirely by the donor community, the budget reflects what the government expects to receive from the international community, not the Afghan government’s actual need. Afghan government budget estimates indicate that the natural resources management budget will increase to $298 million in 2004 and $432 million in 2005. International donors have budgeted approximately $230 million for agriculture-related assistance in 2003. USAID considers adequate funding a prerequisite for the success of the assistance effort and plans to spend approximately $50 million on agriculture in 2003 and similar amounts in 2004 and 2005. USAID funding covers 32 percent of the Afghan government’s 2003 natural resources management program budget of $155 million but only 10 percent of the Afghan Ministry of Agriculture’s estimated annual needs of $500 million. The goal of a stable Afghan state is threatened by the rise in domestic terrorism, long-standing rivalries among warlords, and the rapid increase in opium production. In March 2002, in a report to the UN Security Council, the UN Secretary General stated that security will remain the essential requirement for the protection of the peace process in Afghanistan. 
One year later, in a report to the council, he stated that “security remains the most serious challenge facing the peace process in Afghanistan.” Others in the international community, including USAID, consider security a prerequisite for the implementation of reconstruction efforts. In 2002 and early 2003, the deteriorating security situation was marked by terrorist attacks against the Afghan government, the Afghan people, and the international community. These incidents have forced the international community to periodically suspend agricultural assistance activities, disrupting the agricultural recovery effort. Meanwhile, clashes between the warlords’ private armies continue to destabilize the country and reduce the Afghan government’s ability to fund agricultural reconstruction. The warlords foster an illegitimate economy fueled by the smuggling of arms, drugs, and other goods. They also illegally withhold hundreds of millions of dollars in customs duties collected at border points in the regions they control, depriving the central government of revenues needed to fund the country’s agricultural reconstruction. The warlords control private armies of tens of thousands of armed men. Across Afghanistan, approximately 700,000 Afghan men are armed, and half of these are combat trained. USAID considers the demobilization and reintegration of these armed men a prerequisite for the success of the international recovery effort. Currently, the unemployment rate in Afghanistan is estimated at 50 percent. Without a revitalization of the agricultural sector—the engine of the Afghan economy and the main source of employment—it is likely that these men will remain in the employ of the warlords. Another destabilizing force that affects agriculture is the illicit international trade in Afghan opiates. The drug trade was the primary income source of the Taliban and continues to provide income for terrorists and warlords.
On January 17, 2002, the President of Afghanistan issued a decree stating that the existence of an opium-based economy was a matter of national security and should be fought by all means. During the 1990s, Afghanistan became the world’s leading opium producer, accounting for approximately 70 percent of opium production worldwide. Although poppy eradication was a central focus of a number of international donors engaged in Afghanistan, the eradication efforts implemented by the Afghan government and the international community in 2002 failed. In July 2002, one of Afghanistan’s vice presidents and the leader of the Afghan government’s poppy eradication campaign, Haji Qadir, was assassinated. In October 2002, the UN Office for Drug Control and Crime Prevention estimated that, in 2002, Afghan farmers produced 3,400 metric tons of opium. This level of production equals or exceeds the levels achieved in 9 of the last 10 years. Revenue from opium production in 2002 totaled $1.2 billion, an amount equivalent to 70 percent of the total assistance to Afghanistan pledged for 2002, or nearly 220 percent more than the Afghan government’s 2003 operating budget. The UN Drug Control Program also estimated that the average poppy farmer earned $4,000 from growing poppies in 2002. Owing to continuing drought, a poor agricultural marketing structure, and widespread poverty, farmers have turned to poppy cultivation to avoid destitution. Since the fall of the Taliban, irrigated acreage dedicated to wheat production has fallen by 10 percent, supplanted by opium poppies. In addition, an estimated 30 to 50 percent of Afghans are involved in opium cultivation. Many of these farmers continue to grow opium poppies because they lack the seed and fertilizer needed to grow alternative crops that generate revenues comparable to those from opium.
The establishment of a new government in Afghanistan has provided the Afghan people, the international community, and the United States an opportunity to rebuild Afghanistan and create a stable country that is neither a threat to itself or its neighbors nor a harbor for terrorists. In 2002, U.S. and international food assistance averted famine, assisted the return of refugees, and helped to implement reconstruction efforts. However, U.S. food assistance and cargo shipping legislation limited the United States’ flexibility in responding quickly to the emergency and providing support to WFP; the legislation does not provide for purchasing commodities regionally or donating cash to the UN for procuring commodities, and it requires that U.S. commodities be shipped on U.S.-flag vessels. Consequently, the costs of food assistance were higher, delivery times were longer, fewer commodities were purchased, and fewer people received food assistance. In addition, a lack of timely and adequate overall donor support disrupted WFP’s food assistance efforts. Meanwhile, in 2003, six million people will require food assistance in Afghanistan. Because the economy remains overwhelmingly agricultural, the pace of recovery in the agricultural sector will largely determine the rate of Afghanistan’s overall recovery. Food assistance alone cannot provide food security; Afghanistan’s agricultural sector must be rehabilitated. Environmental and political problems have limited the impact of the international community’s agricultural assistance efforts. In addition, in 2002, the assistance efforts were not coordinated with each other or with the Afghan government. A new coordination mechanism established in December 2002 is largely similar to earlier mechanisms, and it is too recent for us to determine its effectiveness. Further, whereas U.S.
and UN agencies, bilateral donors, and nongovernmental organizations have drafted numerous overlapping recovery strategies, no single Afghan government–supported strategy is directed toward the effort to rehabilitate the sector. Meanwhile, funding for the agricultural assistance effort is insufficient, and the nascent Afghan government is plagued with problems stemming from domestic terrorism, the resurgence of warlords, and near-record levels of opium production. These obstacles threaten the recovery of the agricultural sector and the U.S. goals of achieving food security and political stability in Afghanistan. To increase the United States’ ability to respond quickly to complex emergencies involving U.S. national security interests, such as that in Afghanistan, Congress may wish to consider amending the Agricultural Trade Development and Assistance Act of 1954 (P.L. 83-480), as amended, to provide the flexibility, in such emergencies, to purchase commodities outside the United States when necessary and provide cash to assistance agencies for the procurement of non-U.S.-produced commodities. In addition, Congress may wish to amend the Merchant Marine Act of 1936, as amended, to allow waiver of cargo preference requirements in emergencies involving national security. These amendments would enable the United States to reduce assistance costs and speed the delivery of assistance, thus better supporting U.S. foreign policy and national security objectives. To increase the effectiveness of the agricultural assistance effort in Afghanistan, we recommend that the Secretary of State and the Administrator of the U.S. Agency for International Development work through the Consultative Group mechanism to develop a comprehensive international–Afghan operational strategy for the rehabilitation of the agricultural sector.
The strategy should (1) contain measurable goals and specific time frames and resource levels, (2) delineate responsibilities, (3) identify external factors that could significantly affect the achievement of goals, and (4) include a schedule for program evaluations that assess progress against the strategy’s goals. We provided a draft of this report to WFP, Department of State, USDA, USAID, and Department of Defense and received written comments from each agency (see app. VIII, IX, X, XI, and XII, respectively). We also received technical comments from USDA, the Departments of Defense and State, USAID, FAO, and the World Bank, and incorporated information as appropriate. Department of State, USDA, and USAID all commented on our matter for congressional consideration related to amending food assistance legislation. WFP supported our suggestion that Congress consider amending the Agricultural Trade Development and Assistance Act of 1954 to allow the provision of non-U.S. commodities when such action supports U.S. national security. However, State, USDA, and USAID did not support this suggestion. Specifically, although State accepted our evidence that purchasing commodities from the United States is not the most cost-effective method of providing assistance, it believes that further study of potential variables, such as regional customs fees, taxes, and trucking costs, that may negate cost-benefit savings is needed before the act is amended. USAID stated that an amendment is not necessary because other authorities under the Foreign Assistance Act allow the provision of cash, and the proposed $200 million Famine Fund announced by the President in February 2003 would also increase the flexibility of U.S. assistance programs. USDA stated that the flexibility to quickly respond to humanitarian crises can be achieved through means, such as amending cargo shipping legislation, that would not adversely affect the provision of U.S. commodities.
Specifically, USDA suggested adding a national security waiver to the U.S. regulations that govern how U.S. assistance is transported, to eliminate the requirement to use U.S.-flag vessels in certain circumstances. We do not disagree that under broad disaster assistance legislation U.S. agencies may provide cash or purchase food aid commodities outside the United States. However, we maintain that amending the Agricultural Trade Development and Assistance Act of 1954 to allow the provision of cash or food commodities outside the United States will greatly improve U.S. flexibility in responding to crises that affect U.S. national security and foreign policy interests. The act is the principal authority for providing food assistance in emergency situations. In both 2002 and 2003, over $2 billion in food assistance, the preponderant amount of this type of assistance, was disbursed under this authority. Amending the act will provide the United States with more flexibility to respond rapidly and at lower cost to events that affect U.S. national security; this is particularly important given the number and magnitude of crises requiring food assistance and decreasing surpluses of U.S. commodities. We also agree with USDA that the cargo preference requirement adds additional cost to food assistance and should be waived in specific situations, and we have adjusted the matter for congressional consideration contained in the report on this issue. In its comments, USDA stated that the report did not provide enough evidence about the existence of surpluses in 2002 in the Central Asia region. It also stated that if the United States had procured greater levels of commodities with the savings accrued by purchasing regional versus U.S.-origin commodities, the additional commodities would have overburdened WFP’s logistics system while generating only “marginal savings in time and money.” We have added information on the 7.6 million metric ton 2002 grain surplus in Kazakhstan and Pakistan. We disagree with USDA’s assertion that additional regionally procured commodities would have taxed WFP’s logistics system and brought only marginal gains. In December 2001, while fighting between coalition forces, the Northern Alliance, and the Taliban was still occurring and winter weather was complicating food deliveries, WFP delivered 116,000 metric tons of food to Afghan beneficiaries, in the single largest movement of food by WFP in a 1-month period. According to WFP, its Afghanistan logistics system was capable of routinely moving more than 50,000 metric tons of food per month. Further, we disagree with USDA’s statement that the potential savings in cost and time from purchasing commodities regionally are marginal. Savings from the elimination of ocean freight costs could have fed 685,000 people for 1 year, and commodities purchased regionally are delivered to beneficiaries within weeks of being purchased, compared with the 4 months that it can take for commodities purchased in the United States. WFP, the Department of State, USDA, and USAID all agreed with the report’s conclusion and recommendation pertaining to assistance coordination and the need to develop a joint international-Afghan agricultural rehabilitation strategy. WFP pointed out that although the international assistance effort may have been aided by better coordination in 2002, the overall level of assistance might have been too small in 2002 to have any long-term impact on the agricultural sector. Although USAID agreed with our recommendation, it stated it did not want to lead the strategy development effort.
We believe that USAID should take an active and aggressive role in the development of a joint international–Afghan government strategy, because the United States is the largest donor to Afghanistan, agriculture rehabilitation is the focus of USAID’s assistance effort in Afghanistan, and the achievement of U.S. goals in Afghanistan is tightly linked to the rehabilitation of the country’s agricultural sector. According to USAID’s assistance strategy for Afghanistan, restoring food security is USAID’s highest priority. Finally, the Department of Defense focused its comments on the report’s discussion of the humanitarian daily ration program. Specifically, the Department of Defense stated that (1) the report incorrectly characterized the ration program as a food assistance program, (2) informal evaluations of the program indicated that the program alleviated hunger and generated goodwill from the Afghan people toward U.S. soldiers, and (3) although the funds used to purchase rations could have been used to purchase bulk food, the bulk food could not have been delivered to remote areas. The report discusses both the food assistance and nonfood assistance aspects of the rations program, and we have added information to the report, on page 30, about the goodwill generated by the rations. Further, as discussed on page 20 of the report, bulk food could have been delivered to remote areas during the period of time (October-December 2001) when the ration program was implemented. During the month of December 2001, WFP delivered 116,000 metric tons of food to Afghanistan, a level of food assistance that exceeds any 1-month total for any emergency operation in WFP’s history. We are sending copies of this report to the Honorable Richard J. Durbin, Ranking Minority Member, Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia, Committee on Governmental Affairs, and to the Honorable Frank R.
Wolf, Chairman, Subcommittee on Commerce, Justice, State, and the Judiciary, Committee on Appropriations, House of Representatives. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347. Other GAO contacts and staff acknowledgments are listed in appendix XIII. To examine the management, cost, and sufficiency of U.S. and international food assistance since 1999, we reviewed documents obtained from the World Food Program (WFP) and the U.S. Agency for International Development (USAID). Specifically, we reviewed program documentation for recent emergency and special operations; WFP Afghanistan Country Office quarterly and annual reports; WFP’s Emergency Field Operations Manual and Food Aid in Emergencies Redbook; country office monitoring guidelines; Afghanistan area office strategies; memorandums of understanding and letters of agreement signed by WFP and United Nations (UN) agencies, nongovernmental organizations, and the Afghan government; and monitoring reports prepared by USAID staff. In addition, we analyzed project monitoring and loss data to determine the frequency of monitoring visits, the experience and education level of monitors, and the level of commodities lost versus those delivered. We did not verify the statistical data provided by WFP. We also reviewed donor resource contribution data for recent emergency and special operations. We contacted by e-mail, or spoke with, 14 Afghan and international nongovernmental organizations to obtain their views on the delivery of assistance, WFP monitoring and reporting, and overall assistance coordination issues. 
We interviewed WFP management and staff at WFP headquarters in Rome, Italy; at the Regional Bureau for the Mediterranean, Middle East, and Central Asia, in Cairo, Egypt; at the Country Office in Kabul, Afghanistan; and at the Area Office in Hirat, Afghanistan. We also interviewed USAID, U.S. Department of Agriculture (USDA), and U.S. Department of State staff in Washington, D.C., and Kabul; U.S. Department of Defense staff in Washington; the International Security Assistance Force, UN Development Program (UNDP), and UN Assistance Mission in Afghanistan (UNAMA) staff in Kabul; and UN High Commissioner for Refugees staff in Kabul and Hirat. Finally, we visited WFP project sites and warehouses in Kabul and Hirat. The number of sites visited was limited by constraints that the U.S. Embassy, citing security considerations, placed on our movement within Afghanistan. We also examined cost data provided by USDA and USAID. The data included commodity costs; total ocean freight charges; inland freight; internal transport, storage, and handling charges; and administrative support costs. We used the data to calculate two additional expenses, per USDA statements about the composition of costs and additional costs that are not stated on the data sheets. First, the “freight forwarder” fees represent 2.5 percent of the actual ocean freight cost. Thus, ocean freight charges were divided between actual freight costs (total freight divided by 1.025) and freight forwarder fees (total freight minus actual freight costs). This was true for both USDA and USAID assistance. In the final analysis, the freight forwarder fee was included in the ocean freight cost because it is an expense that would not have been incurred if ocean shipping had not been used. Second, with each donation to WFP, USDA provides an administrative support grant at the rate of 7.5 percent of the total value of the donated commodities. We calculated these data accordingly.
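The two derived expenses reduce to short calculations. The following Python sketch implements the fee decomposition and the grant rate as described in the methodology; the dollar amount in the example is hypothetical.

```python
def decompose_ocean_freight(total_freight: float) -> tuple[float, float]:
    """Split total ocean freight charges into the freight forwarder fee
    and the actual freight cost. The fee is 2.5 percent of the actual
    freight, so actual = total / 1.025 and fee = total - actual."""
    actual = total_freight / 1.025
    fee = total_freight - actual
    return fee, actual

def admin_support_grant(commodity_value: float) -> float:
    """USDA administrative support grant: 7.5 percent of the total
    value of the donated commodities."""
    return 0.075 * commodity_value

# Hypothetical example: $1,025,000 in total ocean freight charges.
fee, actual = decompose_ocean_freight(1_025_000)
print(round(fee), round(actual))  # 25000 1000000
```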
We checked all USAID and USDA data for validity, where possible to the level of the individual shipment. We cross-checked USAID data with USDA data. (USAID typically provided only estimated costs for commodities for the period 1999–2002. Because USDA conducts almost all commodity purchases for USAID, USAID estimates the commodity costs at the time it places its order with USDA, based on the current market cost. However, USDA provided actual costs for USAID purchases in 1999, 2000, and 2001; thus, only the USAID commodity costs we cited for 2002 are based on USAID's estimate.) We then compared the cost of the U.S.-purchased commodities with the cost of commodities purchased in the Central Asia region to determine whether any savings could have been realized by purchasing commodities regionally versus buying U.S. commodities. Finally, using the level of rations that WFP provides to returning refugees, 12.5 kilograms per month, we calculated the amount of food assistance that the United States could have purchased and the number of people that could have received food assistance if it had purchased commodities in the Central Asia region. Further, we examined the costs associated with the Department of Defense’s Afghan humanitarian daily ration program, implemented from October 2001 through December 2001. Applying the same 12.5-kilogram monthly ration level, we calculated the amount of food assistance that the ration program’s funds could have purchased, and the number of people who could have received food assistance, had the commodities been bought in the Central Asia region. In addition, we reviewed relevant food assistance legislation including the Agricultural Trade Development and Assistance Act of 1954 (P.L. 83-480) to determine whether provisions in the law allowed the U.S.
government to purchase commodities outside the United States or provide cash transfers to assistance agencies for the provision of commodities from sources other than U.S. suppliers. To assess U.S. and international agricultural assistance, coordination, strategies, and funding intended to help Afghanistan maintain stability and achieve long-term food security, we reviewed documentation provided by FAO, UNDP, and UNAMA; the World Bank; the Asian Development Bank; USAID; and the Afghan Ministries of Agriculture and Animal Husbandry, and Irrigation and Water Resources. We reviewed information pertaining to past and current coordination mechanisms in the Afghan government’s National Development Framework and National Development Budget. We examined the structure and content of the assistance strategies published by FAO, UNDP, UNAMA, the European Commission, the World Bank, Asian Development Bank, and USAID, and we examined the proposed funding levels contained in each strategy. Using the criteria contained in the U.S. Government Performance and Results Act, we examined the strategies to determine whether each contained the basic elements of an operational strategy articulated in the act. Further, we examined the overall assistance funding requirements contained in the January 2002 UNDP, World Bank, and Asian Development Bank Comprehensive Needs Assessment, which served as a guideline for international donor contributions for Afghanistan. We interpolated the funding projection data to construct annual aid flows, so that the cumulative totals were equal to those contained in the assessment. Assuming that the first year of data referred to 2002, we applied the U.S. gross domestic product deflator to convert the assumed current dollar figures into constant 2003 dollars. 
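The funding-projection adjustment described above can be sketched in Python. This is an illustrative reconstruction: it assumes each multiyear cumulative total is spread evenly across its years (the report does not specify the interpolation method), and the projection and deflator values below are placeholders, not actual assessment or U.S. GDP deflator data.

```python
def annualize(period_totals: dict) -> dict:
    """Interpolate cumulative multiyear totals into annual flows by
    splitting each (start, end) period's total evenly across its years,
    so the annual flows sum to the original cumulative totals."""
    flows = {}
    for (start, end), total in period_totals.items():
        years = end - start + 1
        for year in range(start, end + 1):
            flows[year] = flows.get(year, 0.0) + total / years
    return flows

def to_constant_2003_dollars(flows: dict, deflator: dict) -> dict:
    """Convert current-dollar flows to constant 2003 dollars using a
    GDP deflator indexed so that deflator[2003] == 1.0."""
    return {year: amount / deflator[year] for year, amount in flows.items()}

# Hypothetical projection: $4.9 billion over 2002-2006 in two periods.
flows = annualize({(2002, 2003): 1.8e9, (2004, 2006): 3.1e9})
# Placeholder deflator values for illustration only.
deflator = {2002: 0.98, 2003: 1.00, 2004: 1.02, 2005: 1.04, 2006: 1.06}
constant_2003 = to_constant_2003_dollars(flows, deflator)
```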
Further, we examined security reports produced by the Department of Defense and the UN, as well as the UN Office on Drugs and Crime report on opium production in Afghanistan, to determine the impact of warlords and opium production on food security and political stability. In addition, we discussed U.S. and international agricultural assistance efforts and food security issues with officials from USAID in Washington and Kabul; FAO in Rome and Kabul; UNDP and the Afghan Ministries of Communication, Foreign Affairs, Interior, Rural Rehabilitation and Development, and Irrigation and Water Resources in Kabul; and the Afghan Ministry of Agriculture in Kabul and Washington. We conducted our review from April 2002 through May 2003 in accordance with generally accepted government auditing standards. Free food is delivered to the most vulnerable populations. Malnourished children, pregnant and nursing mothers, and people undergoing treatment for tuberculosis and leprosy are provided with a blended mix of either milled corn and soy or wheat and soy, in addition to sugar and oil, through feeding centers, hospitals, clinics, and orphanages. Returning refugees, internally displaced persons, and people involved in the poppy industry, among others, reconstruct and rehabilitate irrigation canals, roads, and other infrastructure. The program provides wages in the form of food and tools. Men and women of the community decide which families should receive food. Able-bodied households contribute their labor to construct or rehabilitate an asset, such as an irrigation canal, that benefits the entire community. Those who cannot contribute labor also receive food, and they benefit from the community asset. Food is distributed to students in school to encourage families to send their children to school. To encourage families to support the education of females, additional food is provided to female students. Food is also provided to teachers to supplement their low salaries.
Food is provided to women who participate in informal education activities, including technical skills and literacy training. Food is exchanged for improved seed grown by contract farmers. The seed is then sold to other farmers. Daily rations of bread are provided to more than 250,000 people. Women operate 41 of the 100 bakeries. Approximately 270,000 civil servants were provided with pulses and oil to supplement their salaries and help the Afghan government reestablish itself. Food assistance is provided as part of a resettlement package to help people reestablish themselves in their home areas or chosen community. The World Food Program uses a number of mechanisms to minimize losses and ensure that its commodities are well managed. The mechanisms include real-time automated tracking, periodic monitoring visits to project sites, required periodic reports from implementing partners, and end-of-project evaluations. The program’s global automated tracking system, the Commodity Movement and Progress Analysis System, is intended to record and report all commodity movement, loss, and damage. Each WFP suboffice in Afghanistan has access to the system and employs a clerk dedicated to managing it. The system produces a number of reports, including stock, damage, and loss reports. WFP guidelines state that monitoring and reporting are essential parts of effective project management in the field, and it is WFP’s policy not to support any project that cannot be monitored. Monitoring activities are intended to assess the status of projects by comparing the actual implementation of activities to the project’s work plan. The responsibility for monitoring projects rests with the program’s country office in Kabul and five Afghan suboffices located in other cities. Each office employs between 6 and 24 local Afghan project monitors, and WFP has 22 program staff in Afghanistan who also monitor projects, in addition to their other duties.
WFP’s Afghan country office has developed monitoring guidelines for its monitors and monitoring checklists for each type of activity (e.g., food-for-work, food-for-seed, food-for-asset-creation, food-for-education). According to WFP, monitoring visits include an examination of project inputs, current operations, outputs, and immediate effects. Specific monitoring activities include an examination of food stocks held by implementing partners. The monitors spot-check the weight of randomly selected bags in storage and compare the total stock held with WFP stock balance reports. The monitors also survey local markets to determine whether any WFP food is being resold rather than used by beneficiaries. Projects are monitored on a periodic basis. WFP tries to visit each project when it starts, during its implementation, and when it is completed. The WFP data that we examined indicated that projects implemented in Afghanistan between April 2002 and November 2002 received, on average, 2.4 monitoring visits. In addition to the project monitoring visits, WFP requires its implementing partners to report on the status of projects on a monthly basis. WFP project proposals and the letters of agreement signed by WFP and its implementing partners stipulate that monthly and end-of-project reports must be submitted to WFP. The end-of-project reports include an assessment of the achievement of project objectives and a breakdown of budget expenditures. Between 1998 and 2003, as circumstances in Afghanistan changed, the coordination processes utilized by the international community and the Afghan government evolved (see table 3 and figure 11). Beginning in 1998, the international community employed a strategy of Principled Common Programming among United Nations agency, nongovernmental, and bilateral donor programs.
The international community’s aim was to establish priorities and projects based on agreed-upon goals and principles that would form the UN’s annual consolidated appeal for assistance. To implement Principled Common Programming, a number of coordination mechanisms were established, including the Afghan Programming Body. The programming body consisted of the Afghan Support Group, 15 UN representatives, and 15 nongovernmental organizations and was responsible for making policy recommendations on issues of common concern, supporting the UN’s annual consolidated appeal for donor assistance, and promoting coordination of assistance efforts. The Taliban government had no role in the programming body. The programming body was supported by a secretariat; working-level operations were conducted by a standing committee and thematic groups responsible for analyzing needs, developing strategies and policies, and setting assistance priorities within their thematic areas (e.g., the provision of basic social services). The Afghan Programming Body and its standing committee were incorporated into the Implementation Group/Program Group process established in 2002. Table 3 describes the Afghan assistance coordination mechanisms in place in 2002. In December 2002, the Afghan government instituted the Consultative Group coordination process in Afghanistan. The process evolved out of the previous Implementation/Program Group processes. (Table 4 compares the two processes.) The Consultative Group process retains the same basic hierarchical structure that was established under the Implementation Group process. For example, the new process includes 12 groups, each led by an Afghan government minister, organized around the 12 programs contained in the Afghan government’s National Development Framework.
In addition to the 12 groups, 2 consultative groups covering national security programs (i.e., the national army and police); and 3 national working groups on disarmament, demobilization, and reintegration; counternarcotics; and demining were established. Further, 5 advisory groups were established to ensure that cross-cutting issues, such as human rights, are mainstreamed effectively in the work of the 12 consultative groups and reflected in the policy framework and budget. Each consultative group will assist in policy management, as well as in monitoring the implementation of activities envisaged under the Afghan government’s national budget. The groups will assist in preparing the budget, provide a forum for general policy dialogue, monitor the implementation of the budget, report on indicators of progress for each development program, and elaborate detailed national programs. The groups, with assistance from the standing committee, will also focus on monitoring performance against benchmarks established by each group. Each lead ministry will select a focal point, or secretariat, organization from among donors and UN agencies. Each year, in March, the Afghanistan Development Forum, or national consultative group meeting, will be held to discuss the budget for the next fiscal year, review national priorities, and assess progress. At that time, the consultative groups will report to the Consultative Group Standing Committee. (Russia did not pledge at Tokyo; Russian assistance has been primarily in-kind donations.) The following are GAO’s comments on the letter from the United Nations World Food Program dated June 2, 2003. 1. Although changes in the coordination mechanism utilized in Afghanistan were introduced in 2003, the Afghan government and the international community still lack a common, jointly developed strategy for rehabilitating the agricultural sector.
We believe that such a strategy, including measurable goals and a means to evaluate progress toward achieving the goals, is needed to focus limited resources and hold the international community accountable for the assistance it delivers. The following are GAO’s comments on the letter from the Department of State dated June 3, 2003. 1. The U.S. Agency for International Development (USAID) currently purchases limited amounts of regional food commodities in an effort to respond quickly to humanitarian emergencies. Commodities purchased in the United States by U.S. agencies must travel the same logistics networks as commodities purchased regionally. For example, U.S. commodities destined for Afghanistan in 2002 were shipped from the United States to the Pakistani port at Karachi and moved to their final destination via roads in Pakistan and Afghanistan. Commodities purchased in Pakistan followed the same transit routes. Hence, the overland shipping costs, such as for trucking, were the same for U.S. origin commodities and Pakistani commodities. Further, regional cash purchases of food would be made by U.S. government officials or World Food Program (WFP) officials, the same officials that currently handle hundreds of millions of dollars in assistance funds and millions of metric tons of commodities; we are not suggesting that cash be provided to local governments. Any purchases would be subject to U.S. and UN accountability procedures, as such purchases are currently; increasing the amount of commodities purchased locally would not by itself create an opportunity for corruption. The following are GAO’s comments on the letter from the United States Agency for International Development dated June 6, 2003. strategy recognizes the importance of agriculture sector rehabilitation to the achievement of the U.S. policy goals in Afghanistan, including a politically stable state that is not a harbor for terrorists. 4. 
We agree that other authorities allow USAID to provide cash or purchase assistance commodities outside the United States. However, we believe that amending the Agricultural Trade Development and Assistance Act of 1954 to allow the provision of cash or food commodities outside the United States will greatly improve U.S. flexibility in responding to crises affecting U.S. national security and foreign policy interests. The act is the principal authority for providing food assistance in emergency and nonemergency situations. Amending the act will provide a permanent provision in this authority allowing the United States to respond rapidly and in a cost-effective manner to events that affect U.S. national security. USAID cites the recently proposed $200 million Famine Fund as providing the flexibility that the United States needs to address humanitarian crises. However, the fund proposal indicates that the fund will target dire unforeseen circumstances related to famine; thus, the fund does not appear to be designed to respond to nonfamine crises involving large amounts of food aid or national security. The fund amounts to less than 10 percent of the $2.2 billion and $2.6 billion appropriated for U.S. food aid in 2002 and 2003, respectively, a period marked by an increasing number of humanitarian food crises—for example, in Afghanistan, southern Africa, and North Korea—that did not entail famine but that did, in some cases, affect U.S. national security. The Famine Fund is inadequate to respond to the increasing number and size of such crises. Meanwhile, the availability of commodities in the United States for food assistance has declined in 2003. Therefore, the need to procure commodities overseas in close proximity to affected countries has become more critical while also being more cost effective. The following are GAO’s comments on the letter from the Department of Agriculture dated June 10, 2003. 1. 
Although other legislation allows for the provision of cash or assistance commodities from non-U.S. sources, we believe that amending the Agricultural Trade Development and Assistance Act of 1954 to allow the provision of cash or food commodities outside the United States will greatly improve U.S. flexibility in responding to crises that affect U.S. national security interests. The act is the principal authority for providing food assistance in emergency and nonemergency situations. Amending the act will provide a permanent provision in this authority allowing the United States to respond rapidly and in a cost effective manner to events that affect U.S. national security. In addition, although the proposed $200 million Famine Fund may provide some additional flexibility for responding to humanitarian crises, the fund proposal indicates that the fund will target dire unforeseen circumstances related to famine. Thus, the fund does not appear to be designed to respond to nonfamine crises involving large amounts of food aid or national security. The fund amounts to less than 10 percent of the $2.2 billion and $2.6 billion appropriated for U.S. food aid in 2002 and 2003, respectively, a period marked by an increasing number of humanitarian food crises—for example, in Afghanistan, southern Africa, and North Korea—that did not entail famine but that did, in some cases, affect U.S. national security. 2. We agree with the U.S. Department of Agriculture (USDA) that the cargo preference requirement adds additional cost to food assistance and should be waived in specific situations, and we have adjusted the matter for congressional consideration to reflect this. As stated in the report, 19.6 percent of total food assistance costs in fiscal year 2002 were for ocean freight. 
These costs were incurred because of the requirement that assistance commodities must be purchased in the United States, and 75 percent of the purchased commodities by weight must be shipped on U.S.-flagged carriers. In previous reports, we analyzed the costs of cargo preference requirements on food assistance and demonstrated the negative impact of these costs on U.S. food aid programs. WFP indicated that it had used the savings realized through the purchase of regional commodities versus U.S. commodities to procure additional commodities. Further, WFP has commodity quality control standards and would not purchase commodities with donor funds that were objectionable to the donor providing the funds. Finally, much of the wheat that was purchased in the United States was shipped in bulk to ports in Pakistan where it was bagged for final distribution in bags clearly marked "USA." Wheat purchased regionally with U.S. funds was packaged in Pakistan in the same type of bags. Thus, any regional purchases could be packaged in appropriately marked bags in the country of origin or at a bagging facility in a transit country. WFP uses this practice in other regions, such as southern Africa. 6. WFP made regional purchases during late 2001, but it also made regional purchases during 2002. As stated in the report, the amount of food available for food assistance in 2003 is less than in 2002, while the need for food aid continues to grow around the world, most notably in southern Africa. In addition, even if the U.S. grain infrastructure system is able to respond to ongoing demands for food aid, purchasing U.S. origin commodities and shipping the commodities via expensive ocean freight is not the most cost-effective or quickest means either of supplying food to hungry people or of achieving U.S. national security and foreign policy objectives, such as stability in Afghanistan. 7. We agree that the donor community faced challenges in engaging the Afghan government in 2002.
We believe that the mechanisms currently in place, including the Consultative Group coordination mechanism, provide an environment where the international community and the Afghan government can engage in a joint strategy development effort. 8. The report’s description of Afghanistan’s agriculture sector is based on discussions with and documents obtained from FAO, Asian Development Bank, USAID, and Afghan government officials. We have adjusted the language in the report in response to USDA’s comments. The following are GAO’s comments on the letter from the Department of Defense dated June 10, 2003. 1. The report discusses both food assistance and nonfood assistance aspects of the Humanitarian Daily Ration program. On page 30 of the report, we state that the HDR program was initiated to alleviate suffering and convey that the United States waged war against the Taliban, not the Afghan people. Also, the HDR program is included with the U.S. Agency for International Development’s humanitarian programs in U.S. government tallies of total humanitarian assistance provided to Afghanistan. 2. Department of Defense officials responsible for the administration of the HDR program stated that no formal evaluation of the HDR program in Afghanistan has been conducted. In the report, we cite the informal reporting that provided the Department of Defense with some information about how the program was received by the Afghan people. We have added information about the goodwill that the HDRs generated according to the informal reports cited by the Department of Defense in its comments on the draft report. 3. The report describes how HDRs are designed to be used—to relieve temporary food shortages resulting from manmade or natural disasters—not, as in Afghanistan, to feed a large number of people affected by a long-term food shortage. 
Further, as discussed in the report, the World Food Program (WFP) has worked in Afghanistan for many years, and during that period it developed an extensive logistics system for delivering food throughout the country. Even during the rule of the Taliban, WFP was able to deliver food to remote areas including those controlled by the Northern Alliance. During the month of December 2001, while the Department of Defense was delivering HDRs, WFP delivered 116,000 metric tons of food to Afghanistan, a level of food assistance that exceeds any 1-month total for any emergency operation in WFP's history. As stated in the report, WFP's logistics system was capable of delivering commodities to remote populations both by air or by donkey if necessary. In addition to the individuals named above, Jeffery T. Goebel, Paul Hodges, and Reid L. Lowe made key contributions to this report.
After the events of September 11, 2001 led to the defeat of the Taliban, the United States and the international community developed an assistance program to support Afghanistan's new government and its people. Key components of this effort include food and agricultural assistance. GAO was asked to assess (1) the impact, management, and support of food assistance to Afghanistan and (2) the impact and management of agricultural assistance to Afghanistan, as well as obstacles to achieving food security and political stability. The emergency food assistance that the United States and the international community provided from January 1999 through December 2002 helped avert famine by supplying millions of beneficiaries with about 1.6 million tons of food. However, the inadequacy of the international community's financial and in-kind support of the World Food Program's (WFP) appeal for assistance disrupted the provision of food assistance throughout 2002. Because of a lack of resources, WFP reduced the amount of food rations provided to returning refugees from 150 kilograms to 50 kilograms. Meanwhile, as a result of the statutory requirement that U.S. agencies providing food assistance purchase U.S.-origin commodities and ship them on U.S.-flag vessels, assistance costs and delivery times were higher by $35 million and 120 days, respectively, than if the United States had provided WFP with cash or regionally produced commodities. Had the U.S. assistance been purchased regionally, an additional 685,000 people could have been fed for 1 year. The livelihood of 85 percent of Afghanistan's approximately 26 million people depends on agriculture.
Over 50 percent of the gross domestic product and 80 percent of export earnings have historically come from agriculture. Over the 4-year period, because of continued conflict and drought, the international community provided primarily short-term agricultural assistance such as tools and seed. As a result, the assistance did not significantly contribute to the reconstruction of the agricultural sector. In 2002, agricultural assistance was not adequately coordinated with the Afghan government; a new coordination mechanism was established in December 2002, but it is too early to determine its effectiveness. As a result of the weak coordination, the Afghan government and the international community have not developed a joint strategy to direct the overall agricultural rehabilitation effort. Meanwhile, inadequate assistance funding, continuing terrorist attacks, warlords' control of much of the country, and the growth of opium production threaten the recovery of the agricultural sector and the U.S. goals of food security and political stability in Afghanistan.
DOD is one of the largest and most complex organizations in the world, and is entrusted with more taxpayer dollars than any other federal department or agency. For fiscal year 2013, the department requested approximately $613.9 billion—$525.4 billion in spending authority for its base operations and an additional $88.5 billion to support overseas contingency operations, such as those in Iraq and Afghanistan. In support of its military operations, DOD performs an assortment of interrelated and interdependent business functions, such as logistics management, procurement, health care management, and financial management. As we have previously reported, the DOD systems environment that supports these business functions is overly complex and error prone, and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be entered manually into multiple systems. The department recently requested about $17.2 billion for its business systems environment and IT infrastructure investments for fiscal year 2013. According to the department's systems inventory, this environment is composed of about 2,200 business systems and includes 310 financial management, 724 human resource management, 580 logistics, 254 real property and installation, and 287 weapon acquisition management systems. DOD currently bears responsibility, in whole or in part, for 14 of the 30 areas across the federal government that we have designated as high risk. Seven of these areas are specific to the department, and 7 other high-risk areas are shared with other federal agencies. Collectively, these high-risk areas relate to DOD's major business operations that are inextricably linked to the department's ability to perform its overall mission. Furthermore, the high-risk areas directly affect the readiness and capabilities of U.S. military forces and can affect the success of a mission.
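The dollar figures and inventory counts above are internally consistent, as a quick check shows. This is a minimal sketch; the amounts and counts are taken directly from the text, and the five categories listed account for 2,155 of the roughly 2,200 systems (the text says the inventory "includes" these categories, so they need not be exhaustive).

```python
# Sanity-check the fiscal year 2013 budget request (figures in billions
# of dollars) and the business systems inventory counts quoted above.
base_operations = 525.4
overseas_contingency = 88.5
total_request = base_operations + overseas_contingency
print(f"Total FY2013 request: ${total_request:.1f} billion")  # 613.9

# Inventory counts by category, per DOD's systems inventory.
inventory = {
    "financial management": 310,
    "human resource management": 724,
    "logistics": 580,
    "real property and installation": 254,
    "weapon acquisition management": 287,
}
total_systems = sum(inventory.values())
print(f"Counted systems: {total_systems}")  # 2155, part of "about 2,200"
```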
In particular, the department’s nonintegrated and duplicative systems impair its ability to combat fraud, waste, and abuse. As such, DOD’s business systems modernization is one of the department’s specific high-risk areas and is an essential enabler in addressing many of the department’s other high-risk areas. For example, modernized business systems are integral to the department’s efforts to address its financial, supply chain, and information security management high-risk areas. The department’s approach to modernizing its business systems environment includes developing and using a BEA and associated enterprise transition plan, improving business systems investment management, and reengineering the business processes supported by its defense business systems. These efforts are guided by DOD’s Chief Management Officer and Deputy Chief Management Officer (DCMO). The Chief Management Officer’s responsibilities include developing and maintaining a departmentwide strategic plan for business reform and establishing performance goals and measures for improving and evaluating overall economy, efficiency, and effectiveness, and monitoring and measuring the progress of the department. The DCMO’s responsibilities include recommending to the Chief Management Officer methodologies and measurement criteria to better synchronize, integrate, and coordinate the business operations to ensure alignment in support of the warfighting mission. The DCMO is also responsible for developing and maintaining the department’s enterprise architecture for its business mission area. The DOD Chief Management Officer and DCMO are to interact with several entities to guide the direction, oversight, and execution of DOD’s business transformation efforts, which include business systems modernization. 
These entities include the Defense Business Systems Management Committee, which is intended to serve as the department’s highest-ranking investment review and decision-making body for business systems programs and is chaired by the Deputy Secretary of Defense. The committee’s composition includes the principal staff assistants, defense agency directors, DOD Chief Information Officer (CIO), and military department Chief Management Officers. Table 1 describes key DOD business systems modernization governance entities and their composition. Since 2005, DOD has employed a “tiered accountability” approach to business systems modernization. Under this approach, responsibility and accountability for business architectures and systems investment management are assigned to different levels in the organization. For example, the DCMO is responsible for developing the corporate BEA (i.e., the thin layer of DOD-wide policies, capabilities, standards, and rules) and the associated enterprise transition plan. Each component is responsible for defining a component-level architecture and transition plan associated with its own tiers of responsibility and for doing so in a manner that is aligned with (i.e., does not violate) the corporate BEA. Similarly, program managers are responsible for developing program- level architectures and plans and for ensuring alignment with the architectures and transition plans above them. This concept is to allow for autonomy while also ensuring linkages and alignment from the program level through the component level to the corporate level. Consistent with the tiered accountability approach, the NDAA for Fiscal Year 2008 required the Secretaries of the military departments to designate the department Under Secretaries as Chief Management Officers with primary responsibility for business operations. 
Moreover, the Duncan Hunter NDAA for Fiscal Year 2009 required the military departments to establish business transformation offices to assist their Chief Management Officers in the development of comprehensive business transformation plans. The military departments have designated their respective Under Secretaries as the Chief Management Officers. In addition, the Department of the Navy (DON) and Army have issued business transformation plans. Air Force officials have stated that the department's corporate Strategic Plan also serves as its business transformation plan. DOD's BEA is intended to serve as a blueprint for DOD business transformation. In particular, the BEA is to guide and constrain implementation of interoperable defense business systems by, among other things, documenting the department's business functions and activities, the information needed to execute its functions and activities, and the business rules, laws, regulations, and policies associated with its business functions and activities. According to DOD, the BEA is being developed using an incremental approach, where each new release addresses business mission area gaps or weaknesses based on priorities identified by the department. The department considers its current approach to developing the BEA both a "top-down" and "bottom-up" approach. Specifically, it focuses on developing content to support investment management and strategic decision making and oversight ("top-down") while also responding to department needs associated with supporting system implementation, system integration, and software development ("bottom-up"). The department's most recent BEA version (version 9.0), released in March 2012, focuses on documenting information associated with its 15 end-to-end business process areas. (See table 2 for a list and description of these business process areas.)
In particular, the department's most recent Strategic Management Plan has identified the Hire-to-Retire and Procure-to-Pay business process areas as its priorities. According to the department, the process of documenting the needed architecture information also includes working to refine and streamline each of the associated end-to-end business processes. In addition, DOD's approach to developing its BEA involves the development of a federated enterprise architecture. Such an approach treats the architecture as a family of coherent but distinct member architectures that conform to an overarching architectural view and rule set. This approach recognizes that each member of the federation has unique goals and needs, as well as common roles and responsibilities with the levels above and below it. Under a federated approach, member architectures are substantially autonomous, although they also inherit certain rules, policies, procedures, and services from higher-level architectures. As such, a federated architecture gives autonomy to an organization's components while ensuring enterprisewide linkages and alignment where appropriate. Where commonality among components exists, there are also opportunities for identifying and leveraging shared services. Figure 1 provides a conceptual overview of DOD's federated BEA approach. The certification of business system investments is a key step in DOD's IT investment selection process that the department has aimed to model after GAO's Information Technology Investment Management (ITIM) framework. While defense business systems with a total cost over $1 million are required, as of June 2011, to use the Business Capability Lifecycle, a streamlined process for acquiring systems, these systems are also subject to the formal review and certification process through the investment review boards (IRBs) before funds are obligated for them. Under DOD's current approach to certifying investments, there are several types of certification actions, as follows:
Certify or certify with conditions: An IRB certifies the modernization as fully meeting criteria defined in the act and IRB investment review guidance (certify) or imposes specific conditions to be addressed by a certain time (certify with conditions).
Recertify or recertify with conditions: An IRB certifies the obligation of additional modernization funds for a previously certified modernization investment (recertify) or imposes additional related conditions to the action (recertify with conditions).
Decertify: An IRB may decertify or reduce the amount of modernization funds available to an investment when (1) a component reduces funding for a modernization by more than 10 percent of the originally certified amount, (2) the period of certification for a modernization is shortened, or (3) the entire amount of funding is not to be obligated as previously certified. An IRB may also decertify a modernization after development has been terminated or if previous conditions assigned by the IRB are not met.
Congress included provisions in the act, as amended, that are aimed at ensuring DOD's development of a well-defined BEA and associated enterprise transition plan, as well as the establishment and implementation of effective investment management structures and processes. The act requires DOD to develop a BEA and an enterprise transition plan for implementing the BEA; identify each business system proposed for funding in DOD's fiscal year budget submissions; delegate the responsibility for business systems to designated approval authorities; establish an investment review structure and process; and not obligate appropriated funds for a defense business system program with a total cost of more than $1 million unless the approval authority certifies that the business system program meets specified conditions.
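The certification actions described above can be sketched as a small state model. This is an illustrative sketch only: the enum values restate the report's action types, but the function name and its parameters are hypothetical and are not part of any DOD system.

```python
# Illustrative model of the IRB certification actions described in the
# text. Names here are the author's own, not DOD terminology beyond the
# action labels themselves.
from enum import Enum

class CertificationAction(Enum):
    CERTIFY = "certify"
    CERTIFY_WITH_CONDITIONS = "certify with conditions"
    RECERTIFY = "recertify"
    RECERTIFY_WITH_CONDITIONS = "recertify with conditions"
    DECERTIFY = "decertify"

def may_decertify(funding_cut_pct: float, period_shortened: bool,
                  funds_unobligated: bool, development_terminated: bool,
                  conditions_unmet: bool) -> bool:
    """Return True if any decertification trigger described in the
    text applies to a previously certified modernization."""
    return (funding_cut_pct > 10.0      # funding cut by more than 10 percent
            or period_shortened          # certification period shortened
            or funds_unobligated         # funds not obligated as certified
            or development_terminated    # development has been terminated
            or conditions_unmet)         # prior IRB conditions not met

# A component that cut modernization funding by 15 percent of the
# originally certified amount is a candidate for decertification.
print(may_decertify(15.0, False, False, False, False))  # True
```

The point of the sketch is that decertification is trigger-driven: any one of the listed conditions suffices.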
The act also requires that the Secretary of Defense annually submit to the congressional defense committees a report on the department's compliance with the above provisions. In addition, the act sets forth the following responsibilities: the DCMO is responsible and accountable for developing and maintaining the BEA, as well as integrating business operations; the CIO is responsible and accountable for the content of those portions of the BEA that support DOD's IT infrastructure or information assurance activities; the Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible and accountable for the content of those portions of the BEA that support DOD's acquisition, logistics, installations, environment, or safety and occupational health activities; the Under Secretary of Defense (Comptroller) is responsible and accountable for the content of those portions of the BEA that support DOD's financial management activities or strategic planning and budgeting activities; and the Under Secretary of Defense for Personnel and Readiness is responsible and accountable for the content of those portions of the BEA that support DOD's human resource management activities. Between 2005 and 2008, we reported that DOD had taken steps to comply with key requirements of the NDAA relative to architecture development, transition plan development, budgetary disclosure, and investment review, and to satisfy relevant systems modernization management guidance. However, each report also concluded that much remained to be accomplished relative to the act's requirements and relevant guidance, and we made recommendations to address each of these areas. In May 2009, we reported that the pace of DOD's efforts in defining and implementing key institutional modernization management controls had slowed compared with progress made in each of the previous 4 years, leaving much to be accomplished to fully implement the act's requirements and related guidance.
In addition, between 2009 and 2011, we found that long-standing challenges we previously identified remained to be addressed. The corporate BEA had yet to be extended (i.e., federated) to the entire family of business mission area architectures, and the military departments had yet to address key enterprise architecture management practices and develop important content. Budget submissions included some, but omitted other, key information about business system investments, in part because of the lack of a reliable, comprehensive inventory of all defense business systems. The business system information used to support the development of the transition plan and DOD’s budget requests, as well as certification and annual reviews, was of questionable reliability. DOD and the military departments had not fully defined key practices (i.e., policies and procedures) related to effectively performing both project-level (Stage 2) and portfolio-based (Stage 3) investment management as called for in the ITIM. Business system modernizations costing more than $1 million continued to be certified and approved, but these decisions were not always based on complete information. Further, we concluded that certification and approval decisions may not be sufficiently justified because investments were certified and approved without conditions even though our prior reports had identified program weaknesses that were unresolved at the time of certification and approval. Accordingly, we reiterated existing recommendations and made additional recommendations to address each of these areas. DOD partially agreed with our recommendations and described actions being planned or under way to address them. Nonetheless, DOD’s business systems modernization efforts remain on our high-risk list due in part to issues such as those described above. 
Furthermore, in 2011, we reported that none of the military department enterprise architecture programs had fully satisfied the requirements of our Enterprise Architecture Management Maturity Framework and recommended that they each develop a plan to do so. Our recommendation further stated that if any department did not plan to address any element of our framework, that department should include a rationale for determining why the element was not applicable. DOD and Army concurred, and Air Force and DON did not. In this regard, DOD stated that Air Force and DON did not have a valid business case that would justify the implementation of all of our framework elements. However, Air Force and DON did not address why the elements called for by our recommendation should not be developed. Further, Army officials stated that the department had not yet issued a plan. To date, none of the military departments have addressed our recommendation (GAO-11-278). These management controls are vital to ensuring that DOD can effectively and efficiently manage an undertaking with the size, complexity, and significance of its business systems modernization, and minimize the associated risks. DOD continues to take steps to comply with the provisions of the Ronald W. Reagan NDAA for Fiscal Year 2005, as amended, and to satisfy relevant system modernization management guidance. However, despite undertaking activities to address NDAA requirements and its future vision, the department has yet to demonstrate significant results. Specifically, DOD has updated its BEA and is beginning to modernize its corporate business processes, but the architecture is still not federated through development of aligned subordinate architectures for each of the military departments, and it still does not include common definitions for key terms and concepts to help ensure that the respective portions of the architecture will be properly linked and aligned.
In addition, DOD has not included all business system investments in its fiscal year 2013 budget submission, due in part to an unreliable inventory of all defense business systems. The department has made limited progress regarding investment management policies and procedures and has not yet established the new organizational structure and guidance that DOD has reported will address statutory requirements. In addition, while DOD implemented a business process reengineering (BPR) review process, the department is not measuring and reporting its results. DOD also continues to describe certification actions for its business system investments based on limited information, and it has fewer staff than it identified as needed to execute its responsibilities for business systems modernization. Specifically, the office of the DCMO, which took over these responsibilities from another office that was disestablished in 2011, reported that it had filled only 82 of its planned 139 positions, with 57 positions vacant. DOD's limited progress in developing and implementing its federated BEA, investment management policies and procedures, and our related recommendations is due in part to the roles and responsibilities of key organizations and senior leadership positions being largely undefined. Furthermore, the impact of DOD's efforts to reengineer its end-to-end business processes has yet to be measured and reported, and efforts to execute needed activities are limited by challenges in staffing the office of the DCMO. Until the long-standing institutional modernization management controls provided for under the act, addressed in our recommendations, and otherwise called for in best practices are fully implemented, it is likely that the department's business systems modernization will continue to be a high-risk program.
Among other things, the act requires DOD to develop a BEA that would cover all defense business systems and their related functions and activities and that would enable the entire department to (1) comply with all federal accounting, financial management, and reporting requirements and (2) routinely produce timely, accurate, and reliable financial information for management purposes. The BEA should also include policies, procedures, data standards, and system interface requirements that are to be applied throughout the department. In addition, the NDAA for Fiscal Year 2012 added requirements that the BEA include, among other things, performance measures that are to apply uniformly throughout the department and a target defense business systems computing environment for each of DOD's major business processes. Furthermore, the act requires a BEA that extends to (i.e., federates) all defense organizational components and requires that each military department develop a well-defined enterprisewide business architecture and transition plan. According to DOD, achieving its vision for a federated business environment requires, among other things, creating an overarching taxonomy and associated ontologies that can effectively map the complex interactions and interdependencies of the department's business environment. Such a taxonomy and ontologies will provide the various components of the federated BEA with the structure and common vocabularies to help ensure that their respective portions of the architecture will be properly aligned and coordinated. In April 2011, DOD provided additional guidance that calls for the use of ontologies for federating the BEA and asserting systems compliance. In addition, DOD guidance states that, because of the interrelationship among models and across architecture efforts, it is useful to define an overarching taxonomy with common definitions for key terms and concepts in the development of the architecture.
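The federation problem a common taxonomy solves, namely different components labeling the same architectural data differently, can be illustrated with a toy mapping. This is not DOD's actual taxonomy; all terms and data values below are invented for illustration.

```python
# Toy illustration: two component architectures label the same concept
# differently, and a shared taxonomy maps each local term to one
# canonical term so the views can be aligned and compared consistently.
canonical_terms = {
    # local term          -> overarching taxonomy term
    "vendor":                "supplier",
    "supplier":              "supplier",
    "purchase_req":          "purchase_request",
    "procurement_request":   "purchase_request",
}

def align(component_model: dict) -> dict:
    """Rewrite a component's data labels into the shared vocabulary."""
    return {canonical_terms.get(k, k): v for k, v in component_model.items()}

# Hypothetical component views using different local vocabularies.
army_view = {"vendor": "ACME Corp", "purchase_req": "PR-001"}
navy_view = {"supplier": "ACME Corp", "procurement_request": "PR-002"}

# After alignment, both views use the same keys and can be compared.
print(align(army_view).keys() == align(navy_view).keys())  # True
```

Without the shared mapping, a query for "supplier" data would silently miss the component that calls the same concept "vendor", which is the lesson-learned the federation pilots surfaced.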
The need for such a taxonomy and associated ontologies was derived from lessons learned from federation pilots conducted within the department that showed that federation of architectures was made much more difficult because of the use of different definitions to represent the same architectural data. In addition, we have previously reported that defining and documenting roles and responsibilities is critical to the success of enterprise architecture efforts. More specifically, our Enterprise Architecture Management Maturity Framework calls for a corporate policy that identifies the major players associated with enterprise architecture development, maintenance, and use and provides for a performance and accountability framework that identifies each player’s roles, responsibilities, and relationships and describes the results and outcomes for which each player is responsible and accountable. In 2009, we reported that the then-current version of the BEA (version 6.0) addressed, to varying degrees, missing elements, inconsistencies, and usability issues that we previously identified, but that gaps still remained. In March 2012, DOD released BEA version 9.0, which continues to address the act’s requirements. For example, version 9.0 organizes BEA content around its end-to-end business processes and adds content associated with these processes; for instance, version 9.0 added the "Accept Purchase Request" subprocess and placed this subprocess in the context of its Procure-to-Pay end-to-end business process.
In addition, the Hire-to-Retire end-to-end business process includes the subprocess "Manage Benefits," which is linked to over 1,200 laws, regulations, and policies, as well as 11 subordinate business activities, such as "Manage Retirement Benefits." As a result, users can navigate the BEA to identify relevant subprocesses for each end-to-end business process and determine the important laws, regulations, and policies, business capabilities, and business rules associated with a given business process. Version 9.0 also includes enterprise data standards for the Procure-to-Pay and Hire-to-Retire end-to-end business processes. Specifically, as part of the Procure-to-Pay end-to-end business process, enterprise standards for Procurement Data and Purchase Request Data were added. In addition, for the Hire-to-Retire end-to-end business process, DOD updated the Common Human Resources Information Standards, which is a standard for representing common human resources management data concepts and requirements within the defense business environment. As a result, stakeholders can accelerate coordination and implementation of the high-priority end-to-end business processes and related statutory requirements. Version 9.0 also uses a standardized business process modeling approach to represent BEA process models. For example, the BEA uses the business process modeling notation standard to create a graphical representation of the "Accept Goods and Services" business process. Using a modeling approach assists DOD in its effort to eventually support automated queries of architecture information, including business models and authoritative data, to verify investment compliance and validate system solutions. Finally, version 9.0 includes performance measures and milestones for initiatives in DOD’s Strategic Management Plan and relates the end-to-end business processes and operational activities documented in the BEA to the plan’s initiatives and performance measures.
For example, the BEA identifies that the Procure-to-Pay end-to-end business process is related to the Strategic Management Plan’s measure to determine the percentage of contract obligations competitively awarded. This is important for meeting the act’s new requirements associated with performance measures and to enable traceability of BEA content to the Strategic Management Plan. DOD has defined a federated approach to its BEA that is to provide overarching governance across all business systems, functions, and activities within DOD. This approach involves the use of semantic web technologies to provide visibility across its respective business architecture efforts. Specifically, this approach calls for the use of non-proprietary, open standards and protocols to develop DOD architectures to allow users to, among other things, locate and analyze needed architecture information across the department. Among other things, DOD’s approach calls for the corporate BEA, each end-to-end business process area (e.g., Procure-to-Pay), and each DOD organization (e.g., Army) to establish a common vocabulary and for the programs and initiatives associated with these areas to use this vocabulary when developing their respective system and architecture products. However, in 2011, we reported that each of the military departments had taken steps to develop architectural content, but that none had well-defined architectures to guide and constrain its business transformation initiatives. Further, since May 2011, the BEA has yet to be federated through development of aligned subordinate architectures for each of the military departments. Specifically, DON reported that it has not made any significant changes to its BEA content. Army reported that it has adopted the end-to-end processes as the basis of the Army BEA, and Air Force reported that it has added additional architecture business process content and mapped some of this content to the end-to-end processes.
However, each has yet to fully satisfy the requirements of our Enterprise Architecture Management Maturity Framework. In addition, the BEA does not include other important content that will be needed for achieving the office of the DCMO’s vision for BEA federation. For example, while DOD has begun to develop a taxonomy that provides a hierarchical structure for classifying BEA information into categories, it has yet to develop an overarching taxonomy that identifies and describes all of the major terms and concepts for the business mission area. Further, version 9.0 does not include a systematic mechanism for evaluating and adding new taxonomy terms and rules for addressing ambiguous terms and descriptions. This is important since federation relies heavily on the use of a taxonomy to provide the structure to link and align enterprise architectures across the business mission area, thus enabling architecture federation. Without an overarching taxonomy, there is an increased risk of not finding the most relevant content, thereby making the BEA less useful for making informed decisions regarding portfolio management and implementation of business systems solutions. DOD has begun to define corporate BEA ontologies and is developing ontologies in the human resources management area and for the U.S. Transportation Command. However, BEA 9.0 does not include ontologies for all business mission domains and organizations. According to DOD officials, each domain and organization will develop its own ontology. This is important since ontologies promote a comprehensive understanding of data and their relationships. In addition, they enable DOD to implement automated queries of information and integrate information across the department.
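The kind of automated query an ontology enables can be sketched with subject-predicate-object triples, the basic data model behind the semantic web technologies DOD’s approach calls for. This is a hypothetical illustration in plain Python, not DOD’s tooling: the entities and the "SystemX" name are invented, and a real implementation would use standards such as RDF and SPARQL rather than lists and tuples.

```python
# Hypothetical sketch: architecture facts stored as subject-predicate-object
# triples, the data model underlying semantic web standards such as RDF.
triples = [
    ("Manage Benefits", "partOf", "Hire-to-Retire"),
    ("Manage Retirement Benefits", "partOf", "Manage Benefits"),
    ("Accept Purchase Request", "partOf", "Procure-to-Pay"),
    ("SystemX", "supports", "Manage Benefits"),   # invented system name
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern; None acts as a wildcard."""
    return [t for t in triples
            if subject in (None, t[0])
            and predicate in (None, t[1])
            and obj in (None, t[2])]

# Automated queries across integrated architecture content:
print(query(predicate="partOf", obj="Hire-to-Retire"))     # subprocesses of Hire-to-Retire
print(query(predicate="supports", obj="Manage Benefits"))  # systems supporting a subprocess
```

Because every component expresses its facts against the same vocabulary, a single query can traverse content contributed by different organizations, which is the integration benefit the report attributes to ontologies.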
However, DOD has yet to describe how military departments will be held accountable for executing tasks needed to be accomplished for establishing domain ontologies for their respective BEAs or whether these ontologies are also to be used for their respective corporate enterprise architecture efforts. Without these ontologies, there is an increased risk of not fully addressing the act’s requirements relating to integrating budget, accounting, and program information and systems and achieving DOD’s vision for a federated architecture. DOD officials acknowledged these issues and stated that future versions of the BEA will leverage semantic technologies to create and document a common vocabulary and associated ontology. However, the department has yet to describe how each of the relevant entities will work together in developing the needed taxonomy and ontology. In addition to describing certain content required to be in the BEA, as described earlier, the act assigns responsibility for developing portions of the BEA to various entities. The department has developed strategies that begin to document certain responsibilities associated with architecture federation. For example, the Global Information Grid Architecture Federation Strategy states that the DOD enterprise is responsible for establishing a governance structure for DOD architecture federation. The strategy also states that each mission area, such as the business mission area, is to develop and maintain mission area architectures, such as the BEA. However, given the many entities involved in BEA and DOD architecture federation, officials from the office of the DCMO have expressed concerns over who is accountable for achieving specific federation tasks and activities and how the new vision for BEA federation will be enforced. 
Another requirement of the NDAA for Fiscal Year 2005, as amended, is that DOD’s annual IT budget submission must include key information on each business system for which funding is being requested, such as the system’s precertification authority and designated senior official, the appropriation type and amount of funds associated with modernization and current services (i.e., operation and maintenance), and the associated Defense Business Systems Management Committee approval decisions. The department’s fiscal year 2013 budget submission includes a range of information for 1,657 business system investments, including the system’s name, approval authority, and appropriation type. The submission also identifies the amount of the fiscal year 2013 request that is for development and modernization versus operations and maintenance and notes the certification status (e.g., approved, approved with conditions, not applicable, and withdrawn) and the Defense Business Systems Management Committee approval date, where applicable. However, similar to prior budget submissions, the fiscal year 2013 budget submission does not reflect all business system investments. To prepare the submission, DOD relied on business system investment information (e.g., funds requested, mission area, and system description) that the components entered into the department’s system used to prepare its budget submission (SNAP-IT). In accordance with DOD guidance and according to DOD CIO officials, the business systems listed in SNAP-IT should match the systems listed in the Defense Information Technology Portfolio Repository (DITPR)—the department’s authoritative business systems inventory. However, the DITPR data provided by DOD in March 2012 included 2,179 business systems. Therefore, SNAP-IT did not reflect about 500 business systems that were identified in DITPR.
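The kind of repository mismatch described above can be surfaced with a simple set comparison. This is a hypothetical sketch: the system names are invented, and the real repositories (SNAP-IT and DITPR) are databases holding thousands of records, not Python sets.

```python
# Hypothetical sketch: systems listed in the authoritative inventory (DITPR)
# but missing from the budget-preparation system (SNAP-IT) indicate an
# incomplete budget submission.
ditpr = {"SysA", "SysB", "SysC", "SysD"}   # authoritative inventory (invented names)
snap_it = {"SysA", "SysB"}                 # systems reflected in the budget

missing_from_budget = ditpr - snap_it      # in the inventory, absent from the budget
orphaned_in_budget = snap_it - ditpr       # budget entries with no inventory record

print(sorted(missing_from_budget))  # ['SysC', 'SysD']
print(sorted(orphaned_in_budget))   # []
```

Applied to the March 2012 figures reported above, this comparison is what reveals the roughly 500 DITPR business systems not reflected in SNAP-IT.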
In 2009, we reported that the information in the SNAP-IT and DITPR data repositories was not consistent and, accordingly, recommended that DOD develop and implement plans for reconciling and validating the completeness and reliability of information in its two repositories, and include information on the status of these efforts in the department’s fiscal year 2010 report in response to the act. DOD agreed with the need to reconcile information between the two repositories and stated that it had begun to take actions to address this. In 2011, we reported that, according to the office of the DOD CIO, efforts to provide automated SNAP-IT and DITPR integration were delayed due to increased SNAP-IT requirements in supporting the fiscal year 2012 budget submission and ongoing reorganization efforts within the department. DOD officials also told us that the department planned to restart the process of integrating the two repositories beginning in the third quarter of fiscal year 2011. Since that time, DOD CIO officials have reiterated the department’s commitment to integrating the two repositories and taken steps toward achieving this end. For example, the officials stated that they have added a field to the DITPR repository that allows components to identify an individual system as a defense business system. These officials added that this change, once fully implemented, will be a key to providing automated DITPR and SNAP-IT integration. The Deputy DOD CIO (Resources) has also sent memoranda to specific DOD components identifying systems listed in DITPR that are not properly associated with systems identified in SNAP-IT and requesting that the components take action to address these inconsistencies.
Nevertheless, DOD CIO officials responsible for the DITPR and SNAP-IT repositories stated that efforts to integrate them continue to be limited by ongoing organizational changes and the time required to address new system requirements unrelated to integrating the repositories. For example, these officials cited slowdowns resulting from the recent disestablishment of DOD’s Networks and Information Integration organization, as well as time spent making adjustments to the SNAP-IT repository to accommodate new Office of Management and Budget reporting requirements. They added that all data are owned by the components and therefore it is ultimately the responsibility of the components to update their respective data. However, DOD has not established a deadline by which it intends to complete the integration of the repositories and validate the completeness and reliability of information. Until DOD has a reliable, comprehensive inventory of all defense business systems, it will not be able to ensure the completeness and reliability of the department’s IT budget submissions. Moreover, the lack of current and accurate information increases the risk of oversight decisions that are not prudent and justified. DOD has made limited progress in defining and implementing investment management policies and procedures as required by the act and addressed in our ITIM framework since our last review in 2011. In addition, while the department has reported its intent to implement a new organizational structure and guidance to address statutory requirements, this structure and guidance have yet to be established. DOD also continues to approve investments on the basis of BEA compliance assessments that have not been validated. Further, while DOD has conducted various BPR activities related to its business system investments and underlying business processes, the department has not yet begun to measure associated results.
Thus, the extent to which these efforts have streamlined and improved the efficiency of the underlying business processes remains uncertain. The act requires DOD to establish an IRB and investment management processes that are consistent with the investment management provisions of the Clinger-Cohen Act of 1996. As we have previously reported, organizations that satisfy Stages 2 and 3 of our ITIM framework have the investment selection, control, and evaluation governance structures, and the related policies, procedures, and practices, that are consistent with the investment management provisions of the Clinger-Cohen Act. We have used the framework in many of our evaluations, and a number of agencies have adopted it. In 2011, we reported that DOD had continued to establish investment management processes described in our ITIM framework but had not fully defined all key practices. For example, we reported that DOD had fully implemented two critical processes associated with capturing investment information and meeting business needs, and partially completed the Stage 2 critical process associated with instituting an investment board. However, the department had yet to address other critical processes, including those associated with selecting investments and providing investment oversight. Since 2011, DOD has not fully implemented any additional key practices. Furthermore, the military departments have made very little progress in addressing elements of our ITIM framework that we previously reported as unsatisfied. For example, in 2011, we reported that Air Force had implemented four key practices related to effectively managing investments as individual business system programs (Stage 2). The Air Force had also addressed a key practice associated with portfolio-level investment management (Stage 3): assigning responsibility for the development and modification of IT portfolio selection criteria.
However, it has not implemented any additional practices since that time. The Air Force has described its intent to change its IT investment management structure and form a new branch to lay the foundation for integrated, efficient IT portfolio management processes; however, according to Air Force officials, this office is not yet fully established and faces competing personnel issues within the department. Further, Air Force officials stated that they are working to update the department’s IT portfolio management and IT investment guidance, but the updates are not expected to be issued until November 2012. In 2011, we reported that DON had implemented four key practices related to effectively managing investments as individual business system programs (Stage 2) and one key practice related to managing IT investments as a portfolio of programs (Stage 3). Since that time, DON has not fully implemented any additional key practices. While the department demonstrated that it has documented policies and procedures related to establishing assessment standards to describe a program’s health (e.g., cost, schedule, and performance), these policies and procedures do not describe the enterprisewide IT investment board’s role in reviewing and making decisions based on this information. Such a description is important because the investment board has ultimate responsibility for making decisions about IT investments. In 2011, we reported that Army had implemented two key practices associated with capturing investment information. Specifically, it had established policies and procedures for collecting information about the department’s investments and had assigned responsibility for investment information collection and accuracy. These are activities associated with effectively managing investments as individual business system programs (Stage 2). 
However, with regard to managing IT investments as a portfolio of programs (Stage 3), the Army had not fully defined any of the five key practices. Further, since that time, the Army has not fully implemented any additional Stage 2 or Stage 3 practices. Army officials stated that the department has been focused on performing extensive portfolio reviews that are intended to inform many of the ITIM key practices and lead to updates of its investment management policies and procedures. As of April 2012, Army officials stated that the department had completed its first round of portfolio reviews. According to Army officials, the department has also worked to release its Business Systems Information Technology Implementation Plan, which is to provide details for its investment management strategy, due as part of the 2012 Army Campaign Plan; however, this plan has not yet been released. According to the department, the slow progress made on the investment management process at DOD and the military departments in the past year is due, in part, to the department’s activities to address the new NDAA for Fiscal Year 2012 requirements. Specifically, in April 2012, DOD reported that it was in the process of constituting a single IRB. According to DOD, this IRB is to replace the existing governance structure and is to be operational by October 2012. In addition, DOD reported that it intends to incrementally implement an expanded investment review process that analyzes business system investments using common decision criteria and establishes investment priorities while ensuring integration with the department’s budgeting process. The department has stated its intention to use our ITIM model to assess its ability to comply with its related investment selection and control requirements. 
Further, DOD officials stated that this new investment review process will encompass a portfolio-based approach to investment management that is to employ a structured methodology for classifying and assessing business investments in useful views across the department. DOD officials stated that an initial review of all systems requiring certification under the new NDAA requirements is also planned to be completed by the start of the new fiscal year. While the department has reported its intent to implement this new organizational structure and guidance to address statutory requirements and redefine the process by which the department selects, evaluates, and controls business systems investments, this structure and guidance have yet to be established. DOD officials stated that the process has not yet been completed because they want to make sure they consider the best approach for investment management going forward. Accordingly, DOD is taking a phased approach as described in the department’s congressional report, which it intends to fully implement by October 2012. While it is too soon to evaluate the department’s updated approach to business system investment management, we will further evaluate DOD’s progress in defining and implementing its updated investment review processes in our fiscal year 2013 report on defense business systems modernization. Until DOD redefines and implements its investment management processes by the established deadline and until the military departments make additional progress on their own investment management processes, it is unlikely that the thousands of DOD business system investments will be managed in a consistent, repeatable, and effective manner. Since 2005, DOD has been required to certify and approve all business system modernizations costing more than $1 million to ensure that they meet specific conditions defined in the act. This process includes asserting that an investment is compliant with the BEA. 
The department continues to approve investments on the basis of architecture compliance. However, the department’s policy and guidance associated with architecture compliance still do not call for compliance assertions to be validated, and officials agreed that not all of the compliance information has been validated. Department officials stated that some information associated with the compliance process has been validated, such as information associated with complying with DOD’s Standard Financial Information Structure, which is intended to provide a standard financial management data structure and uniformity throughout DOD in reporting on the results of operations. We have previously made recommendations that the department amend existing policy and requirements to explicitly call for such validation to occur. DOD agreed with our findings and recommendations and stated that it planned to assign validation responsibilities and issue guidance that described the methodology for performing validation activities. Nonetheless, the department has not yet addressed our recommendation. Among other things, BEA compliance is important for helping to ensure that DOD programs have been optimized to support DOD operations. However, as we have reported, without proper validation of compliance assertions, there is an increased risk that DOD will make business system investment decisions based on information that is inaccurate and unreliable. Under DOD’s vision for a semantic BEA, described previously in this report, officials have stated that compliance validations will be conducted automatically using specialized software tools as program architecture artifacts are developed. However, until DOD achieves its semantic BEA vision and addresses our prior recommendation, compliance assertions will continue to be unvalidated.
In addition to the requirement that covered business systems be certified and approved to be in compliance with the BEA, the act requires that the Chief Management Officer certify that these business systems have undergone appropriate BPR activities. BPR is an approach for redesigning the way work is performed to better support an organization’s mission and reduce costs. After considering an organization’s mission, strategic goals, and customer needs, reengineering focuses on improving an organization’s business processes. We have issued BPR guidance that, among other things, discusses the importance of having meaningful performance measures to assess whether BPR activities actually achieve intended results. In this regard, the act, as amended, identifies the intended results of BPR reviews, such as ensuring that the business process to be supported by the defense business system will be as streamlined and efficient as practicable and that the need to tailor commercial-off-the-shelf systems to meet unique requirements or incorporate unique interfaces has been eliminated or reduced to the maximum extent practicable. While DOD has conducted various BPR activities, including preparing BPR assessment guidance, conducting assessments to meet the act’s requirements, and performing other BPR efforts such as refining its end-to-end business processes, the department has not yet begun to measure associated results. The department’s BPR activities are summarized as follows: DOD issued interim guidance in April 2010 and final guidance in April 2011 to assist programs in addressing the act’s BPR requirement. This guidance describes the types of documentation required for systems seeking certification, including a standardized BPR assessment form, and illustrates the process for submitting documentation for review and approval. DOD’s final BPR guidance related to system certification generally comports with key practices described in our guidance.
For example, DOD’s guidance recognizes the importance of developing a clear problem statement and business case, analyzing the as-is and to-be environments, and developing a change management approach for implementing the new business process. Consistent with its guidance, DOD has begun to implement its BPR review process in an effort to meet the act’s requirements. Specifically, all systems in fiscal year 2011 submitted BPR assessment forms for review. In addition, the DCMO and military department Chief Management Officers are in the process of signing formal determinations that sufficient BPR was conducted with respect to each program. The department has also performed BPR to respond to specific needs that have been identified by departmental components and to refine its end-to-end business processes. For example, the Defense Commissary Agency, in cooperation with the Business Transformation Agency and now the office of the DCMO, used BPR to help formulate a future enterprise transition plan for the agency. In addition, DOD officials described activities to refine DOD’s debt management business process, which is part of the Budget-to-Report end-to-end process. The standardization of related business process models related to debt management led to updates in the latest BEA, which now provide tools that can be used to guide and constrain investments. While DOD has performed the BPR activities described above, the extent to which these efforts have streamlined and improved the efficiency of the underlying business processes remains uncertain because the department has yet to establish specific measures and report outcomes that align with the department’s efforts. For example, the department does not track information, such as the number of systems that have undergone material process changes or the number of interfaces reduced or eliminated as a result of BPR reviews. 
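Once the underlying data are captured, measures like those just described are straightforward to compute. The sketch below is a hypothetical illustration with invented systems and figures; it is not a DOD tracking tool.

```python
# Hypothetical sketch: per-review BPR results, from which summary measures
# (interfaces eliminated, systems with material process changes) roll up.
reviews = [
    {"system": "Sys1", "interfaces_before": 12, "interfaces_after": 7, "material_change": True},
    {"system": "Sys2", "interfaces_before": 4,  "interfaces_after": 4, "material_change": False},
]

# Total interfaces reduced or eliminated across all BPR reviews.
interfaces_eliminated = sum(r["interfaces_before"] - r["interfaces_after"] for r in reviews)

# Number of systems that underwent material process changes.
systems_changed = sum(1 for r in reviews if r["material_change"])

print(interfaces_eliminated, systems_changed)  # 5 1
```

The difficulty the report identifies is not the arithmetic but the absence of the per-review inputs; without them, no roll-up of BPR outcomes is possible.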
DOD officials noted that addressing these requirements has been challenging and measuring progress, such as the number of interfaces reduced, has not been a priority. However, until the department develops and reports on performance measures associated with the development of its end-to-end processes and their related BPR activities, the department and its stakeholders will not know the extent to which BPR is effectively streamlining and improving its end-to-end business processes as intended. Among other things, the act requires DOD to include, in its annual report to congressional defense committees, a description of specific actions the department has taken on each business system submitted for certification. As applicable in fiscal year 2011, the act required that modernization investments involving more than $1 million in obligations be certified by a designated approval authority as meeting specific criteria, such as whether or not the system is in compliance with DOD’s BEA and appropriate BPR efforts have been undertaken. Further, the act requires that the Defense Business Systems Management Committee approve each of these certifications. DOD’s annual report identifies that the Defense Business Systems Management Committee approved 198 actions to certify, decertify, or recertify defense business system modernizations. These 198 IRB certification actions represented a total of about $2.2 billion in modernization spending. Specifically, the annual report states that during fiscal year 2011, the Defense Business Systems Management Committee approved 58 unique certifications, 102 recertifications, and 38 decertifications—101 with and 97 without conditions. Examples of conditions associated with individual systems include conditions related to business process reengineering and BEA compliance.
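The fiscal year 2011 certification counts reported above can be cross-checked arithmetically: the three action types and the with/without-conditions split should each sum to the 198 approved actions. All figures below come from the report.

```python
# Cross-check of the fiscal year 2011 certification figures reported above.
certifications, recertifications, decertifications = 58, 102, 38
with_conditions, without_conditions = 101, 97

# Both breakdowns should total the 198 approved actions.
assert certifications + recertifications + decertifications == 198
assert with_conditions + without_conditions == 198
print("counts consistent: both breakdowns total 198 actions")
```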
While DOD has continued to report its certification actions, these actions have been based on limited information, such as unvalidated architecture compliance assertions, as discussed in the previous section. Until DOD addresses our prior recommendations, the department faces increased risk that it will not be able to effectively oversee its extensive business systems investments. Among other things, the act calls for the DCMO to be responsible and accountable for developing and maintaining the BEA, as well as integrating defense business operations. Although responsibility for these activities previously resided with the Business Transformation Agency, DOD announced the disestablishment of this agency in August 2010. In June 2011, we recommended that DOD expeditiously complete the implementation of the announced transfer of functions of the agency and provide specificity as to when and where these functions would be transferred. Subsequently, the DCMO defined an organizational structure consisting of a front office and six directorates and identified the staff resources it would need to fulfill its new responsibilities, which became effective in September 2011. However, the office reported that it has not yet filled many of the positions needed to execute these responsibilities. In particular, as of April 2012, the office reported that it had filled only 82 of its planned 139 positions, with 57 positions (41 percent) remaining unfilled. For example, it had filled only 12 of 43 positions within its Technology, Innovation, and Engineering Directorate, which, among other things, is responsible for developing the BEA. Further, only 10 of 19 positions within the Planning and Performance Management Directorate, 14 of 22 positions within its Business Integration Directorate, and 16 of 23 positions within its Investment and Acquisition Management Directorate had been filled.
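The reported staffing figures can be tallied the same way; every number below comes from the report (as of April 2012), and only the ratio computations are added.

```python
# Office of the DCMO staffing, as reported for April 2012.
planned, filled = 139, 82
vacant = planned - filled                      # 57 positions
vacancy_rate = round(100 * vacant / planned)   # 41 (percent)

directorates = {  # filled / planned per directorate, as reported
    "Technology, Innovation, and Engineering": (12, 43),
    "Planning and Performance Management": (10, 19),
    "Business Integration": (14, 22),
    "Investment and Acquisition Management": (16, 23),
}

# Directorate with the lowest fill rate (filled / planned).
worst = min(directorates, key=lambda d: directorates[d][0] / directorates[d][1])

print(vacant, vacancy_rate)  # 57 41
print(worst)                 # Technology, Innovation, and Engineering
```

The computation confirms the report's 41 percent vacancy figure and shows that the directorate responsible for developing the BEA is the most thinly staffed, at 12 of 43 positions.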
Table 3 identifies the key responsibilities of each DCMO organizational component as well as planned and actual staffing. Establishing a well-defined, federated BEA and modernizing DOD’s business systems and processes are critical to effectively improving the department’s business systems environment. The department is taking steps to establish such a business architecture and modernize its business systems and processes, but long-standing challenges remain. Specifically, while DOD had made progress in developing its corporate enterprise architecture, it has yet to be federated through the development of aligned subordinate architectures for each of the military departments. The department has also taken effective steps to establish an infrastructure for establishing a federated BEA, including documenting a vision for the BEA and developing content around its end-to-end business processes. However, the department’s ability to achieve its federated BEA vision is limited by the lack of common definitions for key terms and concepts to help ensure that each of the respective portions of the architecture will be properly linked and aligned, as well as by the absence of a policy that clarifies roles, responsibilities, and accountability mechanisms. In addition, information used to support the development of the DOD’s budget requests continues to be of questionable reliability and no deadline for validating reliable information has been set. DOD has also not implemented key practices from our ITIM framework since our last review in 2011. Further, while the department has begun taking steps to reengineer its business systems and processes, and has issued sound guidance for conducting BPR associated with individual business systems, it has yet to measure and report on the impact these efforts have had on streamlining and simplifying its corporate business processes. 
Finally, the efforts of the office of the DCMO have been impacted by having fewer staff than the office identified as needed to support departmentwide business systems modernization. Collectively, these limitations continue to put at risk the billions of dollars spent annually on about 2,200 business system investments that support DOD functions, such as departmentwide financial management and military personnel health care. Our previous recommendations to the department have been aimed at accomplishing these and other important activities related to its business systems modernization. While the department has agreed with these recommendations, its progress in addressing the act’s requirements, its vision for a federated architecture, and our related recommendations is limited, in part, by continued uncertainty surrounding the roles and responsibilities of key organizations and senior leadership positions. In light of this, it is essential that the Secretary of Defense issue a policy that resolves these issues, as doing so is necessary for the department to establish the full range of institutional management controls needed to address its business systems modernization high-risk area. It is equally important that DOD measure the impact of its BPR efforts and include information on the results of these efforts and its efforts to fully staff the office of the DCMO in the department’s annual report in response to the act. Because we have existing recommendations that address many of the institutional management control weaknesses discussed in this report, we reiterate those recommendations. 
In addition, to ensure that DOD continues to implement the full range of institutional management controls needed to address its business systems modernization high-risk area, we recommend that the Secretary of Defense ensure that the Deputy Secretary of Defense, as the department’s Chief Management Officer, establish a policy that clarifies the roles, responsibilities, and relationships among the Chief Management Officer, Deputy Chief Management Officer, DOD and military department Chief Information Officers, Principal Staff Assistants, military department Chief Management Officers, and the heads of the military departments and defense agencies, associated with the development of a federated BEA. Among other things, the policy should address the development and implementation of an overarching taxonomy and associated ontologies to help ensure that each of the respective portions of the architecture will be properly linked and aligned. In addition, the policy should address alignment and coordination of business process areas, military department and defense agency activities associated with developing and implementing each of the various components of the BEA, and relationships among these entities. To ensure that annual budget submissions are based on complete and accurate information, we recommend that the Secretary of Defense direct the appropriate DOD organizations to establish a deadline by which it intends to complete the integration of the repositories and validate the completeness and reliability of information. To facilitate congressional oversight and promote departmental accountability, we recommend that the Secretary of Defense ensure that the Deputy Secretary of Defense, as the department’s Chief Management Officer, direct the Deputy Chief Management Officer to include in DOD’s annual report to Congress on compliance with 10 U.S.C. § 2222, the results of the department’s BPR efforts. 
Among other things, the results should include the department’s determination of the number of systems that have undergone material process changes, the number of interfaces eliminated as part of these efforts (i.e., by program, by name), and the status of its end-to-end business process reengineering efforts, and an update on the office of the DCMO’s progress toward filling staff positions and the impact of any unfilled positions on the ability of the office to conduct its work. In written comments on a draft of this report, signed by the Deputy Chief Management Officer and reprinted in appendix II, the department partially concurred with our first recommendation, concurred with our second and third recommendations, and did not concur with the remaining recommendation. The department partially concurred with our first recommendation to establish a policy that clarifies the roles, responsibilities, and relationships among its various management officials associated with the development of a federated BEA. In particular, the department stated its belief that officials’ roles, relationships, and responsibilities are already sufficiently defined through statute, policy, and practice, and that additional guidance is not needed. However, the department added that it will continue to look for opportunities to strengthen and expand guidance, to include the new investment management and architecture processes. We do not agree that officials’ roles, relationships, and responsibilities are sufficiently defined in existing policy. For example, we found that DOD has not developed a policy that fully defines the roles, responsibilities, and relationships associated with developing and implementing the BEA. Moreover, in our view, responsibility and accountability for architecture federation will not be effectively addressed with additional guidance because guidance cannot be enforced. 
Rather, we believe a policy, which can be enforced, will more effectively establish responsibility and accountability for architecture federation. Without a policy, the department risks not moving forward with its vision for a federated architecture. Thus, we continue to believe our recommendation is warranted. The department concurred with our second recommendation, to establish a deadline by which it intends to complete the integration of the repositories and validate the completeness and reliability of information, and described commitments and actions being planned or under way. We support the department’s efforts to address our recommendation and reiterate the importance of following through in implementing the recommendation within the stated time frame. DOD also concurred with our third recommendation that the Deputy Secretary of Defense, as the department’s Chief Management Officer, direct the Deputy Chief Management Officer to include the results of the department’s BPR efforts in its annual report to Congress. However, the department stated that given the passage of the NDAA for Fiscal Year 2012, BPR authority now rests with the military department Chief Management Officers. As such, DOD stated that it would be appropriate for the recommendation to be directed to the BPR owners. We agree that the act requires the appropriate precertification authority for each covered business system to determine that appropriate BPR efforts have been undertaken. However, we disagree that our recommendation should be directed to the BPR owners. The recommendation is not intended to be prescriptive as to who should measure the impact of the BPR efforts. Rather, it calls for the reporting of the results of such efforts in the department’s annual report to Congress, which is prepared by the office of the DCMO under the department’s Chief Management Officer. 
The department did not concur with our fourth recommendation to provide an update on the office of the DCMO’s progress toward filling staff positions and the impact of any unfilled positions in its annual report to Congress. DOD stated that it does not believe that the annual report is the appropriate communication mechanism; however, it offered to provide us with an update. While we support the department’s willingness to provide us with an update, we, nonetheless, stand by our recommendation. The purpose of the annual report is to document the department’s progress in improving its business operations through defense business systems modernization. Thus, the potential for staffing shortfalls in the office of the DCMO to adversely impact the department’s progress should be communicated to the department’s congressional stakeholders as part of the report. Including information about the department’s progress in staffing the office that was recently established to be responsible for business systems modernization would not only facilitate congressional oversight, but also promote departmental accountability. We are sending copies of this report to the appropriate congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions on matters discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
As agreed with the congressional defense committees, our objective was to assess the Department of Defense’s (DOD) actions to comply with key aspects of section 332 of the National Defense Authorization Act (NDAA) for Fiscal Year 2005 (the act), as amended, 10 U.S.C. § 2222 and related federal guidance. These include (1) developing a business enterprise architecture (BEA) and a transition plan for implementing the architecture, (2) identifying systems information in its annual budget submission, (3) establishing a system investment approval and accountability structure along with an investment review process, and (4) certifying and approving any business system program costing in excess of $1 million. (See the background section of this report for additional information on the act’s requirements.) Our methodology relative to each of the four provisions is as follows: To address the architecture, we analyzed version 9.0 of the BEA, which was released on March 15, 2012, relative to the act’s specific architectural requirements and related guidance that our previous annual reports in response to the act identified as not being fully implemented. Specifically, we interviewed office of the Deputy Chief Management Officer (DCMO) officials and reviewed written responses and related documentation on steps completed, under way, or planned to address these weaknesses. We then reviewed architectural artifacts in BEA 9.0 to validate the responses and identify any discrepancies. We also determined the extent to which BEA 9.0 addressed 10 U.S.C. § 2222, as amended by the NDAA for Fiscal Year 2012. In addition, we analyzed documentation and interviewed knowledgeable DOD officials about efforts to establish a federated business mission area enterprise architecture. Further, we reviewed the military departments’ responses regarding actions taken or planned to address our previous recommendations on the maturity of their respective enterprise architecture programs. 
We did not determine whether the DOD Enterprise Transition Plan addressed the requirements specified in the act, because an updated plan was not released during the time we were conducting our audit work. See, for example, GAO-09-586 and GAO-11-684. To determine whether DOD’s fiscal year 2013 IT budget submission was prepared in accordance with the criteria set forth in the act, we reviewed and analyzed the Report on Defense Business System Modernization Fiscal Year 2005 National Defense Authorization Act, Section 332, dated March 2012, and compared it with the specific requirements in the act. We also compared information contained in the department’s system that is used to prepare its budget submission (SNAP-IT) with information in the department’s authoritative business systems inventory (DITPR) to determine if DOD’s fiscal year 2013 budget request included all business systems and assessed the extent to which DOD has made progress in addressing our related recommendation. In addition, we reviewed DOD’s budget submission to determine the extent to which it addresses 10 U.S.C. § 2222, as amended by the NDAA for Fiscal Year 2012. We also analyzed selected business system information contained in DITPR, such as system life cycle start and end dates, to validate the reliability of the information. We also interviewed officials from the office of DOD’s Chief Information Officer (CIO) to discuss the accuracy and comprehensiveness of information contained in the SNAP-IT system, the discrepancies in the information contained in the DITPR and SNAP-IT systems, and efforts under way or planned to address these discrepancies. To assess the establishment of DOD enterprise and component investment management structures and processes, we followed up on related weaknesses that our previous reports in response to the act have identified as not being fully implemented. 
Specifically, we interviewed the office of the DCMO and military department officials and reviewed written responses and related documentation on steps completed, under way, or planned to address these weaknesses. We also met with cognizant officials on steps taken to address new investment management requirements of the NDAA for Fiscal Year 2012. Further, we reviewed DOD’s most recent BEA compliance guidance to determine the extent to which it addressed our related open recommendations. Finally, we reviewed business process reengineering documentation provided to support assertions that modernization programs had undergone business process reengineering assessments. To determine whether the department was certifying and approving business system investments with annual obligations exceeding $1 million, we reviewed and analyzed all Defense Business Systems Management Committee certification approval memoranda. We also reviewed IRB certification memoranda issued prior to the Defense Business Systems Management Committee’s final approval decisions for fiscal year 2011. We contacted officials from the office of the DCMO and investment review boards to discuss any discrepancies. In addition, we discussed with officials from the office of the DCMO its plans for updating the investment review process consistent with requirements of the NDAA for Fiscal Year 2012 and obtained related documentation. To assess the office of the DCMO’s progress toward filling staff positions, we compared the number of authorized positions with the staff on board as of late April 2012; reviewed and analyzed related staffing documentation; and interviewed office of the DCMO officials about staffing. We did not independently validate the reliability of the cost and budget figures provided by DOD because the specific amounts were not relevant to our findings. 
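The budget-submission check described in the methodology (comparing system information in SNAP-IT with the authoritative DITPR inventory to see whether the budget request reflected all business systems) amounts to a set reconciliation between two system inventories. The sketch below illustrates this under invented assumptions: the record layout and system identifiers are hypothetical, as the actual SNAP-IT and DITPR schemas are not described in this report.

```python
# Hypothetical sketch of the SNAP-IT/DITPR comparison: find business systems
# present in the authoritative inventory (DITPR) but absent from the
# budget-submission extract (SNAP-IT), and vice versa. All identifiers,
# names, and fields below are invented for illustration.

ditpr = {  # authoritative business systems inventory: id -> attributes
    "SYS-001": {"name": "Payroll Modernization", "lifecycle_end": "2015-09-30"},
    "SYS-002": {"name": "Logistics ERP", "lifecycle_end": "2018-06-30"},
    "SYS-003": {"name": "Health Records System", "lifecycle_end": "2014-12-31"},
}

snap_it = {"SYS-001", "SYS-002"}  # systems reflected in the budget request

# Systems missing from the budget submission, and systems budgeted but
# absent from the authoritative inventory.
missing_from_budget = sorted(set(ditpr) - snap_it)
not_in_inventory = sorted(snap_it - set(ditpr))

print(missing_from_budget)  # ['SYS-003']
print(not_in_inventory)     # []
```

A basic data-reliability check of the kind the report mentions, such as flagging system life-cycle end dates that precede start dates, could be layered onto the same records.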
We conducted this performance audit at DOD offices in Arlington and Alexandria, Virginia, from September 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Neelaxi Lakhmani and Mark Bird, Assistant Directors; Debra Conner; Rebecca Eyler; Michael Holland; Anh Le; Donald Sebers; and Jennifer Stavros-Turner made key contributions to this report. | For decades, DOD has been challenged in modernizing its business systems. Since 1995, GAO has designated DOD's business systems modernization program as high risk, and it continues to do so today. To assist in addressing DOD's business system modernization challenges, the National Defense Authorization Act for Fiscal Year 2005 requires the department to take certain actions prior to obligating funds for covered systems. It also requires DOD to annually report to the congressional defense committees on these actions and for GAO to review each annual report. In response, GAO performed its annual review of DOD's actions to comply with the act and related federal guidance. To do so, GAO reviewed, for example, the latest version of DOD's business enterprise architecture, fiscal year 2013 budget submission, investment management policies and procedures, and certification actions for its business system investments. The Department of Defense (DOD) continues to take steps to comply with the provisions of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, as amended, and to satisfy relevant system modernization management guidance. 
While the department has initiated numerous activities aimed at addressing the act, it has been limited in its ability to demonstrate results. Specifically, the department released its most recent business enterprise architecture version, which continues to address the act's requirements and is consistent with the department's future vision for developing its architecture. However, the architecture has not yet resulted in a streamlined and modernized business systems environment, in part, because DOD has not fully defined the roles, responsibilities, and relationships associated with developing and implementing the architecture. The department included a range of information for 1,657 business system investments in its fiscal year 2013 budget submission; however, the submission does not reflect about 500 business systems, due in part to the lack of a reliable, comprehensive inventory of all defense business systems. The department has not implemented key practices from GAO's Information Technology Investment Management framework since GAO's last review in 2011. In addition, while DOD has reported its intent to implement a new organizational structure and guidance to address statutory requirements, this structure and guidance have yet to be established. Further, DOD has begun to implement a business process reengineering review process but has not yet measured and reported results. The department continues to describe certification actions in its annual report for its business system investments as required by the act: DOD approved 198 actions to certify, decertify, or recertify defense business system modernizations, representing a total of $2.2 billion in modernization spending. However, the basis for these actions and subsequent approvals is supported with limited information, such as unvalidated architectural compliance assertions. Finally, the department lacks the full complement of staff it identified as needed to perform business systems modernization responsibilities. 
Specifically, the office of the Deputy Chief Management Officer, which took over these responsibilities from another office in September 2011, reported that 41 percent of its positions were unfilled. DOD's progress in modernizing its business systems is limited, in part, by continued uncertainty surrounding the department's governance mechanisms, such as the roles and responsibilities of key organizations and senior leadership positions. Until DOD fully implements governance mechanisms to address these long-standing institutional modernization management controls provided for under the act, addressed in GAO recommendations, and otherwise embodied in relevant guidance, its business systems modernization will likely remain a high-risk program. GAO recommends that the Secretary of Defense take steps to strengthen the department's mechanisms for governing its business systems modernization activities. DOD concurred with two of GAO's recommendations and partially concurred with one, but did not concur with the recommendation that it report progress on staffing the office responsible for business systems modernization to the congressional defense committees. GAO maintains that including staffing progress information in DOD's annual report will facilitate congressional oversight and promote departmental accountability. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
While the 14 selected initiatives varied in terms of their purpose, sector, and partners involved, the boards and their partners cited common factors that facilitated and sustained collaboration. These were (1) a focus on urgent, common needs; (2) leadership; (3) the use of leveraged resources; (4) employer-responsive services; (5) minimizing administrative burden; and (6) results that motivated the partners to continue their collaboration. With regard to focusing on urgent, common needs, almost all of the collaborations grew out of efforts to address urgent workforce needs of multiple employers in a specific sector, such as health care, manufacturing, or agriculture, rather than focusing on individual employers (see table 1). The urgent needs ranged from a shortage of critical skills in health care and manufacturing to the threat of layoffs and business closures. In San Bernardino, California, for example, some companies were at risk of layoffs and closures because of declining sales and other conditions unless they received services that included retraining for their workers. In one case, employers in Gainesville, Florida, joined with the board and others to tackle the need to create additional jobs by embarking on an initiative to develop entrepreneurial skills. According to those we interviewed, by focusing on common employer needs across a sector, the boards and their partners produced innovative labor force solutions that, in several cases, had evaded employers who were trying to address their needs individually. In several cases, employers cited the recruitment costs they incurred by competing against each other for the same workers. By working together to develop the local labor pool they needed, the employers were able to reduce recruitment costs in some cases. 
Boards also facilitated collaboration by securing leaders who had the authority or the ability, or both, to persuade others of the merits of a particular initiative, as well as leaders whose perceived neutrality could help build trust. Officials from many initiatives emphasized the importance of having the right leadership to launch and sustain the initiative. For example, in Northern Virginia, a community college president personally marshaled support from area hospital chief executive officers and local leaders to address common needs for health care workers. Another factor that facilitated collaboration was the use of leveraged resources. All of the boards and their partners we spoke with launched or sustained their initiatives by leveraging resources in addition to or in lieu of WIA funds. In some cases, partners were able to use initial support, such as discretionary grants, to attract additional resources. For example, in Golden, Colorado, the board leveraged a Labor discretionary grant of slightly more than $285,000 to generate an additional $441,000 from other partners. In addition to public funds, in all cases that we reviewed, employers demonstrated their support by contributing cash or in-kind contributions. In all cases, boards and their partners provided employer-responsive services to actively involve employers and keep them engaged in the collaborative process. Some boards and their partners employed staff with industry-specific knowledge to better understand and communicate with employers. In other initiatives, boards and partners gained employers’ confidence in the collaboration by tailoring services such as jobseeker assessment and screening services to address specific employers’ needs. For example, a sector-based center in Chicago, Illinois, worked closely with employers to review and validate employers’ own assessment tools, or develop new ones, and administer them on behalf of the employers, which saved employers time in the hiring process. 
Boards and their partners also strengthened collaborative ties with employers by making training services more relevant and useful to them. In some cases, employers provided direct input into training curricula. For example, in Wichita, Kansas, employers from the aviation industry worked closely with education partners to develop a training curriculum that met industry needs and integrated new research findings on composite materials. Another way that some initiatives met employers’ training needs was to provide instruction that led to industry-recognized credentials. For example, in San Bernardino, a training provider integrated an industry-recognized credential in metalworking into its training program to make it more relevant for employers. Boards also made efforts to minimize administrative burden for employers and other partners. In some cases, boards and their partners streamlined data collection or developed shared data systems to enhance efficiency. For example, in Cincinnati, Ohio, the partners developed a shared data system to more efficiently track participants, services received, and outcomes achieved across multiple workforce providers in the region. Finally, partners remained engaged in these collaborative efforts because they continued to produce a range of results for employers, jobseekers and workers, and the workforce system and other partners, such as education and training providers. For employers, the partnerships produced diverse results that generally addressed their need for critical skills in various ways. In some cases, employers said the initiatives helped reduce their recruitment and retention costs. For example, in Cincinnati, according to an independent study, employers who participated in the health care initiative realized about $4,900 in cost savings per worker hired. For jobseekers and workers, the partnerships produced results that mainly reflected job placement and skill attainment. 
For example, in Wichita, of the 1,195 workers who were trained in the use of composite materials in aircraft manufacturing, 1,008 had found jobs in this field. For the workforce system, the partnerships led to various results, such as increased participation by employers in the workforce system, greater efficiencies, and models of collaboration that could be replicated. Specifically, officials with several initiatives said they had generated repeat employer business or that the number and quality of employers’ job listings had increased, allowing the workforce system to better serve jobseekers. While these boards were successful in their efforts, they cited some challenges to collaboration that they needed to overcome. Some boards were challenged to develop comprehensive strategies to address diverse employer needs with WIA funds. WIA prioritizes funding for intensive services and training for low-income individuals when funding for adult employment and training activities is limited. The director of one board said that pursuing comprehensive strategies for an entire economic sector can be challenging, because WIA funds are typically used for lower-skilled workers, and employers in the region wanted to attract a mix of lower- and higher-skilled workers. To address this challenge, the director noted that the board used a combination of WIA and other funds to address employers’ needs for a range of workers. Additionally, some boards’ staff said that while their initiatives sought to meet employer needs for skill upgrades among their existing workers, WIA funds can be used to train current workers only in limited circumstances, and the boards used other funding sources to do so. Among the initiatives that served such workers, the most common funding sources were employer contributions and state funds. In addition, staff from most, but not all, boards also said that WIA performance measures do not directly reflect their efforts to engage employers. 
Many of these boards used their own measures to assess their services to employers, such as the number of new employers served each year, the hiring rate for jobseekers they refer to employers, the interview-to-hire ratio from initiative jobseeker referrals, the retention rate of initiative-referred hires, the number of businesses returning for services, and employer satisfaction. In order to support local collaborations like these, Labor has conducted webinars and issued guidance on pertinent topics, and has also collaborated with other federal agencies in efforts that could help support local collaboration. For example, Labor is working with the Department of Education and other federal agencies to identify existing industry- recognized credentials and relevant research projects, and has issued guidance to help boards increase credential attainment among workforce program participants. In addition, Labor has recently worked with Commerce and the Small Business Administration to fund a new discretionary $37 million grant program called the Jobs and Innovation Accelerator Challenge to encourage collaboration and leveraging funds. Specifically, this program encourages the development of industry clusters, which are networks of interconnected firms and supporting institutions that can help a region create jobs. A total of 16 federal agencies will provide technical resources to help leverage existing agency funding, including the 3 funding agencies listed above. While Labor has taken some steps to support local collaborations, it has not made information it has collected on effective practices for leveraging resources easily accessible, even though many of the boards we reviewed cited leveraging resources as a key to facilitating collaboration. For example, Labor maintains a website for sharing innovative state and local workforce practices called Workforce3One, which has some examples of leveraging funding at the local level. 
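The boards' own employer-service measures (hiring rate for referred jobseekers, interview-to-hire ratio, retention rate of referred hires) lend themselves to straightforward computation from referral records. The sketch below is purely illustrative; the record fields and values are invented, as no board's actual data is shown in this testimony.

```python
# Illustrative computation of employer-service measures like those the
# boards described: hiring rate, interview-to-hire ratio, and retention
# rate. The referral records and field names are invented assumptions.

referrals = [
    {"interviewed": True,  "hired": True,  "retained_90_days": True},
    {"interviewed": True,  "hired": False, "retained_90_days": False},
    {"interviewed": False, "hired": False, "retained_90_days": False},
    {"interviewed": True,  "hired": True,  "retained_90_days": False},
]

hires = sum(r["hired"] for r in referrals)
interviews = sum(r["interviewed"] for r in referrals)

hiring_rate = hires / len(referrals)          # share of referrals hired
interviews_per_hire = interviews / hires      # interview-to-hire ratio
retention_rate = sum(r["retained_90_days"] for r in referrals) / hires

print(hiring_rate, interviews_per_hire, retention_rate)  # 0.5 1.5 0.5
```

Measures such as the number of new employers served or of businesses returning for services would be simple counts over comparable employer-level records.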
However, the website does not group these examples together in an easy-to-find location, as it does for other categories, such as examples of innovative employer services or sector-based strategies. Moreover, although certain evaluations and other research reports have included information on leveraging resources, this information has not been compiled and disseminated in one location. In conclusion, at a time when the nation continues to face high unemployment, it is particularly important to consider ways to better connect the workforce investment system with employers to meet local labor market needs. The 14 local initiatives that we reviewed illustrate how workforce boards collaborated with partners to help employers meet their needs and yielded results: critical skill needs were met, individuals obtained or upgraded their skills, and the local system of workforce programs was reinvigorated by increased employer participation. Labor has taken several important steps that support local initiatives like the ones we reviewed through guidance and technical assistance, and through collaborative efforts with other federal agencies. However, while Labor has also collected relevant information on effective strategies that local boards and partners have used to leverage resources, it has not compiled this information or made it readily accessible. As the workforce system and its partners face increasingly constrained resources, it will be important for local boards to have at their disposal information on how boards have effectively leveraged funding sources. In our report, we recommended that Labor compile information on workforce boards that effectively leverage WIA funds with other funding sources and disseminate this information in a readily accessible manner. In its comments on our draft report, Labor agreed with our recommendation and noted its plans to implement it. This concludes my prepared statement. 
I would be happy to answer any questions that you or other members of the subcommittee may have. For further information regarding this testimony, please contact Andrew Sherrill at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Laura Heald (Assistant Director), Chris Morehouse, Jessica Botsford, Jean McSween, and David Chrisinger. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses collaboration between workforce boards, employers, and others. As the United States continues to face high unemployment in the wake of the recent recession, federally funded workforce programs can play an important role in bridging gaps between the skills present in the workforce and the skills needed for available jobs. However, there is growing recognition that these programs need to better collaborate with employers to align services and training with employers' needs. The Workforce Investment Act of 1998 (WIA) envisioned such collaboration by focusing on employers as well as jobseekers, establishing a dual-customer approach. To create a single, comprehensive workforce investment system, WIA required that 16 programs administered by four federal agencies (the Departments of Labor (Labor), Education, Health and Human Services, and Housing and Urban Development) provide access to their services through local one-stop centers, where jobseekers, workers, and employers can find assistance at a single location. 
In addition, WIA sought to align federally funded workforce programs more closely with local labor market needs by establishing local workforce investment boards to develop policy and oversee service delivery for local areas within a state and required that local business representatives constitute the majority membership on these boards. Today, about 600 local workforce boards oversee the service delivery efforts of about 1,800 one-stop centers that provide access to all required programs. Despite the vision of collaboration between local employers and the workforce investment system, we and others have found that collaboration can be challenging. For example, in previous reports, we found that some employers have limited interaction with or knowledge of this system and that employers who do use the one-stop centers mainly do so to fill their needs for low-skilled workers. This testimony is based on our report, which was released yesterday, entitled Workforce Investment Act: Innovative Collaborations between Workforce Boards and Employers Helped Meet Local Needs. Workforce board officials and their partners in the 14 initiatives cited a range of factors that facilitated building innovative collaborations. Almost all of the collaborations grew out of efforts to address urgent workforce needs of multiple employers in a specific sector, rather than focusing on individual employers. The partners in these initiatives made extra effort to engage employers so they could tailor services such as jobseeker assessment, screening, and training to address specific employer needs. In all the initiatives, partners remained engaged in these collaborations because they continued to produce a wide range of reported results, such as an increased supply of skilled labor, job placements, reduced employer recruitment and turnover costs, and averted layoffs. While these boards were successful in their efforts, they cited some challenges to collaboration that they needed to overcome. 
Some boards were challenged to develop comprehensive strategies to address diverse employer needs with WIA funds. For example, some boards’ staff said that while their initiatives sought to meet employer needs for higher-skilled workers through skill upgrades, WIA funds can be used to train current workers only in limited circumstances, and the boards used other funding sources to do so. Staff from most, but not all, boards also said that WIA performance measures do not reflect their efforts to engage employers, and many boards used their own measures to assess their services to employers. Labor has taken various steps to support local collaborations, such as conducting webinars and issuing guidance on pertinent topics, and contributing to a new federal grant program to facilitate innovative regional collaborations. Yet, while many boards cited leveraging resources as a key to facilitating collaboration, Labor has not compiled pertinent information on effective practices for leveraging resources and made it easy to access. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Employer-sponsored pensions fall into two major categories: defined benefit (DB) and defined contribution (DC) plans. In DB, or traditional, plans, benefits are typically set by formula, with workers receiving benefits upon retirement based on the number of years worked for a firm and earnings in years prior to retirement. In DC plans, workers accumulate savings through contributions to an individual account. These accounts are tax-advantaged in that contributions are typically excluded from current income, and earnings on balances grow tax-deferred until they are withdrawn. An employer may also make contributions, either by matching employees’ contributions up to plan or legal limits, or on a non-contingent basis. Like DB plans, DC plans operate in a voluntary system with tax incentives for employers to offer a plan and for employees to participate. Contributions to and earnings on DC plan accounts are not taxed until the participant withdraws the money, although participants making withdrawals prior to age 59 ½ may incur an additional 10 percent tax. In 2006, the pension tax expenditure for DC plans amounted to $54 billion. In addition, a nonrefundable tax credit to qualifying low- and middle-income workers who make contributions, the saver’s credit, accounted for less than 2 percent of the 2006 tax expenditure on account-based retirement plans. DC plans offer workers more control over their retirement asset management, but also shift some of the responsibility and certain risks onto workers. Workers generally must elect to participate in a plan and make regular contributions into their plans over their careers. Participants typically choose how to invest plan assets from a range of options provided under their plan, and accordingly face investment risk. 
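The tax treatment described above can be sketched with a small worked example. This is our own illustration, not part of the report: the 22 percent marginal income tax rate and the dollar amounts are hypothetical assumptions, while the additional 10 percent tax on withdrawals before age 59 ½ follows the rule the text describes.

```python
# Illustrative sketch (assumptions, not from the report): cash remaining
# after an early DC plan withdrawal, given a hypothetical 22% marginal
# income tax rate plus the 10% additional tax that generally applies to
# withdrawals taken before age 59 1/2.

def after_tax_withdrawal(balance, marginal_rate=0.22, penalty_rate=0.10, age=45):
    """Return the cash left after ordinary income tax and, if the
    participant is under 59.5, the additional 10 percent tax."""
    taxes = balance * marginal_rate
    penalty = balance * penalty_rate if age < 59.5 else 0.0
    return balance - taxes - penalty

# A $10,000 withdrawal at 45 loses both ordinary tax and the penalty;
# the same withdrawal at 62 loses only the ordinary tax.
print(after_tax_withdrawal(10_000))
print(after_tax_withdrawal(10_000, age=62))
```

The gap between the two printed amounts is exactly the 10 percent additional tax, which is one reason the report treats pre-retirement "leakage" as a drain on retirement savings.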
Savings in DC plans are portable in the sense that a participant may keep plan balances in a tax-protected account upon leaving a job, either by rolling over plan balances into a new plan or an IRA, or in some cases leaving money in an old plan. Workers may have access to plan savings prior to retirement, either through loans or withdrawals; participants may find such features desirable, but pre-retirement access may also lead to lower retirement savings (sometimes referred to as leakage) and possible tax penalties. Workers who receive DC distributions in lump-sum form must manage account withdrawals such that their savings last throughout retirement. In contrast, a formula, often based on preretirement average pay and years of service, determines DB plan benefits, and workers are usually automatically enrolled in a plan. The employer has the responsibility to ensure that the plan has sufficient funding to pay promised benefits, although the sponsor can choose to terminate the plan. DB plans also typically offer the option to take benefits as a lifetime annuity, or periodic benefits until death. An annuity provides longevity insurance against outliving one’s savings, but may lose purchasing power if benefits do not rise with inflation. Table 1 summarizes some of the primary differences between DC and DB plans. Over the past 25 years, DC plans have become the dominant type of private sector employee pension. In 1980, private DB plans had 38 million participants, while DC plans had 20 million. As of 2004, 64.6 million participants had DC plans, while 41.7 million had DB plans. Further, over 80 percent of private sector DC participants in 2004 were active participants (in a plan with their current employer), while about half of DB participants had separated from their sponsoring employer or retired. 
According to the Employee Benefit Research Institute (EBRI), while overall pension coverage among families remained around 40 percent between 1992 and 2001, 38 percent of families with a pension relied exclusively on a DC plan for retirement coverage in 1992, while 62 percent had a DB plan. In 2001, 58 percent of pension-participating families had only a DC plan, while 42 percent had a DB plan. Assets in all DB plans exceeded total DC assets as recently as 1995. As of 2006, DC plans had almost $3.3 trillion in assets while DB plans had almost $2.3 trillion. In addition, assets in IRAs, accounts that are also tax protected and include assets from rolled-over balances from employer-sponsored plans, measured over $4.2 trillion in 2006. There are several different categories of DC plans. Most of these plans are types of cash or deferred arrangements (CODA), in which employees can direct pre-tax dollars, along with any employer contributions, into an account, with assets growing tax deferred until withdrawal. The 401(k) plan is the most common, covering over 85 percent of active DC participants. Certain types of tax-exempt employers may offer plans, such as 403(b) or 457 plans, which have many features similar to 401(k) plans. Many employers match employee contributions, generally based on a specified percentage of the employee’s salary and the rate at which the participant contributes. Small business owners may offer employees a Savings Incentive Match Plan for Employees of Small Employers (SIMPLE) or a Simplified Employee Pension Plan (SEP), two types of DC plans that have reduced regulatory requirements for sponsors. Other types of DC plans keep the basic individual account structure of the 401(k), but with different requirements and employer practices. Some are designed primarily for employer contributions. 
These include money purchase plans, which specify fixed annual employer contributions; profit sharing plans, in which the employer decides annual contributions, perhaps based on profits, into the plan, and allocations of these to each participant; and employee stock ownership plans (ESOPs), in which contributions are primarily invested in company stock. Building up retirement savings in DC plans rests on factors that are, to some degree, outside of the control of the individual worker, as well as behaviors an individual does control (see fig. 1). Factors outside the individual’s direct control include the following: Plan sponsorship—the employer’s decision to sponsor a plan, as well as participation eligibility rules. Employer contributions—whether the sponsor makes matching or noncontingent contributions. Investment options—the plan sponsor’s decisions about investment options to offer to participants under the plan. Market returns on plan assets—market performance of plan assets. Key individual decisions and behaviors that may affect retirement savings include the following: Employee contributions—deposits into the plan account, typically out of current wages. Investment decisions—how to invest plan assets given investment options offered under the plan. Pre-retirement withdrawals—taking money out of plan balances, which usually incurs a tax penalty. Similarly, taking out a loan from a plan, if allowed, may reduce future balances if the loan is not repaid in full and treated as a withdrawal, or by lowering investment returns. Rollover—upon separation from a job, a participant may transfer the plan account balance to an IRA, which maintains most of the same tax preferences on the balances, move it to a new tax-qualified plan, or leave the money in the old plan. Alternatively, any cash withdrawal would likely be subject to income tax and penalties. Age at retirement—the decision as to when to retire determines how many years the worker has to accumulate plan balances and how long the money has to last in retirement. 
There is little consensus about how much constitutes “enough” savings to have going into retirement. We may define retirement income adequacy relative to a standard of minimum needs, such as the poverty rate, or to the consumption spending that households experienced during working years. Some economists and financial advisors consider retirement income adequate if the ratio of retirement income to pre-retirement income—or replacement rate—is between 65 and 85 percent. Retirees may not need 100 percent of pre-retirement income to maintain living standards for several reasons. Retirees will no longer need to save for retirement, retirees’ payroll and income tax liability will likely fall, work expenses will no longer be required, and mortgages and children’s education and other costs may have been paid off. However, some researchers cite uncertainties about future health care costs and future Social Security benefit levels as reasons to suggest that a higher replacement rate, perhaps 100 percent or higher, would be considered adequate. To achieve adequate replacement rate levels, retirees depend on different sources of income to support themselves in retirement. Social Security benefits provide the bulk of retirement benefits for most households. As of 2004, annuitized pension benefits provided almost 20 percent of total income to households with someone age 65 or older, while Social Security benefits provided 39 percent. Social Security benefits compose over 50 percent of total income for two-thirds of households with someone age 65 or older, and at least 90 percent of income for one-third of such households. Table 2 shows estimated replacement rates from Social Security benefits for low and high earners retiring in 2007 and 2055, as well as the remaining amount of pre-retirement income necessary to achieve a 75 percent replacement rate. 
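The replacement-rate arithmetic described above can be sketched briefly. The 75 percent target is the one used in the report’s table 2; the worker’s earnings and Social Security benefit in the example below are hypothetical numbers of our own, chosen only for illustration.

```python
# Sketch of the replacement-rate calculation discussed in the text.
# The 75% target comes from the report; the example earnings and
# Social Security benefit are hypothetical.

def gap_to_target(pre_retirement_income, social_security_benefit, target_rate=0.75):
    """Return (Social Security replacement rate, additional annual income
    needed from pensions and other savings to reach the target rate)."""
    ss_rate = social_security_benefit / pre_retirement_income
    needed = max(target_rate - ss_rate, 0.0) * pre_retirement_income
    return ss_rate, needed

# Hypothetical worker: $40,000 pre-retirement income, $16,000/year benefit.
ss_rate, needed = gap_to_target(40_000, 16_000)
print(f"Social Security replaces {ss_rate:.0%}; "
      f"other sources must provide ${needed:,.0f} per year")
```

As the report notes, Social Security typically replaces a larger share of earnings for lower earners, so the residual amount that pensions and savings must cover grows with income.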
These figures give rough guidelines for how much retirement income workers might need from other sources, such as employer-sponsored pensions, as well as earnings and income from other savings or assets. It is important to keep certain economic principles in mind when evaluating the effectiveness of retirement accounts, or any pensions, in providing retirement income security. First, balances accumulated in a DC plan may not represent new saving; individuals may have saved in another type of account in the absence of a DC plan or its tax preferences. Second, evaluating worker income security should consider total compensation, not just employer contributions to DC plans. All else equal, we should generally expect more generous employer-sponsored pension benefits to lower cash wages and that the split between current wages and deferred compensation is largely a reflection of labor market conditions, tax provisions, and worker and employer preferences. Many workers do not have DC plans, and median savings levels among participants show modest balances. While it is worth noting that for workers nearing retirement age, DC plans were not considered primary pension plans for a significant portion of their working careers, participation rates and median balances in such plans are low across all ages. Only 36 percent of working individuals were actively participating in a DC plan, according to data from the 2004 SCF. Further, workers aged 55 to 64 had median balances totaling $50,000 in account-based retirement savings vehicles, including DC plans and rollover accounts. Leakage, when workers withdraw DC savings before retirement age, can also reduce balances; almost half of those taking lump-sum distributions upon leaving a job reported cashing out their balances for non-retirement purposes. Participation among lower-income workers was particularly limited, and those who did have accounts had very low balances. 
The majority of workers, in all age groups, are not participating in DC plans with their current employers. Employers do not always offer retirement plans, and when they do, plans may have eligibility restrictions initially, and some eligible workers do not choose to participate. According to our analysis of the 2004 SCF, only 62 percent of workers were offered a retirement plan by their employer, and 84 percent of those offered a retirement plan participated. Only 36 percent of working individuals participated in a DC plan with their current employer (see fig. 2). Data indicated similar participation rates for working households, as 42 percent of households had at least one member with a current DC plan. For many workers who participated in a plan, overall balances in DC plans were modest, suggesting a potentially small contribution toward retirement security for most plan participants and their households. However, since DC plans were less common before the 1980s, older workers would not have had access to these plans their whole careers. In order to approximate lifetime DC balances when discussing mean and median DC balances in this report, our analysis of the 2004 SCF aggregates the “total balances” of DC plans with a current employer, DC plans with former employers that have been left with the former employer, and any retirement plans with former employers that have been rolled over into a new plan or an IRA. Workers with a “current or former DC plan” refers to current workers with one or more of those three components. For all workers with a current or former DC plan, the median total balance was $22,800. For all households with a current or former DC plan, the median total balance was $27,940 (see fig. 3). For individuals nearing retirement age, total DC plan balances are still low. Given trends in coverage since the 1980s, older workers close to retirement age are more likely than younger ones to have accrued retirement benefits in a DB plan. 
However, older workers who will rely on DC plans for retirement income may not have time to substantially increase their total savings without extending their working careers, perhaps for several years. Among all workers aged 55 to 64 with a current or former DC plan, the median balance according to the 2004 SCF was $50,000, which would provide an income of about $4,400 a year, replacing about 9 percent of income for the average worker in this group. Among all workers aged 60 to 64 with a current or former DC plan, the median balance was $60,600 for their accounts. Markedly higher values for mean balances versus median balances in figure 3 illustrate that some individuals in every age group are successfully saving far more than the typical individual, increasing the mean savings. These are primarily individuals at the highest levels of income. Leakage, or cashing out accumulated retirement savings for non-retirement purposes, adversely affects account accumulation for some of those with accounts, particularly for lower-income workers with small account balances. Participants who withdraw money from a DC plan before age 59 ½ generally pay ordinary income taxes on the distributions, plus an additional 10 percent tax in most circumstances. Participants may roll their DC plan balances into another tax-preferred account when they leave a job, and employers are required, in the absence of participant direction, to automatically roll DC account distributions greater than $1,000 but not greater than $5,000 into an IRA, or to leave the money in the plan. As of 2004, 21 percent of households in which the head of household was under 59 had ever received lump-sum distributions from previous jobs’ retirement plans. Among these households that received lump-sum distributions, 47 percent had cashed out all the funds, 4 percent cashed out some of the funds, and 50 percent preserved all the funds by rolling them over into another retirement account. 
Workers were more likely to roll over funds when the balances were greater. Among households that had cashed out all retirement plans with former employers, the median total value of those funds was $6,800. For households that had rolled over all retirement plans with former employers, the median total value of rolled-over funds was $24,200. Some evidence suggests that pre-retirement withdrawals may be decreasing. One study finds that those receiving lump-sum distributions are more likely to preserve funds in tax-qualified accounts than they were in the past. For example, data show that in 1993, 19 percent of lump-sum distribution recipients preserved all of their savings by rolling them into a tax-qualified account, compared to 43 percent in 2003. Further, 23 percent used all of their distribution for consumption in 1993, declining to 15 percent in 2003 (see fig. 4). According to the same study, age and size of the distribution are major determinants of whether or not the distribution is preserved in a tax-qualified account. For example, the authors found 55.5 percent of recipients aged 51 to 60 rolled their entire distribution into a tax-qualified account compared with 32.7 percent of recipients 21 to 30. Additionally, 19.9 percent of distributions from $1 to $499 were rolled over into tax-qualified accounts, as opposed to 68.1 percent of distributions of $50,000 or more. Additionally, some participants take loans from their DC plan, which may reduce plan savings. One survey found that in 2005, 85.2 percent of employers surveyed offered a loan option. Most eligible participants do not take loans, and one analysis finds that at year-end 2006, loans amounted to 12 percent of account balances for those who had loans. Individuals may prefer to take out pension loans in lieu of other lines of credit because pension loans require no approval and have low or no transaction costs. Borrowers also pay the loan principal and interest back to their own accounts. 
However, someone borrowing from a DC plan may still lose money if the interest on the loan paid back to the account is less than the account balance would have earned if the loan had not been taken. Further, loans not paid back in time, or not paid back before the employee leaves the job, may be subject to early withdrawal penalties. No data have been reported on the rate of loan defaults, but it is expected to be much lower where repayments are made by payroll withholding. However, a loan feature may also have a positive effect on participation, as some workers may choose to participate who otherwise might not, precisely because they can borrow from their accounts for non-retirement purposes at relatively low interest rates. Among workers in the lowest income quartile, only 8 percent participated in a current DC plan, a result of markedly lower access as well as lower participation than the average worker (see fig. 5). Only 25 percent of workers in the lowest income quartile were offered any type of retirement plan by their employer, and among those offered a retirement plan, 60 percent elected to participate, compared with 84 percent among workers of all income levels. Workers in the lower half of the income distribution with either current or former DC plans had total median balances of $9,420. Older workers who were less wealthy also had limited retirement savings. Workers with a current or former DC plan, aged 50-59 and at or below the median level of wealth, had median total savings of only $13,800. Workers with a current or former DC plan, aged 60-64 and at or below the median level of wealth, had median total savings of $18,000, a level that could provide at best only a limited supplement to retirement income. If converted into a single life annuity at age 65, this balance would provide only $132 per month—about $1,600 per year. 
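The balance-to-annuity conversions quoted above (an $18,000 balance yielding about $132 per month, and a $50,000 balance yielding about $4,400 per year) can be roughly reproduced with a standard amortization formula. The 20-year horizon and 6 percent annual interest rate below are our own assumptions, not the report’s actuarial basis, so the sketch only approximately tracks the report’s figures.

```python
# Hedged sketch of converting a DC balance into a level monthly payment.
# We approximate a single life annuity by amortizing the balance over an
# assumed 20-year horizon at an assumed 6% annual rate; both assumptions
# are ours, chosen so the result lands near the report's quoted figures.

def monthly_annuity(balance, annual_rate=0.06, years=20):
    """Level monthly payment that exhausts `balance` over `years`
    at `annual_rate`, compounded monthly (standard amortization)."""
    r = annual_rate / 12
    n = years * 12
    return balance * r / (1 - (1 + r) ** -n)

# An $18,000 balance yields roughly $130/month under these assumptions,
# close to the report's $132; $50,000 yields roughly $4,300/year, close
# to the report's $4,400.
print(round(monthly_annuity(18_000), 2))
print(round(monthly_annuity(50_000) * 12))
```

A real single life annuity prices in survival probabilities rather than a fixed horizon, which is why the report’s figures differ slightly from this fixed-term approximation.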
Notably, workers with low DC balances were actually less likely to have a DB pension to fall back on than workers with higher DC balances. Among all workers participating in current or former DC plans, only 17 percent of those in the bottom quartile for total plan savings also were covered by a current DB plan. In contrast, 32 percent of those in the top quartile for total DC savings also had DB coverage. Among all workers with a current or former DC plan, the plan balances for those with DB coverage were higher than for those without DB coverage. The median DC balance for workers with a DB account was $31,560, while the median DC balance for someone without a DB account was $20,820. Simulations of projected retirement savings in DC plans suggest that a large percentage of workers may accumulate enough over their careers to replace only a small fraction of their working income, although results vary widely by income levels and depend on model assumptions. Projected savings allow us to analyze how much workers might save over a full working career under a variety of conditions in a way that analyzing current plan balances cannot, since DC plans have become primary employer-sponsored plans only relatively recently. Baseline simulations of projected retirement savings for a hypothetical 1990 birth cohort indicate that DC plan savings would on average replace about 22 percent of annualized career earnings, but provide no savings to almost 37 percent of the working population, perhaps because of different factors — working for employers who do not offer a plan, choosing not to participate, or withdrawing any accumulated plan savings prior to retirement. Further, projected DC account balances vary widely by income quartile, with workers in the lowest-income quartile saving enough for about a 10 percent replacement rate, while those in the highest quartile saving enough for a 34 percent replacement rate, on average. 
Changing certain assumptions about plan features, individual behavior, or markets, such as increasing participation or account rollover rates, raised projected average savings and increased the number of workers who had some DC plan savings at retirement, especially for low-income workers. Other scenarios, such as assuming higher contribution limits or delaying retirement, raised average replacement rates, but with more of the positive impact on higher-income workers and having little effect on reducing the number of workers with no savings at retirement. Our projections, based on a sample of workers born in 1990, show that workers would save enough in their DC plans over their careers to produce, when converted to a lifetime annuity at the time of retirement, an average of $18,784 per year in 2007 dollars (see table 3). The projections assume that all workers fully annuitize all accumulated DC plan balances at retirement, which occurs sometime between age 62 and 70. Participants are assumed to always invest all plan assets in life cycle funds, and stocks earn an average real annual return of 6.4 percent. This $18,784 annuity would replace, on average, 22.2 percent of annualized career earnings for workers in the cohort. Savings and replacement rates vary widely across income groups. Almost 37 percent of workers in this cohort have no projected DC plan savings at retirement, which brings down overall average replacement rates. Workers in the lowest income quartile accumulate DC plan savings equivalent to an annuity of about $1,850 per year, or a 10.3 percent replacement rate, and 63 percent of this group have no plan savings by the time they retire. In contrast, highest income quartile workers save enough to receive about $50,000 per year in annuity income, enough for a 33.8 percent replacement rate. Even in this highest-income group, over 16 percent of workers have zero plan savings at retirement. 
In all cases, our replacement rates include projected savings only in DC plans. Retirees may also receive benefits from DB plans, as well as from Social Security, which typically replaces a higher percentage of earnings for lower-income workers. Projected household-level plan savings show a higher average replacement rate of 33.8 percent, with about 29 percent of households having no plan savings at retirement. When we assume that plan assets earn a lower average real annual return of 2.9 percent, average replacement rates from DC plan savings fall to about 16 percent for the sample. Under this assumption, workers in the lowest-income quartile receive an average 7.1 percent replacement rate from DC plans, while highest-income quartile workers receive an average 25 percent replacement rate. Lower rates of return affect the percentage of workers with no accumulated DC plan savings only slightly, perhaps because on the margins some participants might choose (or have their employers choose) to cash out lower balances. Table 3 also shows savings statistics for sub-samples of the cohort who have a better chance of accumulating significant DC plan savings, such as those workers who have long-term eligibility to participate in a plan or who work for many years. As expected, these groups have higher projected savings; replacement rates also show more even distribution across income groups, compared to those in the full sample. However, we still see a significant portion of the workers with no DC savings at retirement. First, we limit the sample only to those workers who are eligible to participate in a plan for at least 15 years over their careers. Average replacement rates for this group measure 33.5 percent, with rates ranging from 21.7 percent for lowest income quartile workers to 42.3 percent for the highest quartile. 
Even with such long-term eligibility for plan coverage, however, 15.6 percent of these workers, and almost one-third of lowest-income workers, have nothing saved in DC plans at the time they retire. This could result from workers choosing not to participate or from cashing out plan balances prior to retirement. We also analyze the prospects of workers with long-term attachment to the labor market, for which we use people who work full-time for at least 25 years, without regard to plan coverage or participation. Among these workers, average DC plan savings at retirement account for a 26.5 percent replacement rate. Still, almost 29 percent of these workers have no projected savings. This suggests that while DC plans have the potential to provide significant retirement income, saving may be difficult for some workers who work for many years, even among those whose employers offer a plan. Our simulations indicate that increasing participation and reducing leakage out of DC plans may have a particularly significant impact on overall savings, especially for lower-income workers. Of the changes in the model assumptions that we simulated, these had the broadest effect on savings because they not only raised average savings for the entire sample, but had a relatively strong impact on workers in the lowest income quartile and on the number of workers with no DC plan savings at retirement. While these assumptions represent stylized scenarios, they illustrate the potential effect of such changes on savings. We project DC plan savings assuming that all employees of a firm that sponsors a DC plan participate immediately, rather than having to wait for eligibility or choosing not to participate. In our baseline projections, 6 percent of workers whose employers sponsor a plan are ineligible to participate, and 33 percent of those eligible do not choose to participate; therefore, this assumption significantly raises plan participation rates among workers. 
Accordingly, average DC savings rise by almost 40 percent, raising average replacement rates to 35 percent, and the percentage of the population with no savings at retirement drops by half, down to 17.7 percent (see table 4). Assuming automatic eligibility and participation raises projected plan savings significantly for lower-wage workers, more than doubling the annuity equivalent of retirement savings for the lowest-income quartile. Workers in the highest income group also increase savings under this scenario, with plan savings rising by 30 percent. This change in projected savings suggests that automatically enrolling new employees in plans as a default could have a significant positive impact on DC balances, especially for low-income workers whose jobs offer a plan, although this stylized scenario likely describes a more extreme change in eligibility and participation than plans are likely to implement under automatic enrollment, and higher participation and savings would raise employers’ pension costs, perhaps leading to a reduction in benefits or coverage. Another stylized scenario we model assumes that all workers who have a DC plan balance always keep the money in a tax-preferred account upon leaving a job, either by keeping the money in the plan, transferring it to a new employer plan, or rolling it into an IRA, rather than cashing out any accumulated savings. Eliminating this source of leakage raises average annuity income from DC plans by almost 11 percent and average replacement rates from 22.2 percent in the baseline to 25.6 percent; it also reduces the percentage of the cohort with no DC savings at retirement by over 25 percent. 
As with the instant participation scenario, “universal rollover” raises annuity savings and reduces the number of retirees with zero plan savings by the biggest percentages among lower-income workers, suggesting that cashing out accumulated plan savings prior to retirement may be a more significant drain on retirement savings for these groups. These results indicate that policies to encourage participants to keep DC plan balances in tax-preferred retirement accounts, perhaps by making rollover of plan assets a default action in plans, may have a broad positive impact on retirement savings. Other changes we make in our projections related to plan features or individual behavior affect average replacement rates overall, but with less impact on lower-income workers’ replacement rates and on the number of workers with zero plan savings at retirement. These scenarios include assumed changes in annual contribution limits and retirement decisions (see table 5). We model projected retirement savings assuming that annual DC contribution limits for employees rise from $15,500 to $25,000, and the combined employer-employee maximum contribution level rises from $45,000 to $60,000, starting in 2007. Higher annual maximum contributions affect projected savings almost exclusively among the highest-income group, indicating that few workers earning less are likely to contribute at existing maximum levels. The highest income quartile’s replacement rate rises from 33.8 to 38.5 percent, while replacement rates hardly change in the lower income groups. Similarly, this scenario has almost no impact on the percentage of workers with DC plan savings at retirement. Finally, we model retirement savings in two scenarios in which workers delay retirement by 1 or 3 years. Encouraging workers to retire later has been suggested as a key element in improving retirement income security, by increasing earnings, allowing more time to save for retirement, and reducing the length of retirement. 
In our projections, delaying retirement not only provides more years to contribute to and earn returns on plan balances but also might raise annual retirement income because older retirees receive more annuity income for any given level of savings, holding all else equal. Working longer modestly raises retirement savings in our projections. Working one extra year raises projected annuity income by 5.8 percent, but has little effect on the percentage of people with no DC savings in our projections. Delaying retirement by 3 years raises annuity income from DC plans by 20.9 percent on average, with replacement rates rising from 22.2 percent in the baseline to 25.7 percent overall. The 3-year delay increases annuity levels somewhat evenly across income groups, with higher-income workers showing slightly higher increases. Overall, working an extra 3 years raises average replacement rates about as much as universal account rollover would, but with little reduction in the number of workers with no retirement savings. Thus, while working longer would likely raise workers’ incomes, and in most cases retirement benefits from other sources such as Social Security, our projections show that this change alone would have a modest impact on retirement income from DC plans, particularly for lower-income workers and those not already saving in DC plans in the baseline. Recent regulatory and legislative changes and proposals could have positive effects on DC plan coverage, participation, and saving. The Pension Protection Act of 2006 (PPA) facilitated the adoption of automatic plan features by plan sponsors that may increase DC participation and savings within existing plans. Proposals to expand the saver’s credit could similarly encourage greater contributions by low-wage workers who are already covered by a DC plan.
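The annuity-pricing effect described above can be sketched with a simple present-value annuity factor: a given balance buys more annual income at older retirement ages because fewer expected payments remain. The flat survival probability, real discount rate, and $100,000 balance below are illustrative assumptions, not PENSIM’s actuarial parameters.

```python
# Sketch: why a fixed balance buys more annual annuity income at older ages.
# The survival probability and discount rate are illustrative assumptions.

def annuity_factor(age, max_age=100, rate=0.029, annual_survival=0.97):
    """Present value of $1 per year paid for life starting at `age` (simplified)."""
    factor, survival, discount = 0.0, 1.0, 1.0
    for _ in range(age, max_age):
        factor += survival * discount
        survival *= annual_survival   # crude flat mortality assumption
        discount /= 1 + rate          # real discount at the bond rate

    return factor

balance = 100_000
for retire_age in (62, 65, 70):
    payment = balance / annuity_factor(retire_age)
    print(f"retire at {retire_age}: ~${payment:,.0f} per year")
```

The annuity factor shrinks as the starting age rises, so the annual payment for the same balance grows, which is the mechanism the text attributes to later retirement.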
Other options, like the so-called “State-K” proposal, in which states would design and partner with private financial institutions to offer low-cost DC plans that employers could provide to employees, would seek to expand coverage among workers without current plans by encouraging employers to sponsor new plans. Other options would try to increase retirement account coverage by increasing the use of IRAs or creating new retirement savings vehicles outside of the voluntary employer-sponsored pension framework. Such proposals include automatic IRAs, in which employers would be required to allow employees through automatic enrollment to contribute to IRAs by direct payroll deposit, or universal account proposals, in which all workers would be given a retirement account regardless of whether they had any employment-based pension coverage. Changing certain traditional DC plan defaults may have a significant impact on DC participation and savings. Research suggests that employees exhibit inertia regarding plan participation and contributions, which can reduce DC savings when workers fail to participate or to increase their contributions over time. To reverse the effects of these tendencies, some experts have suggested changing default plan actions to automatically sign up employees for participation, escalate contributions, and set default investment options unless workers opt out. Some studies have shown that automatic enrollment may increase DC plan participation. For example, in one study of a large firm, automatic enrollment increased participation from 57 percent for employees eligible to participate 1 year before the firm adopted automatic enrollment to 86 percent for those hired under automatic enrollment. Another study finds that, prior to automatic enrollment, 26 to 43 percent of employees at 6 months’ tenure participated in the plan at three different companies; under automatic enrollment, 86 to 96 percent of employees participated.
Some also advocate automatically rolling over DC savings into an IRA when employees separate from their employers to further increase retirement savings. Our own simulations show that universal account rollover to a tax-preferred account, such as a new plan or an IRA, would increase projected retirement savings by 11 percent on average, with the biggest percentage increases for the lowest-income workers. Various regulatory and legislative changes have focused on default DC plan features. In 1998, the IRS first approved plan sponsor use of automatic enrollment—the ability for plans to automatically sign employees up for a 401(k) plan (from which the employee can opt out)—and subsequently issued several rulings that clarified the use of other automatic plan features and the permissibility of automatic features in 403(b) and 457 plans. Accordingly, the percentage of 401(k) plans using automatic plan features has increased in recent years. One annual study of plan sponsors found that in 2004, 12.4 percent of 401(k) plans were automatically enrolling participants, and this number increased to 17.5 percent of plans in 2005. The percentage of plans automatically increasing employee contributions also rose from 6.8 percent in 2004 to 13.6 percent in 2005. Some experts have argued that initially, some plan sponsors may have been hesitant to use automatic plan features because of legal ambiguities between state and federal law. However, clarifications relating to automatic enrollment and default investment in the PPA have led some plan sponsors and experts to expect more plans to adopt automatic plan features. Automatic DC plan features, however, may create complications for sponsors and participants that may limit any effect on savings and participation. Automatic enrollment may not help expand plan sponsorship; in fact, sponsors who offer a matching contribution may not want to offer automatic enrollment if they believe this will raise their pension costs.
Also, if sponsors automatically invest contributions in a low-risk fund such as a money market fund, this could limit rates of return on balances. However, choosing a risky investment fund could subject automatic contributions to market losses. Some employees may not realize they have been signed up for a plan, and may be displeased to discover this, particularly if their automatically invested contributions have lost money. Other proposals would target plan formation or increase participation and retirement savings by expanding worker access to other account-based retirement savings vehicles like IRAs. Some of these alternative retirement savings proposals are voluntary in design, while others are more universal. Such proposals could expand employee access to account-based retirement plans. However, it is unclear to what extent employers would adopt such plans. The Automatic IRA: The Automatic IRA proposal would make direct deposit or payroll deduction saving into an IRA available to all employees by requiring employers that do not sponsor any retirement plan to offer withholding to facilitate employee contributions. To maximize participation, employees would be automatically enrolled at 3 percent of pay, or could elect to opt out or to defer a different percentage of pay to an IRA, up to the maximum IRA annual contribution limit ($4,000 for 2007; $5,000 for 2008). Employers would not be required to choose investments or set up the IRAs, which would be provided mainly by the private-sector IRA trustees and custodians that currently provide them. Employers also would not be required or permitted to make matching contributions, and would not need to comply with the Employee Retirement Income Security Act of 1974 (ERISA) or any qualified plan standards such as nondiscrimination requirements. Employers, however, would be required to provide notice to employees, including information on the maximum amount that can be contributed to the plan on an annual basis.
One congressional proposal would require employers, other than small or new ones, to offer payroll deposit IRA arrangements to employees not eligible for pension plans and permit automatic enrollment in such IRAs in many circumstances. Participating IRAs would be required to offer a default investment consisting of life cycle funds similar to those offered by the Thrift Savings Plan, the DC plan for federal workers, or other investments specified by a new entity established for that purpose. Universal accounts: Similar to the automatic IRA, universal account (UA) proposals aim to establish retirement savings accounts for all workers, and vary slightly based on employment-based pension access. Additionally, some proposals would have employers contribute to the account, whereas other proposals would also have the federal government match contributions. One proposal suggests a 2 percent annual contribution from the federal government regardless of individual contributions, while another would provide for individual contributions only, capped at $7,500 per year. In 1999, the Clinton Administration proposed a UA to be established for each worker and spouse with earnings of at least $5,000 annually. Individuals would receive a tax credit of up to $300 annually. Additionally, workers could voluntarily contribute to the account up to specified amounts with a 50 to 100 percent match by the federal government. This match would come in the form of a tax credit, and total voluntary contributions, including government contributions, would be limited to $1,000. Both the credit and the match would phase out as income increases, providing a progressive benefit and targeting low- and middle-income workers. Federal contributions would have revenue implications, while requiring employer contributions could increase employer compensation costs. 
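The benefit formula of the 1999 universal account proposal can be sketched as follows. The $300 maximum credit, the 50 percent match rate, and the $1,000 combined limit on voluntary and government contributions come from the description above; the income phase-out thresholds are hypothetical assumptions, since the proposal’s exact schedule is not given here.

```python
# Sketch of the 1999 UA proposal's benefit formula. The phase-out
# thresholds below are illustrative assumptions; the 50 percent match,
# $300 maximum credit, and $1,000 combined cap come from the proposal
# as described in the text.

def ua_benefit(voluntary_contribution, income,
               match_rate=0.50, phase_out_start=40_000, phase_out_end=60_000):
    # Both the automatic credit and the match phase out as income rises.
    if income >= phase_out_end:
        share = 0.0
    elif income <= phase_out_start:
        share = 1.0
    else:
        share = (phase_out_end - income) / (phase_out_end - phase_out_start)

    credit = 300 * share  # automatic tax credit of up to $300

    # Government match on voluntary contributions; voluntary contributions
    # plus the government match are limited to $1,000 in total.
    match = match_rate * voluntary_contribution * share
    match = min(match, max(0, 1000 - voluntary_contribution))
    return credit + match

print(ua_benefit(500, income=30_000))  # low earner: full credit plus match
print(ua_benefit(500, income=70_000))  # above phase-out: no benefit
```

The formula shows the progressive design the text describes: the same $500 contribution draws a government benefit for the low earner and nothing above the phase-out range.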
Other proposals would expand the size and scope of the saver’s credit to encourage greater contributions by those low-wage workers who are already covered by a DC plan that allows employee contributions. The saver’s credit was originally proposed in 2000, as an outgrowth of the 1999 UA proposal, in the form of a government matching deposit on some voluntary contributions to IRAs and 401(k) plans. Currently, it provides a nonrefundable tax credit to low- and middle-income savers of up to 50 percent of their annual IRA or 401(k) contributions, on contributions of up to $2,000. However, according to one analysis, because the credit is nonrefundable, only about 17 percent of those with incomes low enough to qualify for the credit would receive any benefit if they contributed to a plan. Some analysts think that expanding the saver’s credit, or creating direct transfers such as tax rebates or deposits into retirement savings accounts, could increase plan contributions specifically for low- and middle-income workers. Making the saver’s credit refundable to the participant could also provide a direct transfer to the tax filer in lieu of a retirement account match, but offers no assurance that funds would be saved or deposited into a retirement account. A refundable tax credit would also have revenue implications for the federal budget. The DC plan has clearly overtaken the DB plan as the principal retirement plan for the nation’s private sector workforce, and its growing dominance suggests its increasingly crucial role in the retirement security of current and future generations of workers. The current DC-based system faces major challenges, like its DB-based predecessor, in terms of coverage, participation, and lifetime distributions. Achieving retirement security through DC plans carries particular challenges for workers, since accumulating benefits in an account-based plan requires more active commitment and management from individuals than it does for DB participants.
Since workers must typically sign up and voluntarily reduce their take-home pay to contribute to their DC plans, invest this money wisely over their working years, and resist withdrawing from balances prior to retirement, it is perhaps to be expected that even those who have the opportunity to participate save little. Our results on both current and projected plan balances suggest that while some workers save significant amounts toward their retirement in DC plans, a large proportion of workers will likely not save enough in DC plans for a secure retirement. Of particular concern are the retirement income challenges faced by lower earners. Many of these workers face competing income demands for basic necessities that may make contributions to their retirement plans difficult. Further, the tax preferences that may entice higher-income workers to contribute to their DC plans may not entice low-income workers who have plan coverage, since these workers face relatively low marginal tax rates. Our model results suggest that other measures, such as automatic enrollment and rollover of funds, may make a difference for some lower-income workers. Should pension policy, as embodied by the automatic provisions in PPA, continue to move in this direction, it should focus on those workers most in need of enhanced retirement income prospects. We provided a draft of this report to the Department of Labor and the Department of the Treasury, as well as to five outside reviewers. Neither agency provided formal comments. We incorporated any technical comments we received throughout the report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Labor, the Secretary of the Treasury, appropriate congressional committees, and other interested parties.
We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To analyze saving in DC plans, we examined data from the Federal Reserve Board’s Survey of Consumer Finances (SCF). This triennial survey asks extensive questions about household income and wealth components. We used the latest available survey, from 2004. The SCF is widely used by the research community, is continually vetted by the Federal Reserve and users, and is considered to be a reliable data source. The SCF is believed by many to be the best source of publicly available information on household finances. Further information about our use of the SCF, including sampling errors, as well as definitions and assumptions we made in our analysis are detailed below. We also reviewed published statistics in articles by public policy groups and in academic studies. To analyze how much Americans can expect to save in DC plans over their careers and the factors that affect these savings, we used the Policy Simulation Group’s (PSG) microsimulation models to run various simulations of workers saving over a working career, changing various inputs to model different scenarios that affect savings at retirement. PENSIM is a pension policy simulation model that has been developed for the Department of Labor to analyze lifetime coverage and adequacy issues related to employer-sponsored pensions in the United States. We projected account balances at retirement for PENSIM-generated workers under different scenarios representing different pension features, individual behavioral decisions, and market assumptions. 
See below for further discussion of PENSIM and our assumptions and methodologies. To analyze those plan- or government-level policies that might best increase participation and savings in DC plans, we synthesized information gathered from interviews of plan practitioners, financial managers, and public policy experts, as well as from academic and policy studies on DC plan participation and savings. We also researched current government initiatives and policy proposals to broaden participation in account-based pension plans and increase retirement savings. We conducted our work from July 2006 to October 2007 in accordance with generally accepted government auditing standards. The 2004 SCF surveyed 4,522 households about their pensions, incomes, labor force participation, asset holdings and debts, use of financial services, and demographic information. The SCF is conducted using a dual-frame sample design. One part of the design is a standard, multi-stage area-probability design, while the second part is a special oversample of relatively wealthy households. This is done in order to accurately capture financial information about the population at large as well as characteristics specific to the relatively wealthy. The two parts of the sample are adjusted for sample nonresponse and combined using weights to provide a representation of households overall. In addition, the SCF excludes people included in the Forbes Magazine list of the 400 wealthiest people in the United States. Furthermore, the 2004 SCF dropped three observations from the public data set that had net worth at least equal to the minimum level needed to qualify for the Forbes list. The SCF is a probability sample based on random selections, so the 2004 SCF sample is only one of a large number of samples that might have been drawn. 
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates based on GAO analysis of 2004 SCF data used in this report have 95 percent confidence intervals that are within plus-or-minus 4 percentage points, with the exceptions described in table 6 below. Other numerical estimates based on GAO analysis of 2004 SCF data used in this report have 95 percent confidence intervals that are within 25 percent of the estimate itself, with exceptions described in table 7. Because of the complexity of the SCF design and the need to suppress some detailed sample design information to maintain confidentiality of respondents, standard procedures for estimating sampling errors could not be used. Further, the SCF uses multiple imputations to estimate responses to most survey questions to which respondents did not provide answers. Sampling error estimates for this report are based on a bootstrap technique using replicate weights to produce estimates of sampling error that account for variability due both to sampling and to imputation. The SCF collects detailed information about an economically dominant single individual or couple in a household (what the SCF calls a primary economic unit), where the individuals are at least 18 years old. We created an additional sample containing information on 7,471 individuals by separating information about respondents and their spouses or partners and considering them separately. When we discuss individuals in this document, we are referring to this sample.
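A replicate-weight bootstrap of the kind described above can be sketched in miniature: each set of replicate weights yields one estimate, and the spread of those estimates measures sampling error. The synthetic data, the weight perturbation scheme, and the number of replicates below are illustrative assumptions, far simpler than the actual SCF procedure.

```python
# Minimal sketch of a replicate-weight bootstrap standard error, in the
# spirit of (but much simpler than) the SCF procedure described above.
# The data and weights are synthetic illustrations.
import random

random.seed(1)
values = [random.gauss(50, 10) for _ in range(200)]  # synthetic survey responses
base_weights = [1.0] * 200                           # main survey weights

def weighted_mean(vals, wts):
    return sum(v * w for v, w in zip(vals, wts)) / sum(wts)

# Each replicate reweights the sample (here, a crude Poisson-style perturbation).
replicate_means = []
for _ in range(500):
    rep_wts = [w * random.choice([0, 1, 1, 2]) for w in base_weights]
    replicate_means.append(weighted_mean(values, rep_wts))

# The standard deviation across replicate estimates is the bootstrap SE.
m = sum(replicate_means) / len(replicate_means)
se = (sum((x - m) ** 2 for x in replicate_means) / (len(replicate_means) - 1)) ** 0.5
print(f"estimate: {weighted_mean(values, base_weights):.2f}, bootstrap SE: {se:.2f}")
```

In the real SCF application, the replicate weights also reflect the multiple-imputation step, which is why the resulting errors capture both sampling and imputation variability.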
When we refer to all workers, we are referring to the subpopulation of workers within this individual sample. In households where there are additional adult workers, beyond the respondent and the spouse or partner, who may also have earnings and a retirement plan, information about these additional workers is not captured by the SCF and is therefore not part of our analysis. It is also important to note that the SCF was designed to be used as a household survey, and some information could not be broken into individual-level information. Where that was the case, we presented only household-level information. We defined “worker” relatively broadly, beginning with the set of all those who reported that they were working, including those who reported working in combination with some other activity, for example, “worker plus disabled” and “worker plus retired.” We then excluded those workers who reported that they were self-employed from our analysis. Our definition of DC plans includes the following plans: 401(k); 403(b); 457; thrift/savings plan; profit-sharing plan; portable cash option plan; deferred compensation plan, n.e.c.; SEP/SIMPLE; money purchase plan; stock purchase plan; and employee stock ownership plan (ESOP). The SCF and other surveys that are based on self-reported data are subject to several other sources of nonsampling error, including the inability to get information about all sample cases; difficulties of definition; differences in the interpretation of questions; respondents’ inability or unwillingness to provide correct information; and errors made in collecting, recording, coding, and processing data. These nonsampling errors can influence the accuracy of information presented in the report, although the magnitude of their effect is not known. Our analysis of the 2004 SCF yielded slightly lower participation rates than other data sets that consider pensions.
For example, 2004 Bureau of Labor Statistics (BLS) data indicate a somewhat higher rate of active participation in DC accounts, 42 percent, compared with our finding of 36 percent. One possible factor contributing to this difference is that BLS surveys establishments about their employees, while the SCF surveys individuals who report on themselves and their households; it is possible that SCF respondents fail to report all retirement accounts, while BLS captures a greater proportion of them. Also, the SCF considered both public and private sector workers, while the BLS statistic is only for private sector workers. Differences may also be explained by different definitions of workers and participation, question wording, or lines of questioning. The SCF appears to provide a lower bound on the estimation of pension coverage among four major data sets. To project lifetime savings in DC pensions and related retirement plans with personal accounts, and to identify the effects of changes in policies, market assumptions, or individual behavior, we used the Policy Simulation Group’s (PSG) Pension Simulator (PENSIM) microsimulation models. PENSIM is a dynamic microsimulation model that produces life histories for a sample of individuals born in the same year. The life history for a sample individual includes different life events, such as birth, schooling events, marriage and divorce, childbirth, immigration and emigration, disability onset and recovery, and death. In addition, a simulated life history includes a complete employment record for each individual, including each job’s starting date, job characteristics, pension coverage and plan characteristics, and ending date. The model has been developed by PSG since 1997 with funding and input from the Office of Policy and Research at the Employee Benefits Security Administration (EBSA) of the U.S. Department of Labor, consistent with the recommendations of the National Research Council panel on retirement income modeling.
PENSIM sets the timing for each life event by using data from various longitudinal data sets to estimate a waiting-time model (often called a hazard function model) using standard survival analysis methods. PENSIM incorporates many such estimated waiting-time models into a single dynamic simulation model. This model can be used to simulate a synthetic sample of complete life histories. PENSIM employs continuous-time, discrete-event simulation techniques, such that life events do not have to occur at discrete intervals, such as annually on a person’s birthday. PENSIM also uses simulated data generated by another PSG simulation model, SSASIM, which produces simulated macro-demographic and macroeconomic variables. PENSIM imputes pension characteristics using a model estimated with 1996 to 1998 establishment data from the BLS Employee Benefits Survey (now known as the National Compensation Survey, or NCS). Pension offerings are calibrated to historical trends in pension offerings from 1975 to 2005, including plan mix, types of plans, and employer matching. Further, PENSIM incorporates data from the 1996-1998 Employee Benefits Survey (EBS) to impute access to and participation rates in DC plans in which the employer makes no contribution, which BLS does not report as pension plans in the NCS. The inclusion of these “zero-matching” plans enhances PENSIM’s ability to accurately reflect the universe of pension plans offered by employers. PENSIM assumes that 2005 pension offerings, including the imputed zero-matching plans, are projected forward in time. PSG has conducted validation checks of PENSIM’s simulated life histories against both historical life history statistics and other projections.
Different life history statistics have been validated against data from the Survey of Income and Program Participation (SIPP), the Current Population Survey (CPS), Modeling Income in the Near Term (MINT3), the Panel Study of Income Dynamics (PSID), and the Social Security Administration’s Trustees Report. PSG reports that PENSIM life histories have produced similar annual population, taxable earnings, and disability benefits for the years 2000 to 2080 as those produced by the Congressional Budget Office’s long-term social security model (CBOLT) and as shown in the Social Security Administration’s 2004 Trustees Report. According to PSG, PENSIM generates simulated DC plan participation rates and account balances that are similar to those observed in a variety of data sets. For example, measures of central tendency in the simulated distribution of DC account balances among employed individuals are similar to those produced by an analysis of the Employee Benefit Research Institute (EBRI)-Investment Company Institute (ICI) 401(k) database and of the 2004 SCF. GAO performed no independent validation checks of PENSIM’s life histories or pension characteristics. In 2006, EBSA submitted PENSIM to a peer review by three economists. The economists’ overall reviews ranged from highly favorable to highly critical. While the economist who gave PENSIM a favorable review expressed a “high degree of confidence” in the model, the one who criticized it focused on PENSIM’s reduced-form modeling. This means that the model is grounded in previously observed statistical relationships among individuals’ characteristics, circumstances, and behaviors, rather than on any underlying theory of the determinants of behaviors, such as the common economic theory that individuals make rational choices as their preferences dictate and thereby maximize their own welfare. The third reviewer raised questions about specific modeling assumptions and possible overlooked indirect effects.
PENSIM allows the user to alter one or more inputs to represent changes in government policy, market assumptions, or personal behavioral choices and analyze the subsequent impact on pension benefits. Starting with a 2 percent sample of a 1990 birth cohort, totaling 104,435 people at birth, our baseline simulation includes the following key assumptions and features. For our report, we focus exclusively on accumulated balances in DC plans and ignore any benefits an individual might receive from DB plans or from Social Security. Our reported benefits and replacement rates therefore capture just one source of potential income available to a retiree. Workers accumulate DC pension benefits from past jobs in one rollover account, which continues to receive investment returns, along with any benefits from a current job. At retirement, these are combined into one account. Because we focus on DC plan balances only, we assume all workers are ineligible to participate in DB plans and do not track Social Security benefits. Plan participants invest all assets in their account in life cycle funds, which adjust the mix of assets between stocks and government bonds as the individual ages. Stocks return an annual nonstochastic real rate of return of 6.4 percent and government bonds have a real return of 2.9 percent per year. In one simulation, we use the government bond rate on all plan assets. These rates of return reflect assumptions used by the Social Security Administration’s Office of the Chief Actuary (OCACT) in some of its analyses of trust fund investment. Workers purchase a single, nominal life annuity, typically at retirement, which occurs between the ages of 62 and 70. Anyone who becomes permanently disabled at age 45 or older also purchases an immediate annuity at their disability age. We eliminate from the sample cohort members who: 1) die before they retire, at whatever age; 2) die prior to age 55; 3) immigrate into the cohort at an age older than 25; or 4) become permanently disabled prior to age 45.
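A life cycle accumulation under the baseline return assumptions (6.4 percent real for stocks, 2.9 percent real for government bonds) can be sketched as follows. The linear glide path from 90 percent stocks at age 25 to 30 percent at age 65, and the flat $3,000 real annual contribution, are illustrative assumptions rather than PENSIM’s actual allocation schedule or contribution behavior.

```python
# Sketch of life cycle fund accumulation using the report's nonstochastic
# real returns. The glide path and contribution level are illustrative
# assumptions; PENSIM's actual schedules may differ.

STOCK_RETURN, BOND_RETURN = 0.064, 0.029

def stock_share(age, start=25, end=65, hi=0.90, lo=0.30):
    """Linearly declining stock allocation between `start` and `end` ages."""
    if age <= start:
        return hi
    if age >= end:
        return lo
    return hi - (hi - lo) * (age - start) / (end - start)

balance = 0.0
for age in range(25, 65):
    s = stock_share(age)
    blended = s * STOCK_RETURN + (1 - s) * BOND_RETURN  # portfolio real return
    balance = balance * (1 + blended) + 3000            # $3,000 real contribution
print(f"real balance at 65: ${balance:,.0f}")
```

The blended return falls from roughly 6 percent to roughly 4 percent over the career as the fund shifts toward bonds, which is the de-risking behavior the text attributes to life cycle funds.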
We assume that the annuity provider charges an administrative load on the annuity such that in all scenarios the provider’s revenues balance the annuity costs (i.e., zero profit). Replacement rates equal the annuity value of DC plan balances divided by a “steady earnings” index. This index reflects career earnings, calibrated to the Social Security Administration’s age-65 average wage index (AWI). PENSIM computes steady earnings by first computing the present value of lifetime wages. Then, it calculates a scaling factor that, when multiplied by the present value of lifetime earnings for a 1990 cohort member earning the AWI from ages 21 to 65, produces the individual’s present value of lifetime earnings. This scaling factor is multiplied by AWI at age 65, then adjusted to 2007 dollars. Using this measure, as opposed to average pay for an individual’s final 3 or 5 years of work, minimizes the problems presented by a worker who has irregular earnings near the end of his or her career, perhaps because of reduced hours. For household replacement rates, we use a combined annuity value of worker-spouse lifetime DC plan savings and a combined measure of steady family earnings. Starting from this baseline model, we vary key inputs and assumptions to see how these variations affect pension benefits and replacement rates at retirement. Scenarios we ran include: (1) Universal rollover of DC plan balances. All workers with a DC balance roll it over into an Individual Retirement Account or another qualified plan upon job separation, as opposed to cashing out the balance, in which case the money is assumed lost for retirement purposes. (2) Immediate eligibility and participation in a plan. A worker who would be offered a plan has no eligibility waiting period and immediately enrolls. This does not necessarily mean that the participant makes immediate or regular contributions; contribution levels are determined stochastically by PENSIM based on worker characteristics.
(3) Delayed retirement. Workers work beyond the retirement age determined by PENSIM in the baseline run. In one scenario, workers work up to one extra year; in another, they delay retirement for up to 3 years, although 70 remains the maximum retirement age. (4) Raised contribution limits. We set annual contribution limits starting in 2007 to $25,000 per individual, up from $15,500 under current law, and $60,000 for combined employer-employee contributions, up from $45,000 under current law. These limits rise with cost of living changes in subsequent years, as is the case in our baseline model. Lifetime summary statistics of the simulated 1990 cohort’s workforce and demographic variables give some insight into the model’s projected DC savings at retirement that we report (see tables 8 and 9). The 78,045 people in the sample who have some earnings, do not immigrate into the cohort after age 25, live to age 55, and retire (or become disabled at age 45 or older) work a median 29.4 years full-time and 2.1 years part-time, with median “steady” earnings of $46,122 (in 2007 dollars). Those whose earnings fall in the lowest quartile work full-time for only a median 14.1 years, while working part-time for 9.1 years and working 13.4 years at their longest-tenured job; this group’s median annual steady earnings are $16,820. In contrast, those in the highest quartile of earnings work for a median 34.8 years, including 19.5 years at their longest job, and have median steady earnings of $126,380 per year. The results also show that pension coverage varies somewhat across income groups. About 83 percent of workers in the lowest income quartile have at least one job in which they are covered by a DC plan throughout their working careers, and are eligible for DC plan coverage for a median 9.4 years. In contrast, at least 90 percent of workers in the highest three income quartiles have some DC coverage during their careers.
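The “steady earnings” index used above as the replacement-rate denominator can be sketched as follows. The flat AWI path, the worker’s earnings profile with reduced hours near the end of the career, the real discount rate, and the assumed annuity income are all invented for illustration; PENSIM’s actual computation also adjusts to 2007 dollars.

```python
# Sketch of the "steady earnings" replacement-rate denominator. All inputs
# below (earnings paths, AWI series, discount rate, annuity income) are
# illustrative assumptions, not PENSIM's actual data.

DISCOUNT = 0.029  # assumed real discount rate

def present_value(earnings_by_age, base_age=21):
    return sum(e / (1 + DISCOUNT) ** (age - base_age)
               for age, e in earnings_by_age.items())

# Hypothetical AWI path: $40,000 at every age 21-65 for simplicity.
awi = {age: 40_000 for age in range(21, 66)}

# A worker with irregular earnings, including reduced hours near the end.
worker = {age: 50_000 for age in range(25, 60)}
worker.update({age: 15_000 for age in range(60, 64)})

# Scaling factor: worker's lifetime PV relative to an AWI earner's PV.
scale = present_value(worker) / present_value(awi)
steady_earnings = scale * awi[65]

annuity_income = 12_000  # assumed annuity value of DC balances at retirement
print(f"steady earnings: ${steady_earnings:,.0f}")
print(f"replacement rate: {annuity_income / steady_earnings:.1%}")
```

Because the denominator reflects the whole career rather than the final few years, the worker’s low earnings at ages 60 to 63 barely move it, which is the robustness property the text claims for this measure.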
Those in the highest income quartile are eligible for DC participation for a median 25.2 years throughout their career. Cross-sectional results of the sample cohort also provide some insights into the model’s assumptions, as well as some further insights into the relatively low projected sample replacement rates (see table 10). These statistics describe the working characteristics for each employed individual at a randomly determined age sometime between 22 and 62 in order to provide a snapshot of a “current” job for most of the sample. Among those employed at the time of the survey, 61.8 percent had an employer who sponsored a DC plan. Of the workers offered a plan, 94 percent were eligible to participate, and 67 percent of those eligible participated. Taken together, these percentages mean that at any one time only 38.9 percent of the working population actively participated in a DC plan in our projections. Even among these participants, only 56.9 percent reported making a contribution to the plan in the previous year, while 45.7 percent had an employer contribution. Median combined employer-employee contributions in the previous year were 6.2 percent of earnings in our simulation. Other studies have projected DC plan savings for workers saving over their entire working careers. These studies generally find higher projected replacement rates from DC plan savings than our simulations do. However, each study makes different key assumptions, particularly about plan coverage, participation, and contributions. A 2007 study by Patrick Purcell and Debra B. Whitman for the Congressional Research Service (CRS) simulates DC plan replacement rates based on earnings, contributions, and the rate of return on plan balances. CRS projects savings for households that begin saving at age 25, 35, or 45.
The study estimates 2004 earnings using the March 2005 CPS as starting wages, and assumes that households experience an annual wage growth rate of 1.1 percent. Households are randomly assigned a 6 percent, 8 percent, or 10 percent retirement plan contribution rate every year from their starting age until age 65. The study assumes households allocate 65 percent of their retirement account assets to Standard & Poor’s 500 index of stocks from ages 25 to 34, 60 percent to stocks from ages 35 to 44, 55 percent to stocks from ages 45 to 54, and 50 percent to stocks from age 55 and above, with the remaining portfolio assets invested in AAA-rated corporate bonds. A Monte Carlo simulation based on historical returns on stocks and bonds determines annual rates of return. Replacement rates represent annuitized DC plan balances at age 65 divided by final 5-year average pay. After running the simulations, CRS finds variation in replacement rates depending on rate of return, years of saving, and earnings percentile. In the CRS “middle estimate,” an unmarried householder who saves for 30 years, has annual household earnings in the 50th percentile, contributes 8 percent each year until retirement, and earns returns on contributions in the 50th percentile would have a 50 percent replacement rate (see table 11). The projected replacement rate rises to 98 percent with 40 years of saving but falls to 22 percent with just 20 years. Assuming a 6 percent annual contribution instead reduces projected replacement rates by about 10 to 30 percent. For example, an unmarried householder at the 50th percentile of annual earnings and the 50th percentile of returns saving for 40 years is projected to have a replacement rate of 72 percent at a 6 percent annual contribution (see table 12). All CRS estimates, however, exceed those we report in projections in this report, in part because CRS assumes constant participation in, and contributions to, a DC plan.
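The simulation design just described — an age-banded stock/bond allocation, stochastic annual returns, and a fixed contribution rate applied to growing wages — can be sketched as follows. This is a hedged illustration: the normal return parameters below are stand-in assumptions, not the historical series CRS draws from.

```python
import random

# Hedged sketch of a CRS-style accumulation: the glide path (65/60/55/50
# percent stocks by age band) and the 1.1 percent wage growth come from the
# text; the return distributions are illustrative assumptions.

def stock_share(age):
    """Equity allocation by age band, per the study's glide path."""
    if age < 35:
        return 0.65
    if age < 45:
        return 0.60
    if age < 55:
        return 0.55
    return 0.50

def simulate_balance(start_age, start_wage, contrib_rate=0.08,
                     wage_growth=0.011, seed=0):
    """Accumulate a DC balance from start_age to 65 under random returns."""
    rng = random.Random(seed)
    balance, wage = 0.0, start_wage
    for age in range(start_age, 65):
        share = stock_share(age)
        # Assumed return draws: stocks ~ N(7%, 17%), bonds ~ N(3%, 7%).
        ret = share * rng.gauss(0.07, 0.17) + (1 - share) * rng.gauss(0.03, 0.07)
        balance = balance * (1 + ret) + contrib_rate * wage
        wage *= 1 + wage_growth
    return balance

# Repeating the run many times approximates the percentile distribution of
# outcomes that CRS reports.
balances = sorted(simulate_balance(25, 40000.0, seed=s) for s in range(1000))
median_balance = balances[len(balances) // 2]
```

Annuitizing `median_balance` and dividing by final 5-year average pay would yield the study's replacement-rate measure for the median-return household.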
In addition, CRS calculated annuity equivalents of accumulated DC balances based on current annuity prices; for younger workers retiring several decades into the future, we would expect the price of a given level of annuity income to be higher than today’s levels because of longer life expectancies. This would lower the replacement rates for any projected lump sum. A 2005 study by Sarah Holden of ICI and Jack VanDerhei of EBRI simulates, as a baseline scenario, retirement savings at age 65 for a group of workers in their 20s and 30s in the year 2000. The baseline assumes workers are continuously covered by a DC plan throughout their careers, and that workers will continuously participate. However, the authors also run the model assuming this group will have participation rates similar to current rates by allowing workers to not be covered by, participate in, or contribute to a DC plan. Their model also incorporates the possibility that a participant might cash out a DC plan balance upon leaving a job. Replacement rates are calculated by earnings quartile for participants retiring between 2035 and 2039 as the annuity value of age-65 plan balances divided by final 5-year average pay. The EBRI/ICI baseline projections, starting with a sample of plan participants, show a median replacement rate of 51 percent for the lowest earnings quartile and 67 percent for the highest (see table 13). The authors analyze the effect of other plan or behavioral assumptions. For example, replacement rates fall significantly when the projections relax the assumption of continuous ongoing eligibility for a 401(k) plan, although they remain higher than our projections, perhaps because the projections start with current participants and assume continuous employment.
When the authors include nonparticipants and assume automatic enrollment with a 6 percent employee contribution and investment of assets in a life cycle fund, replacement rates rise significantly from projections without automatic enrollment. Although they project a larger effect on replacement rates resulting from automatic enrollment than our projections show, EBRI/ICI similarly shows a greater increase in savings for lower-income workers. A forthcoming study by Poterba, Venti, and Wise uses the Survey of Income and Program Participation (SIPP) to project DC plan balances at age 65. In order to project participation, the authors assume that DC plan sponsorship will continue to grow, although more slowly than during recent decades. They calculate participation by earnings deciles within 5- year age intervals. The authors assume that 60 percent of plan contributions are allocated to large capitalization equities, and 40 percent to corporate bonds, and assume an average nominal rate of return of 12 percent for equities and 6 percent for corporate bonds. In addition, the authors run a projection assuming the rate of return on equities is 300 basis points less than the historical rate. They determine a person’s likelihood of DC plan participation based on age, cohort, and earnings, as well as the probability of cashing out an existing DC plan balance when someone leaves a job. The authors simulate earnings histories based on data from the Health and Retirement Study (HRS), and impute earnings for younger cohorts for which data are not available. They assume an annual combined employee-employer contribution rate of 10 percent for each year an individual participates, and do not account for increases in annual contributions or changes made to DC plans in the Pension Protection Act, such as a possible increase in participation by automatically enrolling employees. 
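The fixed-allocation, constant-contribution accumulation implied by the assumptions above (a 60/40 equity/bond mix at average nominal returns of 12 and 6 percent, with a 10 percent combined contribution) can be sketched deterministically. The flat earnings path in the example is an illustrative assumption; the study's stochastic participation, cash-out, and earnings modeling is omitted here.

```python
# Hedged sketch of deterministic balance accumulation under the assumptions
# described in the text. The flat $50,000 earnings path is illustrative.

def project_balance(earnings_by_year, contrib_rate=0.10,
                    equity_share=0.60, r_equity=0.12, r_bonds=0.06):
    """Grow a DC balance at the blended average return, adding contributions."""
    blended = equity_share * r_equity + (1 - equity_share) * r_bonds
    balance = 0.0
    for earnings in earnings_by_year:
        balance = balance * (1 + blended) + contrib_rate * earnings
    return balance

# The study's alternative scenario cuts the equity return by 300 basis points,
# which lowers the projected balance for any given earnings path.
baseline = project_balance([50000.0] * 40)
reduced = project_balance([50000.0] * 40, r_equity=0.12 - 0.03)
assert reduced < baseline
```

This mirrors the pattern in the study's results: lowering the assumed equity return by 300 basis points shrinks projected balances substantially, as in the drop from $272,135 to $179,540 for the fifth earnings decile reported below.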
The authors project retirement savings, by lifetime earnings decile, for individuals retiring in each decade between 2000 and 2040. For workers in the fifth earnings decile retiring in 2030 at age 65, the authors project a mean DC plan balance of $272,135 in 2000 dollars, and $895,179 for the highest earnings decile (see table 14). Earners in the lowest and second deciles, however, have projected average balances of only $1,372 and $21,917, respectively. The projected average DC plan assets for 2030 retirees fall to $179,540 for the fifth decile of earnings, $614,789 for the highest decile, and $810 for the lowest decile when the authors assume an annual rate of return 300 basis points below the historic rate of return (see table 15). Finally, a 2007 study by William Even and David Macpherson estimates replacement rates for those continuously enrolled in a DC plan between 36 and 65 years of age. The authors simulate a sample using the SCF, and generate an age-earnings profile for their sample using data on pension-covered workers in the 1989 SCF. The authors also use the SCF to generate annual contributions to DC plans, which are estimated using a person’s earnings, age, education, gender, race, ethnicity, marital status, union coverage, and firm size. The authors also create an artificial sample for workers who are predicted to be eligible for a DC plan, but choose not to participate. Finally, the authors assume three different rates of return on pension contributions: a 3 percent rate of return based on historical returns on government bonds; a historical-returns portfolio based on an account mix of 75 percent in stocks split between large and small capital equities, and 25 percent split between long-term corporate bonds, long-term government bonds, midterm government bonds, and Treasury bills; and a 6.5 percent real rate of return based on the average real rate of return on DC plans from 1985 to 1994 for plans with over 100 participants.
In calculating annuity rates, the authors rely on mortality tables for group annuitants as opposed to the population as a whole, and do not include the charge the company makes for marketing and administrative expenses. The authors find that replacement rates vary by income distribution. For example, low-income workers who are continuously enrolled in a DC plan at the median of the replacement rate distribution are estimated to have a 30 percent replacement rate (see table 16). The average replacement rate for such workers is 44 percent. Middle-income and high-income workers have median replacement rates of 31 and 35 percent, respectively. The authors’ estimates are likely higher than ours because the authors assume continuous enrollment. In addition to the contact above, Charles A. Jeszeck, Mark M. Glickman, Katherine Freeman, Leo Chyi, Charles J. Ford, Charles Willson, Edward Nannenhorn, Mark Ramage, Joe Applebaum, and Craig Winslow made important contributions to this report.

Over the last 25 years, pension coverage has shifted primarily from "traditional" defined benefit (DB) plans, in which workers accrue benefits based on years of service and earnings, toward defined contribution (DC) plans, in which participants accumulate retirement balances in individual accounts. DC plans provide greater portability of benefits, but shift the responsibility of saving for retirement from employers to employees. This report addresses the following issues: (1) What percentage of workers participate in DC plans, and how much have they saved in them? (2) How much are workers likely to have saved in DC plans over their careers and to what degree do key individual decisions and plan features affect plan saving? (3) What options have been recently proposed to increase DC plan coverage, participation, and savings?
GAO analyzed data from the Federal Reserve Board's 2004 Survey of Consumer Finances (SCF), the latest available, utilized a computer simulation model to project DC plan balances at retirement, reviewed academic studies, and interviewed experts. GAO's analysis of 2004 SCF data found that only 36 percent of workers participated in a current DC plan. For all workers with a current or former DC plan, including rolled-over retirement funds, the total median account balance was $22,800. Among workers aged 55 to 64, the median account balance was $50,000, and those aged 60 to 64 had $60,600. Low-income workers had less opportunity to participate in DC plans than the average worker, and when offered an opportunity to participate in a plan, they were less likely to do so. Modest balances might be expected, given the relatively recent prominence of 401(k) plans. Projections of DC plan savings over a career for workers born in 1990 indicate that DC plans could on average replace about 22 percent of annualized career earnings at retirement for all workers, but projected "replacement rates" vary widely across income groups and with changes in assumptions. Projections show almost 37 percent of workers reaching retirement with zero plan savings. Projections also show that workers in the lowest income quartile have projected replacement rates of 10.3 percent on average, with 63 percent of these workers having no plan savings at retirement, while highest-income workers have average replacement rates of 34 percent. Assuming that workers offered a plan always participate raises projected overall savings and reduces the number of workers with zero savings substantially, particularly among lower-income workers. Recent regulatory and legislative changes and proposals could have positive effects on DC plan coverage, participation, and savings, some by facilitating the adoption of automatic enrollment and escalation features.
Some options focus on encouraging plan sponsorship, while others would create accounts for people not covered by an employer plan. Our findings indicate that DC plans can provide a meaningful contribution to retirement security for some workers but may not ensure the retirement security of lower-income workers. |
The Trade Adjustment Assistance (TAA) program is the federal government’s primary program specifically designed to provide assistance to workers who lose their jobs as a result of international trade. In addition, to assist U.S. domestic industries injured by unfair trading practices or increases in certain fairly traded imports, U.S. law permits the use of trade remedies, such as duties on imported products. Currently, Labor certifies workers for TAA on a layoff-by-layoff basis. The process for enrolling trade-affected workers in the TAA program begins when a petition for TAA assistance is filed with Labor on behalf of a group of workers. Petitions may be filed by the employer experiencing the layoff, a group of at least three affected workers, a union, or the state or local workforce agency. Labor investigates whether a petition meets the requirements for TAA certification and is required to either certify or deny the petition within 40 days of receiving it. The TAA statute lays out certain basic requirements for petitions to be certified, including that a significant proportion of workers employed by a company be laid off or threatened with layoff and that affected workers must have been employed by a company that produces articles. In addition to meeting these basic requirements, a petition must demonstrate that the layoff is related to international trade in one of several ways: Increased imports—imports of articles that are similar to or directly compete with articles produced by the firm have increased, the sales or production of the firm has decreased, and the increase in imports has contributed importantly to the decline in sales or production and the layoff or threatened layoff of workers. 
Shift of production—the firm has shifted production of an article to another country, and either: the country is party to a free trade agreement with the United States; or the country is a beneficiary under the Andean Trade Preference Act, the African Growth and Opportunity Act, or the Caribbean Basin Economic Recovery Act; or there has been or is likely to be an increase in imports of articles that are similar to or directly compete with articles produced by the firm. Affected secondarily by trade—workers must meet one of two criteria: Upstream secondary workers—affected firm produces and supplies component parts to another firm that has experienced TAA-certified layoffs; parts supplied to the certified firm constituted at least 20 percent of the affected firm’s production, or a loss of business with the certified firm contributed importantly to the layoffs at the affected firm. Downstream secondary workers—affected firm performs final assembly or finishing work for another firm that has experienced TAA-certified layoffs as a result of an increase in imports from or a shift in production to Canada or Mexico, and a loss of business with the certified firm contributed importantly to the layoffs at the affected firm. Labor investigates whether each petition meets the requirements for TAA certification by taking steps such as surveying officials at the petitioning firm, surveying its customers, and examining aggregate industry data. In the surveys, Labor obtains information on whether the firm is now importing products that it had once produced or whether its customers are now importing products that the firm produced. They also obtain information on whether the firm has moved or is planning to move work overseas and, to identify a secondary impact, whether the layoff occurred due to loss of business with a firm that was certified for TAA. 
When Labor has certified a petition, it notifies the relevant state, which has responsibility for contacting the workers covered by the petition, informing them of the benefits available to them, and telling them when and where to apply for benefits. If Labor denies a petition for TAA assistance, the workers who would have been certified under the petition have two options for challenging this denial. They may request an administrative reconsideration of the decision by Labor. To take this step, workers must provide reasons why the denial is erroneous based on either a mistake or misinterpretation of the facts or the law itself and must mail their request to Labor within 30 days of the announcement of the denial. Workers may also appeal to the U.S. Court of International Trade for judicial review of Labor’s denial. Workers must appeal a denial to the Court within 60 days of either the initial denial or a denial following administrative reconsideration by Labor. Under TAA, workers certified as eligible for the program may have access to a variety of benefits: Training for up to 130 weeks, including 104 weeks of vocational training and 26 weeks of remedial training, such as English as a second language or adult basic education. Extended income support for up to 104 weeks beyond the 26 weeks of unemployment insurance (UI) benefits available in most states. Job search and relocation benefits fund participants’ job searches in a different geographical area and relocation to a different area to take a job. A wage insurance benefit, known as the Alternative Trade Adjustment Assistance (ATAA) program, pays older workers who find a new job at a lower wage 50 percent of the difference between their new and old wages up to a maximum of $10,000 over 2 years. A health coverage benefit, known as the Health Coverage Tax Credit (HCTC), helps workers pay for health care insurance through a tax credit that covers 65 percent of their health insurance premiums. 
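The ATAA wage-insurance and HCTC benefits described above reduce to simple arithmetic. The sketch below uses the 50 percent rate, the $10,000 two-year cap, and the 65 percent credit from the text; treating the wage gap as a flat annual amount paid for 2 years is a simplifying assumption.

```python
# ATAA wage insurance and HCTC arithmetic, per the figures in the text.
# The flat annual payment schedule is an illustrative simplification.

ATAA_RATE = 0.50      # 50 percent of the old-to-new wage gap
ATAA_CAP = 10000.0    # maximum payout over 2 years
HCTC_RATE = 0.65      # tax credit covers 65 percent of premiums

def ataa_benefit(old_annual_wage, new_annual_wage, years=2):
    """Total ATAA payment: half the annual wage gap, for 2 years, capped."""
    gap = max(old_annual_wage - new_annual_wage, 0.0)
    return min(ATAA_RATE * gap * years, ATAA_CAP)

def hctc_credit(annual_premium):
    """Tax credit covering 65 percent of health insurance premiums."""
    return HCTC_RATE * annual_premium

# A worker rehired at $34,000 after earning $40,000: half of the $6,000 gap
# for 2 years is $6,000, under the cap; a $14,000 gap would hit the cap.
assert ataa_benefit(40000.0, 34000.0) == 6000.0
assert ataa_benefit(40000.0, 26000.0) == 10000.0
```

Note that the cap binds whenever the annual wage gap exceeds $10,000, so larger wage losses are not fully insured under this benefit.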
In addition, case managers provide vocational assessments and counseling to help workers enroll in the program and decide which services or benefits are most appropriate. Local case managers also refer workers to other programs, such as the Workforce Investment Act, for additional services. The United States and many of its trading partners have used laws known as “trade remedies” to mitigate the adverse impact of certain trade practices on domestic industries and workers, notably dumping—when a foreign firm sells a product in the United States at a price below fair market value—and foreign government subsidies that lower producers’ costs or increase their revenues. In both situations, U.S. law provides that if dumped or subsidized imports injure a domestic industry, a duty intended to counter these advantages be imposed on imports. Such duties are known as anti-dumping and countervailing duties. In addition, in the event of an increase in imports of a certain product, safeguards, such as quotas or tariffs, may be applied to these products to provide an opportunity for domestic industries to adjust to increasing imports. As of March 30, 2007, there were 280 antidumping and countervailing duty orders, and according to officials at the ITC, there were no safeguard measures in place. The process for imposing a trade remedy begins when a domestic producer files for relief or when the Department of Commerce (Commerce) initiates the process, followed by two separate investigations: one by Commerce to determine if dumping or subsidies are occurring, and the other by the ITC to determine whether a domestic U.S. industry is materially injured by such unfairly traded imports or, in the case of safeguards, experiences serious injury from a rise in imports. 
As a result of an affirmative determination by both Commerce and the ITC in an antidumping or countervailing duty investigation, a duty is imposed on the imported good that can reflect the difference between the price in the foreign market and the price in the U.S. market, known as the “dumping margin,” or the amount of the foreign subsidy. In the case of an affirmative determination in a safeguard investigation, the ITC provides the President with one or more recommendations for remedying the situation, such as a tariff or quota on an imported product. The President may implement or modify the recommendations, or take no action due to U.S. economic or national security interests. Labor certified two-thirds of the petitions that it investigated over the past 3 fiscal years, certifying nearly 4,700 petitions covering an estimated 400,000 workers (see table 1). Over the past 3 fiscal years, the number of petitions certified has declined 17 percent, from nearly 1,700 in fiscal year 2004 to 1,400 in fiscal year 2006. This decline parallels a decline in the number of petitions filed. Labor has generally processed petitions in a timely manner over the past 3 fiscal years. Labor’s average processing time has remained relatively steady, taking on average 32 days to conduct an investigation and determine whether to certify or deny the petition. Labor met the requirement to process petitions within 40 days for 77 percent of petitions it investigated during fiscal years 2004 to 2006 (see fig. 1). Labor most often took only an additional day to process the remaining petitions, and 95 percent were completed within 60 days. Labor officials said that they are not always able to meet the 40-day time frame because they sometimes do not receive necessary information in a timely manner from company officials. In fiscal year 2006, the most common reason petitions were denied was that workers were not involved in producing an article, a basic requirement of the TAA program.
Of the more than 800 petitions filed in fiscal year 2006 that were denied, 359 (44 percent) were denied for this reason (see fig. 2). Of those petitions denied because workers did not produce articles, most came from two industries: business services, such as computer programming, and airport-related services, such as aircraft maintenance (see app. II for the complete list of industries that had petitions denied, by reason for the denial). During the past 3 fiscal years, workers appealed decisions in 16 percent of the approximately 2,600 petitions that Labor initially denied, with the vast majority appealed to Labor. Labor’s decisions were reversed in one-third of the appeals (see fig. 3). Labor officials told us that appeals are often reversed because Labor receives new information from petitioners or company officials, as part of the appeals process, that justifies certifying the petition. Although few denied petitions are appealed to the U.S. Court of International Trade—42 in the last 3 fiscal years—many of the recent appeals concern the issue of whether workers were involved in the production of articles. In fiscal years 2005 and 2006, Labor’s original denial was reversed in 13 cases appealed to the Court, and most of these cases addressed the issue of whether workers produced articles. Some of these cases concerned workers who produced software, which Labor had regarded as a service when the software was not contained in a physical medium, such as a CD-ROM. In 2006, Labor revised its policy, stating that software could be considered an intangible article because it would have been considered an article if it had been produced in a form such as a CD-ROM. Following this decision, a Labor official reported that Labor had certified 12 of 21 petitions investigated in the software and computer-related services industries.
An industry certification approach based on three petitions certified within 180 days would likely increase the number of workers eligible for TAA, but presents some design and implementation challenges. For example, among the industries for which we could obtain complete data, we found that the number of additional workers eligible for TAA in those industries could more than double if no additional criteria were used or expand by less than 10 percent with relatively restrictive criteria. However, such an approach presents some design and implementation challenges. For example, designing the specific criteria an industry must meet to be certified could be challenging due to the possibility of making workers who lose their jobs for reasons other than trade eligible for TAA. In addition, it may be challenging to ensure that all workers in certified industries are notified of their potential eligibility for TAA, verify workers’ eligibility, and initiate the delivery of services to workers. From 2003 to 2005, 222 industries had three petitions certified within 180 days and therefore would have triggered an investigation to determine whether an entire industry should be certified, if such an approach had been in place at that time. These industries represented over 40 percent of the 515 industries with at least one TAA certification in those 3 years and included 71 percent of the workers estimated to be certified for TAA from 2003 to 2005. The 222 are a diverse set of industries, including textiles, apparel, wooden household furniture, motor vehicle parts and accessories, certain plastic products, and printed circuit boards (see app. III for a list of the 222 industries). The proposals for this approach include a requirement that an investigation be initiated after an industry meets the three certifications in 180 days criterion. 
This investigation would use some additional criteria to determine whether these certifications represent a broad industrywide phenomenon or just a collection of firms experiencing similar pressures from foreign trade. As a result, not all 222 industries would likely be certified industrywide. The additional criteria that an industry would have to meet to be certified have not yet been specified, but they could include factors such as the extent to which an industry has been impacted by imports, changes in production levels in the industry, or changes in employment levels. The number of workers that would become eligible for TAA through an industry certification approach depends on what additional criteria are established. For example, we analyzed 69 industries in the manufacturing sector for which we had comprehensive data on petitions, unemployment, trade, and production. These industries represent about one-third of the 222 industries that would have been eligible for industrywide certification (13 percent of industries with petitions certified). If there were no additional criteria beyond the three-petition threshold and all of the 69 industries had been certified, the number of workers eligible for TAA in these industries would have more than doubled over the number that were actually certified under the current layoff-by-layoff process. However, if certification were limited to those industries that also had a 10 percent increase in the import share of the domestic market over a 1-year period, we estimated that the increase in eligible workers in these industries would have been more modest, at roughly 70 percent. Under a slightly more restrictive criterion—a 15 percent increase in the import share of the domestic market—the increase in the number of eligible workers would be less, an estimated 39 percent in those 69 industries (see fig. 4). If we were able to analyze the program as a whole, the magnitude of the increases would likely be different.
This would occur, in part, because the number of workers would increase only in those industries that met the three-petition criterion and would not increase in those that did not meet the criterion. Thus the multiplier we developed for the 69 industries could not be applied broadly to all 515 industries with certified petitions. More stringent criteria would result in a smaller increase in the number of workers eligible for TAA. For example, if over a 3-year period, an industry were required to have a 15 percent increase in the import share of the domestic market in 1 year, as well as increases in the import share during the 2 other years, we estimated that there would have been a 9 percent increase in the number of workers eligible for TAA in the 69 industries we analyzed. (For further analysis of the 69 industries, see app. IV.) Although industry certification based on three petitions certified in 180 days is likely to increase the number of workers eligible for TAA, it also presents several potential design and implementation challenges. Designing additional criteria for certification. Any industrywide approach raises the possibility of certifying workers who were not adversely affected by trade. Even in industries that are heavily impacted by trade, workers could lose their jobs for other reasons, such as the work being relocated domestically. For example, Labor officials told us that they have denied petitions in the apparel industry, which has been heavily impacted by trade, because the layoff was not related to trade but occurred as a result of work being moved to another domestic location. The risk of certifying non-trade-affected workers increases with more lenient criteria for industrywide certification. On the other hand, narrow criteria may limit the potential benefits of industry certification because few industries would be certified.
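The screening rules under discussion reduce to two simple checks: three certifications within 180 days to trigger an investigation, plus a rise in the industry's import share of the domestic market as one possible additional criterion. In this hedged sketch the data shapes are illustrative, and the 10 and 15 percent thresholds are read as relative increases in import share; the proposals do not specify relative versus percentage-point changes.

```python
# Illustrative screening rules for industrywide certification. The day-offset
# representation of certification dates and the relative reading of the
# import-share threshold are assumptions for this sketch.

def triggers_investigation(cert_days, window_days=180, needed=3):
    """cert_days: sorted day offsets of an industry's certified petitions.
    True if any `needed` consecutive certifications fall within the window."""
    for i in range(len(cert_days) - needed + 1):
        if cert_days[i + needed - 1] - cert_days[i] <= window_days:
            return True
    return False

def meets_import_criterion(prior_share, current_share, threshold=0.10):
    """Shares in percent of the domestic market; True if the import share
    rose by at least the threshold, read as a relative increase."""
    return (current_share - prior_share) / prior_share >= threshold

# Certifications on days 0, 90, and 170 fall within one 180-day window.
assert triggers_investigation([0, 90, 170, 400])
assert not triggers_investigation([0, 200, 400])
# An import share rising from 20 to 23 percent is a 15 percent relative rise.
assert meets_import_criterion(20.0, 23.0, threshold=0.15)
```

Tightening the threshold or requiring sustained increases over several years, as in the 3-year variant described above, simply adds further boolean conditions to this filter, shrinking the set of qualifying industries.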
Furthermore, using the same thresholds for all industries would not take into account industry-specific patterns in trade and other economic factors. For example, the import share of the domestic market may be volatile and change significantly from year to year in some industries, while other industries may experience smaller year-to-year growth in imports that could represent a significant impact over time. Determining appropriate duration of certification. Determining the length of time that an industry would be certified may also present challenges. If the length of time is too short, Labor may bear the administrative burden of frequently re-investigating industries that continue to experience trade-related layoffs after the initial certification expires. In addition, a shorter duration may make it difficult for workers to know whether their industry is certified at the particular time that they are laid off. As a result, workers may not know whether they need to file a regular TAA petition to become certified. However, if the time period is too long, workers may continue to be eligible for TAA even if conditions change and an industry is no longer adversely affected by trade. Defining the industries. How the industries are defined would significantly affect the number of workers who would become eligible for TAA through an industry certification approach. Our analysis defined industries according to industry classification systems used by government statistical agencies. However, some of these industry categories are broad and may encompass products that are not adversely affected by trade. On the other hand, certain products within an industry that, as a whole, does not show evidence of a trade impact may have been adversely affected by trade.
For example, the men’s footwear industry might not be classified as adversely trade-impacted because it did not have a 15 percent increase in the import share of the domestic market over a 1-year period, but it is possible that certain types of men’s footwear, such as casual shoes or boots, could be adversely impacted by trade. Narrower definitions would reduce the possibility of certifying workers who are not adversely affected by trade, but doing so would cover fewer workers and could increase the administrative burden for Labor because it might have to investigate more industries. Notifying workers and initiating the delivery of services. Notifying workers of their eligibility for TAA has been a challenge and would continue to be under industry certification. Under the current certification process, workers are linked to services through the petition process. The specific firm is identified on the petition application, and state and local workforce agencies work through the firm to reach workers in layoffs of all sizes. However, getting lists of employees affected by layoffs and contacting them is sometimes a challenge for states and would remain so under industry certification. For industry certification, however, there are no such procedures in place to notify all potentially eligible workers in certified industries. For large layoffs in a certified industry, state and local workforce agencies could potentially use some of the processes they currently have in place to connect with workers, but it is not apparent that there would be a built-in link to workers in small layoffs. In large layoffs, firms with 100 or more employees are generally required to provide 60 days' advance notice to state and local workforce agencies, who then work with the firm to provide rapid response services and inform workers about the various services and benefits available, including TAA.
However, in smaller layoffs in certified industries, or when firms do not provide advance notice, workforce agencies would not know that the layoff has occurred and therefore would not be able to notify the workers of their eligibility for TAA. Verifying worker eligibility. Verifying that a worker was laid off from a job in a certified industry to ensure that only workers eligible for TAA receive TAA benefits may be more of a challenge under industry certification than under the current system. For example, it may be difficult to identify the specific workers who made a product in the certified industry if their employer also makes products that are not covered under industry-wide certification. In order to realize one of the potential benefits of industry certification—reduced processing time— this verification process would need to take less time than it takes workers to become certified through the layoff-by-layoff certification process. As we noted, Labor takes on average 32 days to complete its investigation of a petition, but it generally takes additional time for individuals to be notified of their eligibility. In addition, determining which entity would conduct this verification may also present challenges. A centralized process conducted by Labor would likely be unwieldy, while verification by state or local workforce agencies could take less time but ensuring consistency across the nation might prove challenging. Using trade remedies to certify industries for TAA could expand eligibility for workers in some industries, but the extent is uncertain and there are challenges associated with using trade remedies to identify trade-related job losses. Such an approach could expand eligibility because some trade remedies may cover areas in which there have been few or no certified TAA petitions. 
However, some trade remedies are for products that may already be covered by TAA petitions, so the number of workers eligible for TAA may not increase substantially in these areas. It is difficult to estimate the impact of this approach on eligibility because trade remedies are applied to specific products, and data on unemployment by product do not exist. In addition, this approach presents some of the same challenges as with an industry certification approach based on three petitions certified in 180 days. For example, workers who did not lose their jobs due to international trade could be made eligible for TAA in part because trade remedy investigations are focused on injury to an industry as a whole and not principally on employment impacts. Using trade remedies for industrywide certification could result in expanded worker eligibility for TAA in a number of industries. For example, 280 antidumping and countervailing duty orders covering over 100 products were in place, as of March 30, 2007. The number of workers eligible for TAA would increase under this approach in areas in which there have been few or no TAA petitions. For example, even though the ITC found that the domestic industry producing certain kinds of orange juice had been materially injured by imports, there do not appear to have been any certified TAA petitions for workers producing orange juice. However, the number of workers eligible for TAA may not increase substantially in certain areas in part because of overlap between trade remedies and TAA petitions. For example, over half of outstanding antidumping and countervailing duty orders are for iron and steel products, which have also received hundreds of petitions under TAA. However, even where the products covered by trade remedies and TAA overlap, eligibility could expand to some unemployed workers whose firms did not submit a petition or did not qualify under current TAA certification criteria. 
In addition, industries with trade remedies may not necessarily have experienced many trade-related job losses because the ITC investigates whether an industry as a whole has been injured and does not specifically focus on employment, according to an ITC official. Whereas Labor investigates whether increased imports contributed importantly to a layoff or threat of layoff, the ITC looks at a wide range of economic factors, including employment as well as sales, market share, productivity, and profitability. It is difficult to estimate the extent to which industry certification based on trade remedies would increase the number of workers eligible for TAA because trade remedies are imposed on specific products coming from specific U.S. trade partners, and data are not available on job losses at such a detailed level. The product classifications for a given trade remedy can be very narrow, such as “carbazole violet pigment 23” or “welded ASTM A-312 stainless steel pipe.” Estimating the increase in the number of eligible workers would require unemployment information categorized by individual product, and these data do not exist. An approach using trade remedies presents some of the same challenges as an industry certification approach based on three petitions certified in 180 days. Workers who did not lose their jobs due to trade could possibly be made eligible for TAA under a trade remedy approach for several reasons. First, trade remedies are not necessarily an indicator of recent trade-related job losses, in part because the ITC’s process is not employment-focused and even recent injury determinations can be based on several prior years of data. For example, officials at Labor told us that trade remedies have not been useful in their investigations of TAA petitions because they are based on several years’ worth of information and can be unrelated to current industry and employment conditions.
Furthermore, trade remedies are intended to mitigate the trade-related factors that caused the injury to the industry, so employment conditions in an industry could improve after the trade remedy is in place. In addition, as with the other industrywide approach, notifying workers in industries with trade remedies and connecting them with services would also be a challenge, as well as verifying that they were laid off from a certified industry. The verification process could be particularly challenging with an approach based on trade remedies because of the narrow product classifications of some trade remedy products. In firms that make multiple products, for example, more than one type of stainless steel pipe, it may be difficult to identify which specific workers worked on the products subject to trade remedies. An ITC official also expressed concern that seeking a waiver to share information collected during injury investigations with Labor could hamper ITC’s ability to collect confidential business information from firms. By statute, the ITC cannot share with other government agencies business proprietary information submitted to it in a trade remedies investigation without a waiver from the submitter. We provided a draft of this report to the Department of Labor and the International Trade Commission. The Department of Labor did not comment. The International Trade Commission provided technical comments, which we incorporated into the report as appropriate. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of the report until 30 days from its issue date. At that time, we will send copies of this report to the Secretary of Labor, the Chairman of the International Trade Commission, relevant congressional committees, and other interested parties. Copies will also be made available to others upon request. 
The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. Key contributors to this report are listed in appendix V. Our objectives were to determine: (1) trends in Labor’s certification of Trade Adjustment Assistance (TAA) petitions, (2) the extent to which industry certification based on three petitions certified within 180 days would increase the number of workers eligible for TAA, and (3) the extent to which certification of industries subject to trade remedies would increase the number of eligible workers. We also identified potential challenges with an industry certification approach. To address these questions, we analyzed Labor’s data on TAA petitions, the Bureau of Labor Statistics’ Mass Layoff Statistics data, the Census Bureau’s data on trade and production, and the International Trade Commission’s data on trade remedies. In examining recent trends in Labor’s certification of TAA petitions, we analyzed Labor’s petitions data from fiscal years 2004 to 2006. However, in estimating the extent to which industry certification would increase the number of eligible workers, we analyzed data from calendar years 2003 to 2005 because that was the most recent time period that trade, production, and Mass Layoffs Statistics data were all available. In addition, we interviewed officials at Labor and the International Trade Commission. We conducted our work from January to June 2007 in accordance with generally accepted government auditing standards. To determine trends in Labor’s certification of TAA petitions, we analyzed Labor’s data for petitions filed from fiscal years 2004 to 2006.
We assessed the reliability of key data by interviewing Labor officials knowledgeable about the data, observing a demonstration of the database, reviewing our prior assessments of the data, and conducting edit checks. For a small number of petitions, we identified logical inconsistencies or missing values in the data. We brought these issues to the attention of Labor officials and worked with them to correct the issues before conducting our analysis. Complete data on reasons petitions were denied were only available for fiscal year 2006 because Labor only began to collect the data in 2005. As a result, we reported information on reasons petitions were denied for only fiscal year 2006. In analyzing the number of petitions denied for TAA that were appealed to Labor, we did not include in our analysis petitions that were appealed only for a denial of Alternative Trade Adjustment Assistance (ATAA) wage insurance benefits. These petitions had been certified for TAA after Labor’s initial investigation but denied for wage insurance benefits. In analyzing data on petitions that were appealed to the U.S. Court of International Trade, we compared Labor’s data to Court documents. We determined that Labor’s data on appeals to the Court were not complete. As a result, we supplemented Labor’s data with a review of Court documents. At the time of our review, some petitions filed during fiscal years 2004 to 2006 may still have been undergoing an appeals process. Our analysis of petition decisions and appeals reflect the outcomes of petitions at the time of our review. Despite these limitations, we determined that Labor’s petitions data were sufficiently reliable for the purposes of this report, which was to provide information on trends in Labor’s certification of petitions. 
To estimate the extent to which industry certification based on three petitions certified in 180 days would increase the number of workers eligible for TAA, we first analyzed Labor’s data on petitions certified from calendar years 2003 to 2005 to identify which industries would have had three petitions certified in 180 days in that time frame. We then analyzed the Bureau of Labor Statistics’ Extended Mass Layoff Statistics survey to determine the number of workers who had been laid off in those industries. We used these data as a proxy for the full number of workers that would have been eligible for TAA had an industrywide certification process been in place at the time. We also collected Census trade data (imports and exports) by industry and Census production data. Of the 222 industries that had three petitions certified in 180 days from 2003 to 2005, we were able to analyze 69 industries for which we had comprehensive data. The available data sources used different industry classification systems, which we matched to each other. Labor’s petitions data classified industries according to the Standard Industrial Classification System (SIC), while the Mass Layoff Statistics and Census’ trade and production data classified industries based on variations of the North American Industry Classification System (NAICS). In addition, our production data, from Census’ Annual Survey of Manufacturers, were limited to manufacturing industries. We were able to find complete and well-defined matches (one-to-one NAICS to SIC, or many-to-one NAICS to SIC) between the SIC- and NAICS-defined industries for 69 of the 222 industries. We could not use data on industries where a NAICS code corresponded to multiple SIC codes. Because the 69 industries were not drawn from a random sample, the results of our analysis are not necessarily representative of the entire 222 industries.
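The screening step described above, identifying industries with three petitions certified within a 180-day window, can be sketched as follows. This is a minimal illustration under stated assumptions, not GAO's actual program: the function name and data layout are invented, and it also applies the additional condition GAO used, that the qualifying petitions come from different companies.

```python
from datetime import date

def meets_industry_criterion(petitions, window_days=180, required=3):
    """Return True if any window of `window_days` days contains at least
    `required` certified petitions from distinct companies.

    `petitions` is a list of (certification_date, company_name) pairs
    for one industry's certified petitions (hypothetical layout).
    """
    by_date = sorted(petitions)
    for i, (window_start, _) in enumerate(by_date):
        companies = set()
        for cert_date, company in by_date[i:]:
            if (cert_date - window_start).days > window_days:
                break  # petitions are sorted, so this window is exhausted
            companies.add(company)
            if len(companies) >= required:
                return True
    return False
```

Under this sketch, three certifications from one company filing separate petitions for multiple locations would not qualify an industry, matching the rationale described above.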
In analyzing Labor’s petitions data to identify the industries that would have met the three petitions certified in 180 days criteria during 2003 to 2005, we added an additional criterion to our analysis, that the three petitions had to be from three different companies. When companies lay off workers at multiple divisions or locations, they may file separate petitions for each division or location. We added the criterion to prevent an industry from becoming eligible to be considered for certification when the three petitions were from the same company. In estimating the number of workers certified for TAA under the current program, we used estimates from Labor’s petitions data on the number of workers affected by a layoff at the time that TAA petitions are filed with the Department of Labor. At the time petitions are submitted, companies may not know exactly how many workers will be affected. The Department of Labor does not collect information on the number of workers ultimately certified. We used these data to estimate the increases in the number of workers who might be eligible for TAA with the addition of an industry certification approach. They should not be relied upon to support precise numbers on workers certified for TAA. In estimating the increase in worker eligibility, we used data from the Bureau of Labor Statistics’ Mass Layoff Statistics program to estimate the number of unemployed workers in industries that could have been eligible for industrywide certification. The Mass Layoff Statistics program collects reports on mass layoff actions that result in workers being separated from their jobs. Monthly mass layoff numbers are from establishments which have at least 50 initial claims for unemployment insurance (UI) filed against them during a 5-week period. 
Extended mass layoff numbers (issued quarterly) are from a subset of such establishments—private-sector nonfarm employers who indicate that 50 or more workers were separated from their jobs for at least 31 days. We used Extended Mass Layoffs to reduce the possibility of including workers who were on temporary layoff and subject to recall. Several limitations of the Mass Layoff Statistics data are relevant to this analysis. First, Mass Layoff Statistics only include workers from larger firms (firms with at least 50 employees). They also only include workers laid off through larger layoffs (layoffs of at least 50 employees). In 2003, there were 1,404,331 initial claims in the Extended Mass Layoffs. In contrast, estimated unemployment due to permanent layoffs in 2003 was 2,846,000, and total unemployment in 2003 was 8,774,000. Thus, workers involved in extended mass layoffs represented 49 percent of permanent layoffs and 16 percent of total unemployment in 2003. Second, the Bureau of Labor Statistics suppressed MLS data on the number of layoffs and the number of workers in the data they provided to us when the number of layoffs in an industry was less than three. Approximately 40 percent of the data were suppressed. The results reported here reflect the following imputation criterion. Suppressed values were imputed with the mean number of laid-off workers per layoff event within the broad industry group (by year), multiplied by 1.5. We also conducted sensitivity analysis where we used more conservative imputation criteria; results were not highly sensitive to the criterion we selected. In order to identify changes in trade related to individual industries, we calculated the share of imports in the domestic market for each year between 2002 and 2005. We defined the domestic market as U.S. domestic production (measured by shipments) minus exports plus imports.
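The domestic-market definition above implies a simple import-share calculation, sketched below with illustrative numbers. The function names and the input values are assumptions for illustration only; real inputs would be industry-level dollar values of shipments, exports, and imports, and the annual threshold is read here as a relative change in the share.

```python
def import_share(shipments, exports, imports):
    """Imports' share of the domestic market, where the domestic market
    is domestic production (shipments) minus exports plus imports."""
    domestic_market = shipments - exports + imports
    return imports / domestic_market

def percent_change(prev_share, curr_share):
    """Year-over-year percent change in the import share."""
    return (curr_share - prev_share) / prev_share * 100.0

# Illustrative (not actual) industry values, in millions of dollars
share_2002 = import_share(shipments=1000, exports=200, imports=300)  # 300/1100
share_2003 = import_share(shipments=950, exports=180, imports=400)   # 400/1170
change = percent_change(share_2002, share_2003)
```

An industry would pass a 10, 15, or 20 percent annual threshold in a given year when `percent_change` for that year exceeds the chosen value.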
We then calculated the change in the import share of the domestic market from 2002 to 2003, 2003 to 2004, and 2004 to 2005. We examined the distribution of these changes annually across industries and calculated sample statistics. The mean change for the 69 industries was 6.1 percent in 2003, 13.01 percent in 2004, and 3.5 percent in 2005. The standard deviation for these industries was 11.0 percent in 2003, 30.25 percent in 2004, and 13.74 percent in 2005. We also compared these statistics to those of the entire population of 222 industries and found that they were similar. Based on this analysis, we determined that using criteria of annual changes of 10, 15, and 20 percent for the share of imports in the domestic market was reasonable to illustrate the impact of changing criteria on the number of potential workers eligible for TAA. We also examined compound annual changes across all 4 years and found similar results. To determine the extent to which eligibility for TAA would expand if trade remedies were used to certify industries for TAA, we reviewed the industries and products covered by antidumping and countervailing duties. To the extent possible, we assessed areas of overlap between TAA petitions and trade remedies. We also compared the eligibility criteria for TAA with ITC’s process for determining if an industry has been injured. To identify potential challenges with industry certification, we interviewed officials at the Department of Labor and International Trade Commission. Tables 5 and 6 list the 222 industries that had three petitions certified in 180 days from 2003 to 2005. Table 5 lists the 69 industries we were able to analyze with trade and unemployment data, and table 6 lists the industries we were not able to analyze due to data limitations. To assess the sensitivity of the criteria to changes in the trade threshold, we analyzed the 69 industries for which we had complete data using a range of thresholds. 
We found that more stringent criteria sometimes resulted in appreciable differences in the number of workers eligible for TAA in those industries. For example, if, to be certified, an industry had to have not only a 10 percent increase in the import share of the domestic market in 1 year but also an increase in the import share during the 2 other years between 2003 and 2005, we estimated that there would have been a 24 percent increase in the number of workers eligible for TAA in the 69 industries we analyzed (see table 7). Because the 69 industries for which we had comprehensive data were not selected randomly, the results cannot be generalized to the entire group of 222 industries that met the three petitions certified in 180 days criteria, nor are they predictive of future levels of eligible workers. Dianne Blank, Assistant Director Yunsian Tai, Analyst-in-Charge Michael Hoffman, Rhiannon Patterson, and Timothy Wedding made significant contributions to all aspects of this report. In addition, Kim Frankena assisted with research on trade remedies, and Rachel Valliere provided writing assistance. Christopher Morehouse, Theresa Lo, Mark Glickman, Jean McSween, and Seyda Wentworth verified our findings. Trade Adjustment Assistance: Program Provides an Array of Benefits and Services to Trade-Affected Workers. GAO-07-994T. Washington, D.C.: June 14, 2007. Trade Adjustment Assistance: Changes Needed to Improve States’ Ability to Provide Benefits and Services to Trade-Affected Workers. GAO-07-995T. Washington, D.C.: June 14, 2007. Trade Adjustment Assistance: Changes to Funding Allocation and Eligibility Requirements Could Enhance States’ Ability to Provide Benefits and Services. GAO-07-701, GAO-07-702. Washington, D.C.: May 31, 2007. Trade Adjustment Assistance: New Program for Farmers Provides Some Assistance, but Has Had Limited Participation and Low Program Expenditures. GAO-07-201. Washington, D.C.: December 18, 2006.
Trade Adjustment Assistance: Labor Should Take Action to Ensure Performance Data Are Complete, Accurate, and Accessible. GAO-06-496. Washington, D.C.: April 25, 2006. Trade Adjustment Assistance: Most Workers in Five Layoffs Received Services, but Better Outreach Needed on New Benefits. GAO-06-43. Washington, D.C.: January 31, 2006. Trade Adjustment Assistance: Reforms Have Accelerated Training Enrollment, but Implementation Challenges Remain. GAO-04-1012. Washington, D.C.: September 22, 2004. | Trade Adjustment Assistance (TAA) is the nation's primary program providing job training and other assistance to manufacturing workers who lose their jobs due to international trade. For workers to receive TAA benefits, the Department of Labor (Labor) must certify that workers in a particular layoff have lost their jobs due to trade. Congress is considering allowing entire industries to be certified to facilitate access to assistance. GAO was asked to examine (1) trends in the current certification process, (2) the extent to which the proposed industry certification approach based on three petitions certified in 180 days would increase eligibility and identify potential challenges with this approach, and (3) the extent to which an approach based on trade remedies would increase eligibility and identify potential challenges. To address these questions, GAO analyzed data on TAA petitions, mass layoffs, trade, production, and trade remedies. GAO also interviewed Labor and ITC officials. GAO is not making recommendations at this time. Labor reviewed the report and did not provide comments. The ITC provided technical comments that have been incorporated as appropriate. During the past 3 fiscal years, Labor certified about two-thirds of TAA petitions investigated and generally processed petitions in a timely manner. Labor certified 4,700, or 66 percent, of the 7,100 petitions it investigated from fiscal years 2004 to 2006.
Labor took on average 32 days to make a certification decision and processed 77 percent of petitions within the required 40-day time frame. According to Labor officials, they were not always able to meet the 40-day time frame because they sometimes did not receive information from company officials in a timely manner. In fiscal year 2006, 44 percent of the petitions that Labor denied were denied because workers were not involved in the production of an article. An industry certification approach based on three petitions certified in 180 days would likely increase the number of workers eligible for TAA but presents some design and implementation challenges. However, the extent of the increase in eligible workers depends on the additional criteria, if any, industries would have to meet to be certified. From 2003 to 2005, 222 industries had three petitions certified within 180 days. Based on our analysis of 69 of these industries for which we could obtain complete data, the number of eligible workers in these industries could more than double if no additional criteria were used, but would expand by less than 10 percent if industries had to meet more restrictive criteria, such as demonstrated increases in the import share of the domestic market over a 3-year period. Designing the criteria presents challenges due to the possibility of making workers who lose their jobs for reasons other than trade eligible for TAA. Implementation challenges include notifying all workers of their potential eligibility, verifying their eligibility, and linking them with services. Using trade remedies to certify industries could also expand eligibility for workers in some industries, but challenges exist. While basing industry certification on trade remedies could expand eligibility in areas where there have been no TAA petitions, some trade remedies are for products already covered by TAA petitions, such as iron and steel products.
It is difficult to estimate the extent of the impact on worker eligibility because trade remedies are applied to specific products, and data on unemployment by product do not exist. This approach presents many of the same challenges as industry certification based on three petitions certified in 180 days. For example, workers who did not lose their jobs due to international trade could be made eligible for TAA because trade remedy investigations are not focused on employment. In addition, verifying workers' eligibility may be particularly challenging due to the narrow product classifications of some trade remedy products, such as carbazole violet pigment 23. In companies that make multiple products, it may be difficult to identify which specific workers made the product subject to trade remedies. |
ATF’s mission is to protect communities from violent criminals, criminal organizations, and illegal use and trafficking of firearms, among other things. To fulfill this mission, ATF has 25 field divisions located throughout the United States. To efficiently and effectively carry out its criminal enforcement responsibilities related to firearms, ATF maintains certain computerized information on firearms, firearms transactions, and firearms purchasers. To balance ATF’s law enforcement responsibility with the privacy of firearms owners, Congress has required FFLs to provide ATF certain information about firearms transactions and the ownership of firearms while placing restrictions on ATF’s maintenance and use of such data. In addition to its enforcement activities, ATF also regulates the firearms industry, including issuing firearms licenses to prospective FFLs, and conducting FFL qualification and compliance inspections. A critical component of ATF’s criminal enforcement mission is the tracing of firearms used in crimes to identify the first retail purchaser of a firearm from an FFL. The Gun Control Act of 1968, as amended, established a system requiring FFLs to record firearms transactions, maintain that information at their business premises, and make these records available to ATF for inspection and search under certain prescribed circumstances, such as during a firearms trace. The system was intended to permit law enforcement officials to trace firearms involved in crimes while allowing the records themselves to be maintained by the FFLs rather than by a governmental entity. Figure 1 shows one possible scenario in which a firearm is purchased at an FFL, the FFL maintains records on the purchase, the firearm is used in a crime, and a law enforcement agency recovers the firearm and submits it for tracing. 
Through the use of these records maintained by FFLs and provided to ATF in certain circumstances, ATF provides firearms tracing services to federal, state, local, and foreign law enforcement agencies. The objective of the trace is to identify the first retail purchaser of the firearm. To carry out its firearms tracing responsibilities, ATF maintains a firearms tracing operation at NTC in Martinsburg, West Virginia. As shown in figure 2, NTC traces firearms suspected of being involved in crimes to the first retail purchaser to assist law enforcement agencies in identifying suspects. NTC generally receives trace requests through eTrace, a web-based submission system, but also receives requests by fax, telephone, and mail. To conduct a trace, NTC must receive the recovered firearm’s description—including manufacturer and serial number—from the law enforcement agency. NTC determines the ownership of the firearm by first conducting automated checks of data systems that are maintained at NTC. If these automated checks do not identify a matching firearm description within the systems, an NTC analyst contacts the chain of distribution for the firearm—the series of businesses that are involved in manufacturing and selling the firearm. For example, after automated data system checks, an NTC analyst may call the manufacturer of the firearm, who informs NTC that the firearm was sold to a certain distributor. The NTC analyst will then call that distributor, and so on until the individual is identified. For many traces, an FFL in the chain of distribution has gone out of business, so an NTC analyst must consult the FFL’s out-of-business records, which are also maintained by NTC. ATF documents each trace request and its results, and provides that information to the law enforcement requester. ATF considers a request completed when it traces the firearm to a retail purchaser, or when it cannot identify the purchaser for various reasons.
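The tracing steps described above, automated system checks first, then calls down the chain of distribution, can be sketched as a simple loop. This is a hypothetical illustration only: the function, the dictionary-based record layout, and the names are assumptions, not ATF's actual systems, and out-of-business licensees' delivered records are folded into the same lookup here for simplicity.

```python
def trace_firearm(serial, automated_systems, ffl_records, manufacturer):
    """Trace a firearm's serial number to its first retail purchaser.

    `automated_systems` stands in for NTC's automated checks: a list of
    dicts mapping serial numbers to purchaser names.  `ffl_records` maps
    each licensee's name to its transaction records: for a serial number,
    either {"purchaser": name} for a retail sale or {"sold_to": next_ffl}
    for a transfer further down the chain of distribution.
    """
    # Step 1: automated checks of data systems maintained at NTC
    for system in automated_systems:
        if serial in system:
            return system[serial]

    # Step 2: contact each business in the chain, starting with the
    # manufacturer, until the retail purchaser is identified
    current = manufacturer
    while current in ffl_records:
        record = ffl_records[current].get(serial)
        if record is None:
            return None                 # records insufficient: trace incomplete
        if "purchaser" in record:
            return record["purchaser"]  # first retail purchaser identified
        current = record["sold_to"]     # next licensee in the chain
    return None
```

As in the process described above, a trace can complete either by identifying the purchaser or by exhausting the available records without an identification.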
For example, the description of the firearm as submitted by the requester may not have contained sufficient information to perform a trace. For fiscal year 2015, ATF received a total of 373,349 trace requests, completed 372,992 traces, and identified a retail FFL or a purchaser of the traced firearm in about 68 percent of the completed traces. Since the passage of the Gun Control Act of 1968, Congress has passed provisions that place restrictions on ATF’s handling of FFL records. In 1978, citing to the general authorities contained in the Gun Control Act, ATF proposed regulations that would have required FFLs to report most of their firearms transactions to ATF through quarterly reports. Under the proposed regulations, these FFL reports of sales and other dispositions would not have identified a nonlicensed transferee, such as a retail purchaser, by name and address. These proposed regulations prompted concerns from those who believed that the reporting requirements would lead to the establishment of a system of firearms registration. Since then, Congress has placed restrictions on ATF’s use of funds to consolidate or centralize firearms records, as discussed below. In 1978, the Treasury, Postal Service, and General Government Appropriations Act, 1979, prohibited the use of funds for administrative expenses in connection with the consolidation or centralization of FFL records at the agency, or the final issuance of the 1978 proposed regulations. This restriction was included in each of ATF’s annual appropriations through fiscal year 1993. In 1993, the Treasury, Postal Service, and General Government Appropriations Act, 1994, removed the reference to the 1978 proposed rules, but expanded the prohibition to include the consolidation or centralization of portions of records, and to apply to the use of funds for salaries as well as administrative expenses. This provision was included in each of ATF’s annual appropriations through fiscal year 2011. 
“[T]hat no funds appropriated herein or hereafter shall be available for salaries or administrative expenses in connection with consolidating or centralizing, within the Department of Justice, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees.”

ATF collects and maintains data from the firearms industry to carry out its criminal and regulatory enforcement responsibilities, and has established 25 national ATF data systems relating to firearms to maintain the data it collects. Of these 25 data systems, the following 16 data systems contain retail firearms purchaser information:

1. Access 2000 (A2K)
2. ATF NICS Referral
3. Firearm Recovery Notification Program (FRNP)
4. Firearms and Explosives Import System
5. Firearms Information Reporting System
6. Firearms Tracing System
9. Multiple Sales (MS)
10. National Firearms Act System / National Firearms Registration and Transfer Record System
14. Out-of-Business Records Imaging System (OBRIS)
15. Suspect Person Database

More details on these systems are provided in appendix II. From the 16 data systems that contain retail purchaser information, we selected 4 systems for an in-depth review of compliance with the appropriations act restriction on consolidation or centralization, and adherence to ATF policies: OBRIS, A2K, FRNP, and MS, including Demand Letter 3. See appendix I for our selection criteria. These systems are operated and maintained by NTC and play a significant role in the firearms tracing process as shown in figure 3. OBRIS is a repository of nonsearchable images of firearms records that allows NTC employees to manually search for and retrieve records during a firearms trace using an FFL number and a firearm description (e.g., serial number). Out-of-business records are integral to the firearms tracing process.
According to ATF officials, in approximately 35 to 38 percent of trace requests, there is at least one entity in the chain of distribution that has gone out of business. Therefore, in more than one-third of firearms trace requests, NTC analysts must consult OBRIS at least once. According to ATF data, as of May 5, 2016, there were 297,468,978 images of firearms records in OBRIS. Further, in fiscal year 2015, NTC accomplished 134,226 of 372,992 total completed trace requests using OBRIS. OBRIS was developed in 2006 to assist NTC with maintaining the out-of-business FFL records that are received each year. By statute, when FFLs discontinue their businesses and there is no successor, the records required to be kept under the Gun Control Act of 1968, as amended, must be delivered within 30 days to the Attorney General. This includes all acquisition and disposition logbooks, firearms transaction records—such as Form 4473, which contains purchaser information—and other required records. NTC receives an average of about 1.9 million out-of-business records per month, of which a large percentage are paper-based. Since 2006, when paper records are received from an FFL that has gone out of business, NTC scans them as TIFF image files and stores them in OBRIS. By design, the files are stored as images (with no optical character recognition) so that they cannot be searched using text queries. In addition, ATF sometimes receives electronic FFL out-of-business records in the form of computer external removable drives and hard drives. In these cases, ATF converts the data to a nonsearchable format consistent with OBRIS records. During processing of OBRIS records, NTC conducts a quality-assurance process, including document sorting, scanning, and error checks on 100 percent of the records received. Officials stated that the imaged records are maintained indefinitely in OBRIS. For more information on OBRIS, see appendix III.
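The storage design described above—opaque images indexed only by FFL number and kept in date order, with no text search—can be modeled in a few lines. This is a minimal sketch of the constraint, not ATF's implementation; the class and method names are assumptions.

```python
# Illustrative model of OBRIS-style storage: records are opaque image
# blobs indexed by FFL number and date. There is deliberately no method
# to search by purchaser name, serial number, or any other text field.

class ImageRepository:
    def __init__(self):
        self._index = {}  # ffl_number -> list of (date, image_bytes)

    def ingest(self, ffl_number, date, image_bytes):
        # Images are never OCR'd; they are filed chronologically per FFL.
        self._index.setdefault(ffl_number, []).append((date, image_bytes))
        self._index[ffl_number].sort(key=lambda rec: rec[0])

    def records_for(self, ffl_number):
        # The only supported query: all images for one FFL, in date
        # order. An analyst must then skim each image by eye.
        return list(self._index.get(ffl_number, []))
```

The absence of any text-query method is the point: retrieval narrows to an FFL and a date range, and everything after that is manual review.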
ATF implemented A2K in 1995 at the request of firearms industry members to allow manufacturer, importer, and wholesaler FFLs to more efficiently respond to requests from NTC for firearms traces. By statute, FFLs are required to respond within 24 hours to a firearms trace—a request from ATF for firearms disposition information—needed for a criminal investigation. Normally, when an NTC analyst contacts an FFL in the chain of distribution during a trace, the analyst contacts the FFL by phone, fax, or e-mail. ATF officials reported that this can be burdensome if the FFL receives a large number of trace requests, and that such requests can number more than 100 per day. With A2K—a voluntary program—the participating industry member uploads electronic firearms disposition records (i.e., information on the FFL or, in rare cases, the individual to whom the firearm was sold) onto a server that ATF owns and maintains, but is located at the site of the industry member. A2K provides a secure user web interface to this server, through which authorized NTC personnel can search—by firearm serial number only—to obtain disposition data for a firearm during a trace. According to the A2K memorandum of understanding with industry members, each participating industry member maintains ownership over its data. Further, NTC access to A2K’s search function is limited to analysts conducting traces for each particular industry member. NTC analysts access A2K using a different URL and login information for each participating industry member, and can only retrieve the disposition data for the particular firearm they are tracing. Participation in A2K is voluntary and, according to ATF officials and the three industry members we spoke with, can reduce an industry member’s costs associated with responding to firearms trace requests. According to ATF officials, as of April 25, 2016, there are 35 industry members using A2K, which account for 66 manufacturer, importer, and wholesaler FFLs. 
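The access pattern described above—each member's data on its own server, queryable by exact serial number only—can be sketched as follows. This is a hypothetical model of the constraint, not ATF's or any vendor's software.

```python
# Hypothetical sketch of the A2K access pattern: one server per industry
# member, and ATF may query only by exact firearm serial number.

class MemberServer:
    """Stands in for a single industry member's A2K server."""

    def __init__(self, dispositions):
        # serial number -> disposition (typically the FFL the firearm
        # was sold to; in rare cases an individual purchaser)
        self._dispositions = dict(dispositions)

    def lookup(self, serial_number):
        # Exact-match only; no wildcard, bulk, or cross-member queries
        # are exposed, and the member retains ownership of the data.
        return self._dispositions.get(serial_number)
```

Because each server is reached through its own URL and login, a query returns at most one disposition record from one member, mirroring the disaggregated answers ATF would otherwise get by phone, fax, or e-mail.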
All three of the participating industry members we spoke with agreed that A2K has been beneficial since it reduces the industry member resources necessary to respond to trace requests. A2K also benefits NTC by providing immediate access to industry member data at all times, thereby allowing tracing operations to continue outside of normal business hours, which can be crucial for urgent trace requests. According to ATF data, as of March 17, 2016, there were 290,256,532 firearms in A2K. Further, in fiscal year 2015, NTC accomplished 130,982 of 372,992 total completed trace requests using A2K. Established in 1991, FRNP (formerly known as the Suspect Gun Program) provides a criminal investigative service to ATF agents by maintaining a database of firearms that have not yet been recovered by law enforcement, but are suspected to be involved in criminal activity. An ATF agent submits firearms information to FRNP, in connection with a specific ATF criminal investigation, to flag a particular firearm so that in the event that it is recovered and traced at some future time, the requesting agent will be notified. A request to enter a firearm into FRNP could start with an ATF agent recovering another firearm during an undercover investigation of illegal sales from a firearms trafficker. By searching eTrace, the agent may discover that the recovered firearm was part of a multiple sale with three other firearms. The ATF agent then may request that the other three firearms be entered into FRNP because they are associated with the firearm the agent recovered and, therefore, are likely to also be trafficked. ATF officials stated that, in this hypothetical case, it is likely that those three firearms, if recovered and traced in the future, would support a potential firearms trafficking case. 
If the firearms are in FRNP, if and when they are recovered and traced, NTC would notify the requesting agent, who could then contact the agency that recovered and traced the firearms to coordinate building such a case. To enter a firearm into FRNP, an ATF agent submits ATF Form 3317.1 (see app. IV) to NTC. According to ATF, no other law enforcement agencies may submit firearms to FRNP or view information in the system; only ATF agents and NTC staff have access. When a firearm is recovered in a crime and is traced, NTC conducts an automated check to determine whether the firearm description in the trace request matches a firearm description in FRNP. If so, an analyst will validate that the entries match. If they do, NTC generally notifies the ATF agent who submitted the firearm for inclusion in FRNP that the firearm has been recovered and traced. Then, the analyst completes the trace and sends the results to the requester of the trace. Occasionally, in submitting the firearm to FRNP, the agent directs NTC to not complete the trace on the firearm in the event that the firearm is recovered and traced (i.e., not provide the trace results to the law enforcement agency who requested the trace). For example, an agent might want to prevent trace information from being released to protect an undercover operation or other investigation. According to ATF data, as of May 3, 2016, there were 174,928 firearms and the names of 8,705 unique persons (e.g., criminal suspects, firearms purchasers, associates) in FRNP, making up 41,964 total FRNP records. Further, in fiscal year 2015, NTC accomplished 110 of 372,992 total completed trace requests using FRNP. Also, according to ATF data, as of May 5, 2016, there were 23,227 firearms in FRNP that had been linked to a firearms trace. Once the ATF investigation that led to the FRNP firearms submission has been closed, any FRNP entries associated with that investigation are to be labeled as “inactive” in FRNP. 
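The FRNP check that runs during a trace—match the description, notify the submitting agent, and honor any hold on releasing results—can be sketched as below. This is a simplified illustration; field names such as "hold_trace" are assumptions, not ATF's schema.

```python
# Simplified sketch (not ATF's implementation) of the FRNP check that
# runs during a firearms trace.

def check_frnp(trace_request, frnp_records, notify):
    """Return True if trace results may be released to the requester."""
    release = True
    for record in frnp_records:
        if record["serial_number"] == trace_request["serial_number"]:
            if record["status"] == "active":
                # Tell the submitting ATF agent that the flagged
                # firearm has been recovered and traced.
                notify(record["agent"], trace_request)
            # The submitting agent may direct NTC not to release trace
            # results, e.g., to protect an undercover operation.
            if record.get("hold_trace"):
                release = False
    return release
```

Note that inactive records still participate in the match (they remain in the system for tracing purposes) but trigger no notification, since the associated investigation is closed.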
Information from inactive records is used to assist with the tracing process, but when a trace hits on an inactive FRNP record, NTC does not notify the ATF agent who submitted the firearm since the associated investigation is closed and the information would no longer be useful to the agent. According to our review of all FRNP records, as of July 2015, about 16 percent of the 41,625 records were designated “active” and about 84 percent were designated “inactive.” Inactive records remain in the system for tracing purposes. The original submission form is also preserved as a digital image. MS was developed in 1995 to collect and track reports of the purchase by one individual of two or more pistols or revolvers, or both, at one time or during any 5 consecutive business days. FFLs are required by statute to report these sales to ATF. The multiple sales reports are completed by FFLs, submitted to NTC using ATF form 3310.4 (see app. V), and entered into MS. According to ATF, these reports, when cross-referenced with firearms trace information, serve as an important indicator in the detection of potential firearms trafficking. They can also allow successful tracing of older firearms that have reentered the retail market. MS also maintains the information from Demand Letter 3 reports. In 2011, ATF issued Demand Letter 3 to dealer and pawnbroker FFLs located in Arizona, California, New Mexico and Texas. The letter requires these FFLs to prepare reports of the purchase or disposition of two or more semiautomatic rifles capable of accepting a detachable magazine and with a caliber greater than .22, at one time or during any 5 consecutive business days, to a person who is not an FFL. According to ATF, this information is intended to assist ATF in its efforts in investigating and combatting the illegal movement of firearms along and across the southwest border. Demand Letter 3 reports are completed by FFLs, submitted to NTC using ATF form 3310.12 (see app. 
VI), and entered into MS. According to ATF officials and our observations, Demand Letter 3 and multiple sales reports are managed identically within MS. During a firearms trace, MS is automatically checked for a match with the firearm serial number. If a match is found, the trace time can be substantially shortened since the retail FFL and purchaser name needed to complete the trace are contained within the MS record. According to ATF data, as of May 3, 2016, there were 8,950,209 firearms in MS, making up 3,848,623 total MS records. Further, in fiscal year 2015, NTC accomplished 15,164 of 372,992 total completed trace requests using MS. In November 1995, ATF implemented a policy to computerize multiple sales reports at NTC, which now also applies to Demand Letter 3 reports. The original multiple sales or Demand Letter 3 paper report received from the FFL is scanned in a nonsearchable, TIFF image format and tagged with the MS transaction number. The TIFF file is then stored in an image-only repository, and is retained indefinitely. However, as part of the computerization policy, ATF included a requirement for deleting firearms purchaser names from MS 2 years after the date of sale if such firearms are not connected to a trace. ATF preserves the remainder of the data, such as the firearm description, for the purpose of supporting investigations. In contrast, if an MS record is connected to a firearms trace, then ATF preserves the entire record, including purchaser information, in the system. MS reports are available to any ATF staff that have access to eTrace, but not to outside law enforcement agencies with eTrace access. However, after the purchaser name in an MS record has been deleted in accordance with the 2-year deletion policy, only NTC officials have access to this information in the digital image of the original multiple sales or Demand Letter 3 reports. If an ATF agent needs to see the deleted information, the agent must contact NTC.
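The 2-year deletion rule described above—purchaser names removed from MS records two years after the sale date unless the record is connected to a trace—amounts to a simple retention policy. The sketch below is illustrative only; the field names are assumptions.

```python
# Illustrative sketch of the MS 2-year purchaser-name deletion policy.
# Field names are assumptions, not ATF's schema.

from datetime import date, timedelta

def apply_deletion_policy(records, today):
    for record in records:
        old = today - record["sale_date"] > timedelta(days=2 * 365)
        if old and not record["connected_to_trace"]:
            # The firearm description and the rest of the record are
            # kept; only the purchaser name is deleted from MS.
            record["purchaser_name"] = None
    return records
```

As the report notes, the deleted name survives only in the nonsearchable digital image of the original paper report, which is accessible to NTC staff rather than through MS itself.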
Of the four data systems we reviewed, two systems were in full compliance with the appropriations act restriction. The other two data systems did not always comply with the restriction, although ATF addressed the compliance issues during the course of our review. In addition, three data systems could better adhere to ATF policies. Specifically:

- OBRIS complies with the appropriations act restriction and adheres to ATF policies.
- A2K for in-business industry members’ records complies with the appropriations act restriction, but ATF’s collection and maintenance of out-of-business industry member records in A2K on a server at NTC violated the appropriations act restriction. ATF deleted the records from the server in March 2016. In addition, industry members may benefit from clearer ATF guidance to ensure that they are submitting out-of-business records as required.
- FRNP generally complies with the appropriations act restriction. However, a regional program using FRNP from 2007 through 2009 did not comply with the restriction, and ATF removed the data it collected through this program from FRNP in March 2016. Further, FRNP generally adheres to ATF policies, but a technical defect allows ATF agents to view and print FRNP data beyond what ATF policy permits.
- MS complies with the appropriations act restriction, but ATF continues to inconsistently adhere to its own policy when deleting these records.

For a more detailed legal analysis of compliance with the appropriations act restriction, see appendix VII. We previously considered ATF’s compliance with the restriction on using appropriated funds for consolidation or centralization in connection with ATF’s Microfilm Retrieval System and MS in 1996. In that report, we stated that the appropriations act restriction did not preclude all information practices and data systems that involved an element of consolidation or centralization.
We interpreted the restriction in light of its purpose and in the context of other statutory provisions governing ATF’s acquisition and use of information on firearms. We found that the two systems complied with the appropriations act restriction on the grounds that ATF’s consolidation of records in these systems was incident to carrying out specific responsibilities set forth in the Gun Control Act of 1968, as amended, and that the systems did not aggregate data on firearms transactions in a manner that went beyond these purposes. We are employing a similar analytical approach to the systems under review here: we consider whether ATF’s aggregation of records in each system serves a statutory purpose, and how it relates to that purpose. OBRIS complies with the appropriations act restriction and adheres to policies designed to help ensure that the system is in compliance with the restriction. FFLs are specifically required to submit records to ATF when going out of business, and the system limits the accessibility of key firearms records information, such as retail purchaser data. As we reported in 1996, ATF first issued regulations in 1968 requiring FFLs that permanently go out of business to deliver their firearms transaction records to the federal government within 30 days. This provided a means of accessing the records for firearms tracing purposes after an FFL went out of business. The legislative history related to ATF’s fiscal year 1979 appropriation did not provide any indication that Congress intended a change in ATF’s existing practice. In 1986, the Firearms Owners’ Protection Act (FOPA) codified this regulatory reporting requirement, affirming ATF’s authority to collect this information. In 1996, we also reported that the predecessor to OBRIS—the Microfilm Retrieval System—as designed, complied with the statutory data restrictions and that ATF operated the system consistently with its design. 
We found that the Microfilm Retrieval System included in a computerized index the information necessary to assist ATF in completing a firearms trace, and did not aggregate information in a manner beyond that necessary to implement the Gun Control Act. Notably, ATF’s system of microfilmed records did not capture and store certain key information, such as firearms purchaser information, in a searchable format. In response to logistical challenges and technological advances, ATF developed OBRIS in 2006 as the repository to maintain digital images of out-of-business FFL records. ATF transitioned from using microfilm images of records to scanning records into OBRIS as digital images not searchable through character recognition, consistent with ATF’s design and use of its prior Microfilm Retrieval System. It is our view that, like its microfilm predecessor system, OBRIS also complies with the appropriations act restriction because OBRIS’s statutory basis and accessibility are essentially the same as the prior system. As with the prior system, OBRIS generally allows users to identify potentially relevant individual records through manual review by searching an index using an FFL number. Other information, specifically firearms purchaser information, remains stored in nonsearchable images, and is not accessible to ATF through a text search. In OBRIS, ATF put data processing policies in place to maintain records in compliance with the appropriations act restriction. Specifically, when an FFL going out of business sends records to NTC, according to ATF policy and verified by our observations, NTC personnel follow policies to sort and scan the records in OBRIS in a manner that maintains the nonsearchability of the records. For example, NTC personnel spend extra time indexing the images by FFL number, and chronologically sorting FFL records, typically by month and by year. 
When tracing a firearm, according to ATF policy and verified by our observations, NTC personnel generally identify a group of FFL records through the FFL number index, then manually search the dates of the FFL records to narrow in on a group of records that might contain the firearm being traced. NTC personnel then manually skim through each record in this group until they identify the relevant firearm information. According to NTC officials, NTC staff sometimes search thousands of pages of records to find the record that matches the trace request. This policy for a manual process to maintain and use records in OBRIS helps to ensure its compliance with the appropriations act restriction. For more details on OBRIS’s data processing policies, see appendix III. ATF maintains A2K for in-business industry members who store their own A2K data and maintained A2K for certain records of out-of-business industry members at NTC. ATF’s collection and maintenance of the records of out-of-business A2K industry members at NTC violated the appropriations act restriction on consolidation or centralization of firearms records. However, ATF officials transferred the records to OBRIS, and in March 2016 removed these records from A2K. In addition, industry members would benefit from clearer A2K guidance from ATF to ensure that they are submitting required out-of-business records. A2K for firearms records of in-business industry members complies with the appropriations act restriction on consolidation and centralization based on A2K’s statutory foundation and its features. ATF believes, and we agree, that A2K for in-business records appropriately balances the restriction on consolidating and centralizing firearms records with ATF’s need to access firearms information in support of its mission to enforce the Gun Control Act of 1968, as amended. 
Federal law requires FFLs to provide firearms disposition information to ATF within 24 hours in response to a trace request in the course of a criminal investigation. ATF officials told us that they developed A2K in response to industry member requests for an automated option for responding to trace requests. Prior to A2K, FFLs could only respond to trace requests by having dedicated personnel research firearms disposition information and then submit that information to ATF by phone, fax, or e-mail. In contrast, A2K provides industry members—who voluntarily participate in A2K—with servers to facilitate automated electronic responses to ATF trace requests. Under A2K, industry members upload their electronic firearms disposition information onto the servers located at their premises on a regular basis. Industry members—not ATF—retain possession and control of their disposition records and, according to ATF officials, they may withdraw from A2K and remove their records from the servers at any time. A2K includes a secure user web interface to each of the servers and ATF may only obtain A2K disposition information by searching individual industry member servers by exact firearm serial number. Through this search, ATF obtains the same information from each industry member as it would otherwise obtain by phone, fax, or e-mail, and in similar disaggregated form. Beginning in 2000, ATF maintained A2K disposition data from out-of-business industry members on a single partitioned server within NTC, and removed the records from the server in March 2016. ATF’s maintenance of the disposition records in this manner violated the appropriations act restriction on consolidation or centralization. This arrangement was not supported by any specific authority. As described earlier, A2K was designed as an alternative for FFLs to meet the requirement to respond promptly to ATF trace requests, which does not apply to FFLs once they go out of business.
Another statutory provision requires FFLs to submit firearms records to ATF when they go out of business, and ATF has designed a separate system for this purpose—OBRIS—as described earlier. A2K for out-of-business records functioned differently than OBRIS and went beyond the consolidation of out-of-business records in that system incident to specific responsibilities under the Gun Control Act. As discussed earlier, out-of-business records are maintained as nonsearchable digital images in OBRIS to comply with the appropriations act restriction, while at the same time allowing ATF to perform its tracing function. ATF completed traces using A2K disposition data from out-of-business industry members through the same type of secure user web interface as used while the industry members were in business. According to ATF, this was more efficient than relying on OBRIS to complete firearms traces. Our observations of A2K out-of-business searches in August 2015 confirmed ATF officials’ statements that these records were accessed in the same way as in-business records. Records were only retrievable by exact serial number search, in accordance with ATF policy. However, according to ATF officials, it would have been technically possible for ATF to reconfigure the server to allow the records to be queried by any field, including fields with retail purchaser information. ATF agreed with our assessment that treating disposition information from industry members that go out of business in the same manner as disposition information from in-business industry members would violate the appropriations act restriction. After we raised concerns about A2K out-of-business records on the server at NTC, ATF told us that it had begun a process of transferring the out-of-business A2K records from the server into OBRIS as digital images. ATF permanently deleted the records from the out-of-business A2K server in March 2016.
In addition, ATF could provide clearer guidance to ensure that industry members submit out-of-business records in accordance with the Gun Control Act of 1968, as amended. These industry members and their corresponding FFLs are required to provide transaction forms, acquisition records, and disposition records to ATF within 30 days of going out of business. However, it is unclear how the requirements apply to industry members’ A2K disposition data. A2K agreements specifically state that the A2K data belong to the industry member. Conversely, ATF requires that the ATF-owned A2K equipment be returned when industry members go out of business, which includes the hardware and software on which the data were housed at the industry member’s location. The A2K memorandums of understanding and ATF guidance to industry members do not specify that industry members may retain the backup disk or how A2K data may be used to meet the out-of-business record submission requirements to ATF, if at all. All eight of the industry members that have gone out of business have provided their backup disks with data to ATF. According to ATF, six industry members separately provided their acquisition and disposition information, while the other two industry members, which were licensed importers, only provided invoices. According to ATF officials, discussions with these industry members did not include the industry member’s option to keep the backup disk where the data are stored or whether submitting the backup disk to ATF would fulfill part of the industry member’s submission requirement. Further, the three industry members we spoke with corroborated that ATF lacks guidance for its requirements related to industry members submitting out-of-business A2K data in accordance with the Gun Control Act, as amended.
Federal internal control standards require that agencies communicate necessary quality information with external parties to achieve agency objectives, which includes providing industry members with record submission guidance so that ATF has the necessary records for firearms tracing. According to ATF officials, ATF has not provided guidance to A2K industry members on how to submit out-of-business records because industry members already have the standard requirements that apply to all FFLs, and industry members have not asked for guidance specific to A2K. Industry members that we spoke to had not contemplated the process for providing A2K equipment and records to ATF because they did not anticipate going out of business. However, if ATF does not have all required out-of-business records, the agency may not be able to locate the first purchaser of a firearm during a trace, and thus may not be able to fulfill part of its mission. ATF officials agreed that providing such guidance—for example, in the A2K memorandum of understanding between an industry member and A2K—would be helpful to industry members to ensure that records are submitted to ATF as required. Industry members could benefit from clear ATF guidance on, for example, whether they are required to submit their A2K records in electronic format; whether they are allowed to only submit hard copy records; or what to do if one part of the company goes out of business, but A2K continues at the industry member’s remaining FFLs. Such ATF guidance could clarify how industry members may submit A2K data to fulfill a portion of Gun Control Act requirements. FRNP generally complies with the appropriations act restriction and generally adheres to ATF policies that help ensure such compliance. However, a regional ATF program using FRNP from 2007 through 2009 was not in compliance with the appropriations act restriction. ATF deleted the data it collected through this program from FRNP in March 2016. 
In addition, a technical defect in one of ATF’s key data systems allows ATF agents to access FRNP records in a manner that is inconsistent with ATF policy. ATF gathers and combines specific firearms transaction data to a limited degree in FRNP in order to implement its statutory responsibilities related to firearms criminal enforcement and, in this respect, the system complies with the appropriations act restriction. By statute, ATF is responsible for enforcing the federal statutes regarding firearms, including those related to the illegal possession, use, transfer, or trafficking of firearms. FRNP was established to provide an investigative service to ATF agents by maintaining a database of firearms suspected of being involved in criminal activity and associated with an ATF criminal investigation. As discussed earlier, the appropriations act restriction does not preclude all information practices and data systems that involve an element of “consolidating or centralizing” FFL records. As designed, the aggregation of firearms transaction records in FRNP is incident to carrying out specific ATF criminal enforcement responsibilities and is limited to that purpose. Therefore, FRNP—when used for the purpose as a database of firearms suspected of being involved in criminal activity and associated with an ATF criminal investigation—complies with the appropriations act restriction. Moreover, based on our analysis of FRNP records, virtually all records in FRNP are associated with an ATF criminal investigation, and thus are related to ATF’s statutory responsibilities. ATF policies for the implementation of FRNP support the conclusion that it complies with the appropriations act restriction, when operated as designed. ATF policies specify that ATF agents may submit a firearm for entry into FRNP if the firearm is associated with an active, nongeneral ATF criminal investigation and meets certain submission criteria. 
ATF agents must use a designated submission form when requesting that firearms information be entered in the FRNP system, which, among other things, contains a field for the agent to include an active, nongeneral investigation number. The form also contains a field to indicate the additional, specific submission criteria for the firearm, which align with ATF’s statutory responsibility of enforcing criminal statutes related to the illegal possession, use, transfer, or trafficking of firearms. These criteria include: (1) Large quantities of firearms purchased by individual; (2) Firearms suspected in trafficking, but not stolen from an FFL dealer; (3) FFL dealers suspected of performing firearms transactions without proper documentation; (4) Firearms purchased by suspected straw purchasers; and (5) Other—a category that the submitting agent is to explain on the form. According to NTC procedures, and verified by our observations, upon receiving an FRNP submission form, an NTC analyst reviews the form for completeness and conducts several validation and verification steps. For example, the analyst uses ATF’s case-management system to verify that the investigation number on the FRNP submission form is active and that at least one criterion was selected on the submission form. Once the validation and verification checks are complete, the NTC analyst either enters the firearms information into FRNP or contacts the requesting ATF agent if information is missing or not in alignment with the criteria required for FRNP submission. During our review of selected fields for all 41,625 FRNP records, and a generalizable sample of records and submission forms, we found that for the vast majority of firearms entered, ATF abided by its policy for entries to be associated with an active investigation. 
Out of the entire population of 41,625 records reviewed, less than one-tenth of 1 percent of records were not associated at all with an investigation number and, according to ATF officials, were likely data-entry errors or records entered for testing or training purposes. Moreover, based on our sample review, an estimated 96 percent of FRNP records were entered while the related criminal investigation was open. ATF officials stated that most of the remaining records—entered before the related investigation was open or after it was closed—were the result of data-entry errors or the result of investigation numbers being reopened at a later date. Additional, specific submission criteria have been required on the FRNP submission form since November 2004. Based on our sample review, an estimated 97 percent of FRNP submission forms from November 2004 through July 2015 included the selection of at least one criterion. For an estimated 13 percent of these—or 23 submission forms in our sample—the “Other” criterion was selected, and all but 2 of these had an explanation for why the firearms were entered in FRNP. For example, in 1 submission form that contained an explanation for “Other,” business owners were suspected of selling firearms without a license. ATF officials could not definitively state why an estimated 3 percent of submissions from November 2004 through July 2015 did not contain a criteria selection. Officials speculated, for example, that an NTC analyst may have obtained the criteria selection from the requesting agent by phone or e-mail and may not have noted his or her conversation in the FRNP file. However, officials acknowledged that the criteria selection is an important quality control and gives ATF the ability to audit records related to an investigation if necessary. ATF officials told us that only names associated with the criminal investigation are entered in the FRNP system.
These names are generally limited to suspects and purchasers, but ATF officials acknowledged that the names of victims or witnesses may be included in the system if they are associated with the criminal investigation, though this does not happen routinely. Based on our observations of FRNP entry procedures, an NTC analyst verifies that any names on the submission form match the names listed in the case-management system for that particular investigation, prior to entering the information in the FRNP system. An ATF regional program conducted from 2007 through 2009 to enter firearms into FRNP—the Southwest Border Secondary Market Weapons of Choice (SWBWOC) Program—did not comply with the appropriations act restriction on consolidating or centralizing FFLs’ firearms records, because the individual firearms were not suspected of being involved in criminal activity associated with an ATF criminal investigation. During the course of our review, ATF reported that it planned to delete the related data from FRNP, and ATF did so in March 2016. According to ATF officials, the SWBWOC Program was in place in ATF’s four southwest border field divisions in order to more effectively identify— during a trace—the purchasers of used firearms trafficked to Mexico. The program was implemented during routine regulatory inspections of FFLs in the region who were engaged primarily in the sale of used firearms—generally pawnbrokers. According to ATF, used firearms sales, referred to as “secondary market” sales, played a significant role in firearms trafficking to Mexico, particularly certain firearms most sought by the Mexican drug cartels, referred to as “weapons of choice.” According to ATF officials, this program was developed to record certain firearms in an effort to enhance ATF’s ability to trace those firearms to a retail purchaser in the event of crime-related recoveries of the firearms. 
As part of the program, during regulatory inspections, ATF investigators were to record any specified weapons of choice that were found in the FFLs’ inventory or sold or disposed of by the FFLs within the inspection period. According to ATF officials, the information recorded was limited to the serial number and description of the firearm, and was not to include any purchaser information. The firearms information was then submitted to FRNP for all of the used firearms identified during the inspection. If the firearm was subsequently recovered by law enforcement and submitted for a trace, NTC’s automatic checks on the firearm description would result in a match in the FRNP system. ATF would then be able to more quickly identify the FFL pawn shop that previously had the firearm in its inventory. According to ATF officials and documentation, the program was cancelled on October 2, 2009, following ATF’s legal review of the process by which the firearms information entered during the program was recorded and submitted to FRNP. ATF’s legal review determined that the program was not consistent with the appropriations act restriction on consolidation or centralization. According to ATF officials, the program was not reviewed by the ATF Chief Counsel’s office prior to its initiation in June 2007. They stated that the program’s existence was the result of incomplete communication between the ATF executives responsible for industry operations programs and ATF’s Chief Counsel prior to the program’s implementation. Upon learning of the program, ATF Counsel determined that FFL information on a firearm, in and of itself—even when unaccompanied by purchaser information—is not permitted to be collected and consolidated without a specific basis in statute or regulation, or a direct nexus to a law enforcement purpose, such as a criminal investigation.
The ATF Chief Counsel’s office advised that the program be immediately terminated and, in October 2009, the program was cancelled and the firearms information already entered into FRNP during the program was marked as “Inactive.” We concur with ATF’s assessment that the inclusion of firearms information from the program in FRNP did not comply with the appropriations act restriction. It is our view that information obtained from an FFL about a firearm in and of itself, and unaccompanied by purchaser information, is not permitted to be collected and consolidated within ATF without a specific basis in statute. As a result of our review, ATF officials deleted the records for the affected data from FRNP—855 records relating to 11,693 firearms—in March 2016. A technical defect in eTrace 4.0 allows ATF agents to view and print FRNP data beyond what ATF policy permits. These data include purchaser names and suspect names in a summary format called a Suspect Gun Summary Report. Any ATF agent with eTrace access can view or print these reports, including up to 500 FRNP records at one time. According to ATF officials, the eTrace defect occurred when the contractor developing eTrace 4.0 included a global print function for Suspect Gun Summary Reports—which can contain retail purchaser information—that was accessible from the search results screen. In December 2008, prior to the release of eTrace 4.0 in 2009, ATF provided the contractor with a list of the new system’s technical issues, including this FRNP printing defect. ATF officials explained that because all ATF eTrace users had the appropriate security clearances, and because there would not be a reason for ATF agents to access the Suspect Gun Summary Reports, the print issue was not considered a high-priority concern. However, ATF officials told us that no audit logs or access listings are available to determine how often ATF agents have accessed records containing purchaser information. 
Therefore, ATF has no assurance that the purchaser information entered in FRNP and accessible through eTrace is not being improperly accessed. eTrace is available to federal, state, and local law enforcement entities that have entered into an eTrace memorandum of understanding with ATF. ATF agents have access to information in eTrace that is unavailable to state and local law enforcement entities, such as FRNP data. However, according to eTrace system documentation, ATF agents are to be limited in their access to FRNP records. Specifically, ATF agents should only be able to view the firearm description and the name and contact information of the ATF case agent associated with the investigation, and not purchaser information or FFL information. If an ATF agent wanted further information about the FRNP data, the agent should have to contact the case agent. ATF officials told us that ATF’s policy is intended to provide FRNP information to ATF agents on a “need-to-know” basis in order to protect the security of ATF investigations, and protect gun owner information. Moreover, federal internal control standards specify that control activities to limit user access to information technology include restricting authorized users to the applications or functions commensurate with assigned responsibilities. According to ATF officials, options are limited for resolving the global print function defect. ATF’s contract with the eTrace 4.0 developer has ended, and therefore ATF cannot contact the developer to fix the printing issue. ATF could have the issue resolved when a new version of eTrace, version 5.0, is released, but there is no timeline for the rollout of eTrace 5.0. ATF officials told us that, in the short term, one method to fix the printing issue would be to remove individuals’ names and identifying information from the FRNP system, so it is not available for Suspect Gun Summary Reports. 
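In structural terms, that short-term fix amounts to field-level filtering of FRNP records before they are displayed or printed. A minimal sketch, with hypothetical field names rather than the actual FRNP schema, might look like this:

```python
# Hypothetical sketch of the short-term fix ATF officials described: strip
# purchaser-identifying fields from an FRNP record so that only the firearm
# description and the case agent's contact information remain visible to
# agents outside the investigation. Field names are assumptions, not the
# actual FRNP schema.

AGENT_VISIBLE_FIELDS = {
    "serial_number", "make", "model", "caliber",
    "case_agent_name", "case_agent_phone",
}

def redact_for_agent(record):
    """Return a copy of an FRNP record limited to need-to-know fields."""
    return {k: v for k, v in record.items() if k in AGENT_VISIBLE_FIELDS}
```

Under this approach, a Suspect Gun Summary Report built from the filtered records could no longer expose purchaser names, whatever print function is used.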
The firearms information and case agent information would remain available to all ATF agents, and ATF officials indicated that they did not think that removing the identifying information would hamper ATF agents’ investigations. Developing and implementing short-term and long-term mechanisms to align the eTrace system capability with existing ATF policy to limit access to purchaser information for ATF agents could ensure that firearms purchaser information remains limited to those with a need to know. MS complies with the appropriations act restriction; however, ATF lacks consistency among its MS deletion policy, system design, and policy implementation timing. Since we reported on MS in 1996, ATF has made minimal changes to the system itself, but the information contained in MS has changed with the inclusion of Demand Letter 3 reports, in addition to multiple sales reports. Multiple sales reports. By statute, FFLs are required to provide to ATF a multiple sales report whenever the FFL sells or otherwise disposes of, within any 5 consecutive business days, two or more pistols or revolvers, to an unlicensed person. The reports provide a means of monitoring and deterring illegal interstate commerce in pistols and revolvers by unlicensed persons. ATF’s maintenance of multiple sales reports in MS complies with the appropriations act restriction because of ATF’s statutory authority related to multiple sales reports, and the lack of significant changes to the maintenance of multiple sales reports in MS since we found it to be in compliance in 1996. As we reported in 1996, ATF operates MS with specific statutory authority to collect multiple sales reports. In 1975, under the authority of the Gun Control Act of 1968, ATF first issued regulations requiring FFLs to prepare multiple sales reports and submit those reports to ATF. 
The legislative history related to ATF’s fiscal year 1979 appropriations act restriction did not provide any indication that Congress intended a change in ATF’s existing practice. In 1986, a provision of FOPA codified FFLs’ regulatory reporting requirement, affirming ATF’s authority to collect multiple sales reports. In addition, this provision required, among other things, FFLs to forward multiple sales reports to the office specified by ATF. Therefore, under this provision, ATF was given the statutory authority to specify that FFLs forward multiple sales reports to a central location. In our 1996 report, we examined MS and found that it did not violate the prohibition on the consolidation or centralization of firearms records because ATF’s collection and maintenance of records was incident to its specific statutory responsibility. As we noted at that time, multiple sales reports are retrievable by firearms and purchaser information, such as serial number and purchaser name. We did not identify any significant changes to the maintenance of the multiple sales reports since we last reported on ATF’s compliance with the statutory restriction that would support a different conclusion in connection with this review. Demand Letter 3 reports. In 2011, in an effort to reduce gun trafficking from the United States to Mexico, ATF issued demand letters to FFLs classified as dealers or pawnbrokers in four southwest border states: Arizona, California, New Mexico, and Texas. The letter, referred to as Demand Letter 3, required these FFLs to submit a report to ATF on the sale or other disposition of two or more of a specific type of semiautomatic rifle, at one time or during any 5 consecutive business days, to an unlicensed person. Federal courts that have considered the issue have held that ATF’s collection of Demand Letter 3 reports is consistent with the appropriations act restriction.
It is our view that ATF’s maintenance of Demand Letter 3 reports in MS is consistent with the appropriations act restriction in light of the statutory basis for Demand Letter 3, the courts’ decisions, and the way in which the records are maintained. ATF has specific statutory authority to collect reports like Demand Letter 3 reports. As discussed, FFLs are required to maintain certain firearms records at their places of business. By statute, FFLs may be issued letters requiring them to provide their record information or any portion of information required to be maintained by the Gun Control Act of 1968, as amended, for periods and at times specified by the letter. Some FFLs have challenged the legality of Demand Letter 3 reports for a number of reasons, including that it did not comply with the appropriations act restriction. Federal courts that have considered the issue have upheld ATF’s use of Demand Letter 3 as consistent with the appropriations act restriction. In one case before the U.S. Court of Appeals for the Tenth Circuit, the FFL contended that the demand letter created a national firearms registry in violation of the restriction on consolidation or centralization. The Tenth Circuit stated that the plain meaning of “consolidating or centralizing” does not prohibit the mere collection of some limited information. The court went on to state that the July 2011 demand letter requested very specific information from a limited segment of FFLs. In addition, the court pointed out that Congress authorized the issuance of the letters in 1986, after passing the first appropriations act restriction, and Congress could not have intended to authorize the record collection in statute while simultaneously prohibiting it in ATF’s annual appropriations act. In other similar cases, the courts have also held that ATF had the authority to issue the demand letter and that ATF’s issuance of the demand letter complied with the appropriations act restriction. 
In addition, Demand Letter 3 reports are maintained in MS in an identical manner to multiple sales reports. Although not required by statute, ATF policy requires that firearms purchaser names be deleted from MS 2 years after the date of the reports, if the firearm has not been connected to a firearms trace. However, ATF’s method to identify records for deletion is not comprehensive and, therefore, 10,041 names that should have been deleted remained in MS until May 2016. According to ATF officials, because of MS system design limitations, analysts must write complex queries to locate such names in MS. For example, since the information needed to identify the correct records could exist in free-form fields, the success of the queries in comprehensively identifying all appropriate records depends on consistent data entry of several text phrases throughout the history of the system. In addition, ATF’s queries have inconsistently aligned with its system design—for instance, as the system was modified and updated, the query text remained aligned with the outdated system—and therefore these queries resulted in incomplete identification of records to be deleted. Changes to MS to address system query limitations would require a system-wide database enhancement, but there is currently not an operations and maintenance support contract in place for this system. Moreover, even if the system could ensure that deletions capture all required records, ATF has inconsistently adhered to the timetable of deletions required by its policy. For example, according to ATF’s deletion log and our verification of the log, some records entered in 1997 were not deleted until November 2009—about 10 years after the required 2 years. As shown in table 1 below, ATF’s timing for implementing deletions did not adhere to ATF policy directives. 
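In structured form, the 2-year rule amounts to a simple filter. A minimal sketch follows, assuming hypothetical structured fields (a report date and a trace flag), unlike the free-form MS fields that force ATF analysts to write complex queries:

```python
from datetime import date, timedelta

# Illustrative sketch of the 2-year deletion rule, assuming records carry a
# structured report date and a trace flag. These field names are hypothetical;
# in MS, the relevant information can sit in free-form text fields, which is
# why ATF analysts must write complex queries instead of a filter like this.

def due_for_deletion(records, today):
    """Return records whose purchaser names should be deleted: reports more
    than 2 years old whose firearms were never connected to a trace."""
    cutoff = today - timedelta(days=2 * 365)
    return [r for r in records
            if r["report_date"] <= cutoff and not r["traced"]]
```

Because the real system lacks such structured fields throughout its history, a query of this kind can miss records whenever data entry was inconsistent, which is consistent with the incomplete deletions described above.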
As shown in table 1 below, the ATF deletion policy for MS has changed over time, including variations in the frequency of deletions (e.g., annually, monthly, weekly), and pauses to the deletion policy because of, according to ATF officials, litigation and requests from Congress. According to NTC officials, delayed deletions occurred because deleting a large number of records at once negatively affects the system, slowing system response time or stopping the larger related data system entirely. However, according to NTC’s deletion log and verified by our observations of NTC system queries, deletions were conducted in average increments of almost 100,000 records per day—representing on average a full year’s worth of records to be deleted. In addition, ATF confirmed that a single deletion of 290,942 records on one day in January 2011 did not affect the system. Therefore, system constraints do not seem to be the reason for the delayed deletions. ATF did not identify further causes for the delays in deletions. ATF reported that the objective for its deletion policy was primarily to delete data that may not be useful because of its age and to safeguard privacy concerns related to retaining firearms purchaser data. Federal internal control standards require control activities to help ensure that management’s directives are carried out. Additionally, information systems and related control activities should be designed to achieve objectives and respond to risks. Specifically, an organization’s information system should be designed by considering the processes for which the information system will be used. For example, to alleviate the risk of not meeting the objectives established through the MS deletion policy, ATF must ensure that the policy is consistent with the design of the MS data system and that it meets the policy’s timeline requirements. In September 1996, we reported that ATF had not fully implemented its 2-year deletion requirement.
During the course of our 1996 review, ATF provided documentation that it had subsequently deleted the required records and that it would conduct weekly deletions in the future. Similarly, as a result of our current review, according to ATF documentation, in May 2016, the agency deleted the 10,041 records that should have been deleted earlier. However, given that this has been a 20-year issue, it is critical that ATF develop consistency between its deletion policy, the design of the MS system, and the timeliness with which deletions are carried out. By aligning the MS system design and the timeliness of deletion practices with its policy, ATF could ensure that it maintains only useful purchaser information while safeguarding the privacy of firearms purchasers. ATF has an important role in combatting the illegal use of firearms, and must balance this with protecting the privacy rights of law-abiding firearms owners. Of the four ATF firearms data systems we reviewed that contained firearms purchaser information, we found that certain aspects of two of these systems violated the appropriations act restriction on consolidating or centralizing FFL firearms records, but ATF resolved these issues during the course of our review. With regard to ATF policies on maintenance of firearms records, ATF should do more to ensure that these policies are followed and that they are clearly communicated. Specifically, providing guidance to industry members participating in A2K for how to submit their records when they go out of business would help ensure they submit required records to ATF. Without this clear guidance, ATF risks not being able to locate the first purchaser of a firearm during a trace, and thus may not be able to fulfill part of its mission. In addition, aligning eTrace system capability with ATF policy to limit access to firearms purchaser information in FRNP would ensure that such information is only provided to those with a need to know. 
Finally, aligning the MS system design and the timeliness of deletion practices with the MS deletion policy would help ATF maintain only useful purchaser data and safeguard the privacy of firearms purchasers. In order to help ensure that ATF adheres to its policies and facilitates industry compliance with requirements, we recommend that the Deputy Director of ATF take the following three actions: (1) provide guidance to FFLs participating in A2K for provision of out-of-business records to ATF, so that FFLs can better ensure that they are in compliance with statutory and regulatory requirements; (2) develop and implement short-term and long-term mechanisms to align the eTrace system capability with existing ATF policy to limit access to FRNP purchaser information for ATF agents; and (3) align the MS deletion policy, MS system design, and the timeliness of deletion practices to improve ATF’s compliance with the policy. We provided a draft of this report to ATF and DOJ on May 25, 2016, for review and comment. On June 16, 2016, ATF provided an email response, stating that the agency concurs with all three of our recommendations and is taking several actions to address them. ATF concurred with our recommendation that ATF provide guidance to FFLs participating in A2K for provision of out-of-business records to ATF. ATF stated that the agency is modifying its standard Memorandum of Understanding with A2K participants to incorporate specific guidance regarding the procedures to be followed when a participant goes out of business. ATF also stated that, as a condition of participation, all current and future A2K participants will be required to adopt the revised Memorandum of Understanding. The implementation of such guidance in the Memorandum of Understanding for A2K participants should meet the intent of our recommendation.
ATF concurred with our recommendation that ATF develop and implement mechanisms to align the eTrace system capability with existing ATF policy to limit access to FRNP purchaser information for ATF agents. ATF stated that, in the short term, the agency will delete all purchaser information associated with a firearm entered into FRNP, and will no longer enter any purchaser information into FRNP. ATF stated that, in the long term, the agency will modify the Firearms Tracing System to remove the purchaser information fields from the FRNP module, and will modify eTrace as necessary to reflect this change. These short- and long-term plans, if fully implemented, should meet the intent of our recommendation. ATF concurred with our recommendation that ATF align the MS deletion policy, MS system design, and the timeliness of deletion practices to improve ATF’s compliance with the policy. As we reported above, ATF stated that the agency deleted all purchaser names from MS that should have been deleted earlier. ATF also stated that the agency is implementing protocols to ensure that deleting purchaser names from MS aligns with ATF policy. If such protocols can be consistently implemented in future years, and address both the timeliness of deletions and the comprehensive identification of records for deletion, they should meet the intent of our recommendation. On June 22, 2016, DOJ requested additional time for its Justice Management Division to review our conclusions regarding ATF’s compliance with the appropriations act restriction and the Antideficiency Act. As noted earlier, we solicited ATF’s interpretation of the restriction on consolidation or centralization of records as applied to each of the systems under review by letter of December 21, 2015, consistent with our standard procedures for the preparation of legal opinions. ATF responded to our inquiry on January 27, 2016, and its views are reflected in the report. 
Nevertheless, DOJ stated that ATF and DOJ officials had not followed DOJ’s own processes regarding potential violations of the Antideficiency Act, specifically promptly informing the Assistant Attorney General for Administration. As a result, DOJ requested additional time to review the appropriations law issues raised by the draft report. As explained in appendix VII, ATF’s failure to comply with the prohibition on the consolidation or centralization of firearms records violated the Antideficiency Act, which requires the agency head to submit a report to the President, Congress, and the Comptroller General. The Office of Management and Budget (OMB) has published requirements for executive agencies for reporting Antideficiency Act violations in Circular A-11, and has advised executive agencies to report violations found by GAO. OMB has further advised that “[i]f the agency does not agree that a violation has occurred, the report to the President, Congress, and the Comptroller General will explain the agency’s position.” We believe that the process set forth by OMB affords DOJ the opportunity to consider and express its views. ATF also provided us written technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Deputy Director of ATF, the Attorney General of the United States, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Diana C. Maurer at (202) 512-9627 or [email protected], or Helen T. Desaulniers at (202) 512-4740 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix VIII. This report addresses the following objectives: 1. Identify the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) data systems that contain retail firearms purchaser data and describe the characteristics of selected systems. 2. Determine whether selected ATF data systems comply with the appropriations act restriction on consolidation or centralization of firearms records and ATF policies. To calculate the estimated number of firearms in the United States in 2013, we used data from ATF’s February 2000 report on Commerce in Firearms in the United States and ATF’s 2015 Annual Statistical Update to this report. To calculate the approximate number of murders in which firearms were involved in 2014, we used data from the Federal Bureau of Investigation’s Uniform Crime Reports from 2014. To address the first objective, we reviewed ATF policy and program documents to identify ATF data systems related to firearms. For the purposes of this report, “data systems” or “systems” refers to ATF’s data systems and system components, including what ATF refers to as “modules” of a larger system, and what ATF refers to as “programs” whose associated data are contained within related systems. These policy and program documents included, among other things, ATF orders, system descriptions, system user manuals, system training materials, and data submission forms. We compared this information to the systems identified in our September 1996 report, and conducted searches of publicly available information to develop a comprehensive and current list of systems. In order to identify the systems and better understand them and their contents, we spoke with ATF officials in headquarters and at ATF’s National Tracing Center (NTC). 
We also discussed these systems with ATF investigative and regulatory officials in the Baltimore and Los Angeles field offices, who provided varying perspectives due to geographic factors. These actions enabled us to confirm a comprehensive list of systems, and determine the presence of retail purchaser information within these systems. We selected four systems for a more in-depth review: Out-of-Business Records Imaging System (OBRIS), Access 2000 (A2K), Firearm Recovery Notification Program (FRNP), and Multiple Sales (MS). Selected systems, at a minimum, contained retail purchaser information and contained original records—as opposed to systems that transmitted information, such as a system that only pulls data from another system in order to print a report or fill out a form. A system was more likely to be selected if (1) it contained data unrelated to a criminal investigation, (2) a large percentage of system records contained retail purchaser information, (3) the retail purchaser information was searchable, or (4) ATF initiated the system—as opposed to ATF being statutorily required to maintain the system. See table 2 for more details. For the selected systems, we reviewed ATF data on the number of system records, among other things—for OBRIS and A2K for fiscal year 2015, and for FRNP and MS from fiscal years 2010 through 2015. We assessed the reliability of these data by interviewing ATF staff responsible for managing the data and reviewing relevant documentation, and concluded that these data were sufficiently reliable for the purposes of our report. We reviewed ATF policy and program documents to obtain in-depth descriptions of these selected systems, and discussed these systems with ATF officials. We visited NTC to observe the selected systems in operation. 
To address the second objective, we reviewed relevant laws, including statutory data restrictions, and ATF policy and program documents relating to ATF’s firearms tracing operations and the selected systems. We also solicited the agency’s interpretation of the restriction on consolidation or centralization of records as applied to each of the systems, and interviewed ATF officials regarding the data systems’ compliance with that restriction and ATF policies. We visited NTC to observe how selected systems’ data are collected, used, and stored. For OBRIS, A2K, FRNP, and MS, we observed NTC analysts using the systems during firearms traces and observed the extent to which the systems are searchable for retail purchaser information. For OBRIS, FRNP, and MS, we observed NTC analysts receiving and entering data into the systems and processing the original data submissions—either electronically or through scanning and saving documents—including quality-control checks. For A2K, we reviewed budgetary information to determine the source of funding for the system for fiscal year 2008 through fiscal year 2014. We also interviewed representatives from the contractor that manages A2K, and 3 of 35 industry members that use A2K, to better understand how the system functions. We selected industry members that had several years of experience using A2K and reflected variation in federal firearms licensee (FFL) size and type. Although our interviews with these industry members are not generalizable, they provided us with insight on the firearms industry’s use of A2K. In order to evaluate the contents of FRNP for the presence of retail purchaser information and compliance with the appropriations act restriction and FRNP policies, we reviewed several fields of data for the entire population of records. During our site visit, we also reviewed additional fields of data for a generalizable sample of records and the associated submission forms that are used to populate the records. 
For this sample, we compared selected data in the system to information on the forms, and collected information from the forms. We drew a stratified random probability sample of 434 records from a total population of 41,625 FRNP records entered from June 1991 through July 2015. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. We stratified the population by active/inactive record status and new/old (based on a cutoff of Nov. 1, 2004). Each sample element was subsequently weighted in the analysis to account statistically for all the records, including those that were not selected. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. All percentage estimates from the review of the generalizable sample of FRNP records have margins of error at the 95 percent confidence level of plus or minus 5 percentage points or less, unless otherwise noted. For our review of the submission forms associated with FRNP records, we reviewed 195 forms entered into FRNP from November 2004 through July 2015 that were sampled from the “new” stratum. Prior to November 2004, the submission forms did not include selection options for criteria for entry into FRNP. We therefore only reviewed the more recent forms in order to assess the presence of criteria on these forms. Our review of these forms is generalizable to submission forms entered into FRNP from November 2004 through July 2015. 
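The weighting and confidence-interval logic described above can be sketched in a few lines. This is a minimal illustration only: the stratum record counts and review tallies below are hypothetical (the report does not publish per-stratum figures), and the real estimation would use the actual sample results.

```python
import math

def stratified_estimate(strata, z=1.96):
    """Estimate a population proportion from a stratified random sample.

    Each stratum dict holds N (population records), n (sampled records),
    and hits (sampled records with the attribute of interest). Each
    sampled record is weighted by N/n to account statistically for the
    records that were not selected.
    """
    total = sum(s["N"] for s in strata)
    # Weighted point estimate of the proportion.
    p = sum((s["N"] / total) * (s["hits"] / s["n"]) for s in strata)
    # Stratified variance with a finite population correction per stratum.
    var = sum(
        (s["N"] / total) ** 2
        * (1 - s["n"] / s["N"])                       # finite population correction
        * (s["hits"] / s["n"]) * (1 - s["hits"] / s["n"]) / (s["n"] - 1)
        for s in strata
    )
    return p, z * math.sqrt(var)  # point estimate, 95 percent margin of error

# Hypothetical strata loosely mirroring the 434-of-41,625 FRNP sample,
# split at the Nov. 1, 2004 cutoff described above.
strata = [
    {"N": 30000, "n": 300, "hits": 150},  # "new" records (invented counts)
    {"N": 11625, "n": 134, "hits": 60},   # "old" records (invented counts)
]
p, moe = stratified_estimate(strata)
# With these invented inputs, moe comes out under 0.05, consistent in form
# with the "plus or minus 5 percentage points or less" reported above.
```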
All percentage estimates from the review of submission forms have margins of error at the 95 percent confidence level of plus or minus 3 percentage points or less, unless otherwise noted. We assessed the reliability of the FRNP data by conducting electronic tests of the data for obvious errors and anomalies, interviewing staff responsible for managing the data, and reviewing relevant documentation, and concluded that these data were sufficiently reliable for the purposes of our report. For MS, we observed the process of querying to identify particular records. We determined the selected data systems’ compliance with the appropriations act restriction and compared the systems against multiple ATF policies on the collection and maintenance of information, as well as criteria in Standards for Internal Control in the Federal Government related to control activities for communication and for the access to and design of information systems. We conducted this performance audit from January 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Data sources: FFLs send reports to NTC on a specified form (ATF Form 3310.4)
Contents related to firearms purchaser information: Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number)
Who can view the information: About 396 ATF Firearms Tracing System (FTS) users, primarily NTC personnel, and the 3,050 ATF users, which includes ATF agents. ATF eTrace users outside of NTC are generally to be limited to viewing firearms and requesting agent information.
Exports information to: eTrace; FIRES; FTS (Data related to MS are contained in FTS.)

Data sources: Out-of-business FFLs send firearms transaction records to NTC, specifically acquisition and disposition logbooks and a specified form (ATF Form 4473)
Contents related to firearms purchaser information: Retail purchaser information of prohibited individuals who attempted to purchase a firearm (e.g., name); firearms information (e.g., serial number, model)
Imports information from: Federal Licensing System (FLS)

National Tracing Center (NTC)
Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); federal firearms licensee (FFL) information (e.g., FFL name, FFL number)
ATF employees; federal, state, local, and foreign law enforcement agencies. Non-ATF users have access to information on their own trace requests and those from agencies with which they have a memorandum of understanding.
Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, address); FFL information (e.g., FFL name, FFL number)
Firearms information (e.g., serial number, model); retail purchaser, possessor, and associates information (e.g., first and last name); FFL information (e.g., city and state)

Contents related to firearms purchaser information: Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number)
Firearms information (e.g., serial number, model), retail purchaser information (e.g., name); FFL information (e.g., FFL name, FFL number)
eTrace; FIRES; FTS (Data related to Interstate Theft are contained in FTS.)
Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number). Original and subsequent purchasers are maintained as part of the system.

FLS; National Firearms Act Special Occupational Tax System (NSOT)
Contents related to firearms purchaser information: Firearms information (e.g., serial number, model). Firearms possessor information—limited to first, middle, and last name—but that information is not searchable.
Firearms information (e.g., serial number, model); personal information for individuals including possessors, legal owners, or individuals who recovered the firearm (e.g., first and last name)
Collects information related to an individual currently under active criminal investigation who is suspected of illegally using or trafficking firearms. Suspect information (e.g., name, identification numbers such as driver’s license number)

Contents related to firearms purchaser information: Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number)
Who can view the information: ATF employees; federal, state, local, and foreign law enforcement agencies. Federal, state, local, and foreign law enforcement agencies only have access to information on their own trace requests and those from agencies with which they have a memorandum of understanding.
Exports information to: Electronic Trace Operation Workflow Reporting System; eTrace; FIRES; FTS (Data related to Trace are contained in FTS.)

Under the Brady Handgun Violence Prevention Act, Pub. L. No. 103-159, 107 Stat. 1536 (1993), and implementing regulations, the Federal Bureau of Investigation, within DOJ, and designated state and local criminal justice agencies use NICS to conduct background checks on individuals seeking to purchase firearms from FFLs or obtain permits to possess, acquire, or carry firearms. NICS was established in 1998. FTS does not contain original records; rather, it imports data from its subsystems in order to conduct analysis. NFRTR contains firearms purchaser information pursuant to Title 26 of the Internal Revenue Code, 26 U.S.C.
Chapter 53, regarding the registration and transfers of registration taxes. Specifically, it states that there should be a central registry, called the National Firearms Registration and Transfer Record, of all firearms as defined in the code, including machine guns, destructive devices such as bazookas and mortars, and “other” “gadget-type” weapons such as firearms made to resemble pens.

Appendix III: Out-of-Business Records Imaging System (OBRIS)

Since 1968, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has received several hundred million out-of-business records. According to ATF officials, as of May 5, 2016, there were about 8,060 boxes of paper records at the National Tracing Center (NTC) awaiting scanning into digital images before they are to be destroyed. At NTC, we observed these boxes lining the walls and stacked along cubicles and file cabinets, as shown in figure 4. The officials stated that, according to the General Services Administration, the facility floor will collapse if the number of boxes in the building increases to 10,000. Therefore, when the number of boxes approaches this quantity, NTC staff move the boxes to large shipping containers outside. Currently, there are three containers of boxes on the property, which contain records awaiting destruction. Prior to digital imaging, records were housed on microfilm or in storage boxes, and the system was referred to simply as the Microfilm Retrieval System. According to NTC officials, ATF is transitioning to digital imaging because of the benefits of improved image resolution, speed in accessing images, simultaneous accessibility of images to complete urgent traces, and less voluminous storage. The digitized records also helped mitigate the challenges of deteriorating microfilm images and maintaining the obsolete technology of microfilm.
According to officials, NTC has completed the process of converting the microfilm records to digital images, and officials expect that the images will become fully available to NTC analysts for tracing during fiscal year 2016. Currently, access is limited to a single workstation within NTC. While ATF finalizes this effort, staff continue to access the records in the NTC microfilm archive in order to respond to trace requests, as shown in figure 5. Before fiscal year 1991, ATF stored the out-of-business records in boxes with an NTC file number assigned to each federal firearms licensee (FFL). If, during a trace, ATF determined that the FFL who sold the firearm was out of business and had sent in its records, ATF employees were to locate the boxes containing the records and manually search them for the appropriate serial number. According to ATF, this was a time-consuming and labor-intensive process, which also created storage problems. In 1991, ATF began a major project to microfilm the out-of-business records and destroy the originals. Instead of in boxes, the out-of-business records were stored on microfilm cartridges, with the FFL numbers assigned to them. Although this system occupied much less space than the hard copies of the records, ATF officials said it was still time-consuming to conduct firearms traces because employees had to examine up to 3,000 images on each microfilm cartridge to locate a record. The officials stated that scanning records and creating digital images in OBRIS has sped up the ability to search for out-of-business records during a trace. According to the officials, it takes roughly 20 minutes to complete a trace with digital images and roughly 45 minutes using microfilm.
A provision in the fiscal year 2012 appropriation for the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) prohibits the use of the appropriation to consolidate or centralize records on the acquisition and disposition of firearms maintained by federal firearms licensees (FFL). This statutory restriction originated in the agency’s appropriation for fiscal year 1979 and, with some modification, was made permanent in fiscal year 2012. We reviewed whether ATF’s collection and maintenance of acquisition and disposition records in four data systems—Out-of-Business Records Imaging System (OBRIS), Access 2000 (A2K), Firearm Recovery Notification Program (FRNP), and Multiple Sales (MS)—violated this restriction. As discussed below, we considered the critical characteristics of each data system and related ATF activities in light of the restriction and in the context of ATF’s statutory authorities. We conclude that ATF violated the restriction when it collected and maintained the disposition records of FFL participants in A2K on a single server within the National Tracing Center (NTC) after those FFLs had discontinued their operations. We also agree with ATF’s 2009 determination that the agency violated the restriction when it collected and maintained records of certain FFLs engaged primarily in the sale of used firearms as part of FRNP. ATF’s failure to comply with the restriction on consolidation or centralization also violated the Antideficiency Act. Under section 1351 of title 31, United States Code, the agency is required to report these violations to the President and Congress. ATF, a criminal and regulatory enforcement agency within the Department of Justice (DOJ), is responsible for the regulation of the firearms industry and enforcement of federal statutes regarding firearms, including criminal statutes related to the illegal possession, use, transfer, or trafficking of firearms.
One component of ATF’s criminal enforcement mission involves the tracing of firearms used in crimes to identify the first retail purchaser of a firearm from an FFL. To conduct a trace, the requesting law enforcement agency must identify the manufacturer or importer of the firearm and its type, caliber, and serial number, as well as other information related to the recovery, crime, and possessor. According to ATF, NTC personnel must typically use the information provided by the law enforcement agency to contact the manufacturer or importer to determine when and to whom the firearm in question was sold. The manufacturer or importer may have sold the firearm to an FFL wholesaler. In that case, NTC personnel would contact the FFL wholesaler to determine when and to whom the firearm in question was sold, usually to an FFL retailer. The tracing process continues until NTC identifies the first retail purchaser who is a nonlicensee. The Gun Control Act of 1968, as amended, established a system requiring FFLs to record firearms transactions, maintain that information at their business premises, and make such records available to ATF for inspection and search under certain prescribed circumstances. This system was intended to permit law enforcement officials to trace firearms involved in crimes as described above while allowing the records themselves to be maintained by the FFLs rather than by a governmental entity. As originally enacted, the Gun Control Act required FFLs to submit such reports and information as the Secretary of the Treasury prescribed by regulation and authorized the Secretary to prescribe such rules and regulations as deemed reasonably necessary to carry out the provisions of the act. In 1978, citing the general authorities contained in the Gun Control Act, ATF proposed regulations that would have required FFLs to report most of their firearms transactions to ATF through quarterly reports. 
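The tracing chain described earlier in this section (manufacturer or importer, then wholesaler, then retailer, ending at the first nonlicensed retail purchaser) is essentially a walk along successive disposition records. The sketch below uses invented names and simplifies each step to a single known transfer; it is an illustration of the chain, not of any ATF system.

```python
# Hypothetical transfer chain: seller -> (buyer, buyer_is_licensee).
# In a real trace, NTC would contact each holder in turn to learn
# the next disposition; here the answers are prefilled.
transfers = {
    "Maker Co.": ("Wholesale LLC", True),
    "Wholesale LLC": ("Retail Shop", True),
    "Retail Shop": ("J. Doe", False),
}

def trace(start):
    """Follow dispositions until the holder is not a licensee."""
    holder, licensed = start, True
    while licensed:
        holder, licensed = transfers[holder]
    return holder  # the first retail purchaser who is a nonlicensee
```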
Under the proposed regulations, these FFL reports of sales and other dispositions would not have identified a nonlicensed transferee, such as a retail purchaser, by name and address. However, the proposed regulations prompted concerns from those who believed that the reporting requirements would lead to the establishment of a system of firearms registration. Congress included in ATF’s fiscal year 1979 appropriation for salaries and expenses a provision prohibiting the use of funds for administrative expenses for the consolidation or centralization of certain FFL records, or the final issuance of the 1978 proposed regulations. The provision continues to apply, with some modifications as described below. As enacted for fiscal year 1979, the provision stated “[t]hat no funds appropriated herein shall be available for administrative expenses in connection with consolidating or centralizing within the Department of the Treasury the records of receipt and disposition of firearms maintained by Federal firearms licensees or for issuing or carrying out any provisions of the proposed rules of the Department of the Treasury, Bureau of Alcohol, Tobacco and Firearms, on Firearms Regulations, as published in the Federal Register, volume 43, number 55, of March 21, 1978.”

The accompanying committee report explained the concern as follows. The Bureau of Alcohol, Tobacco, and Firearms (BATF) has proposed implementation of several new regulations regarding firearms. The proposed regulations, as published in the Federal Register of March 21, 1978, would require: (1) A unique serial number on each gun manufactured or imported into the United States. (2) Reporting of all thefts and losses of guns by manufacturers, wholesalers and dealers. (3) Reporting of all commercial transactions involving guns between manufacturers, wholesalers and dealers. The Bureau would establish a centralized computer data bank to store the above information.
It is important to note that the proposed regulations would create a central Federal computer record of commercial transactions involving all firearms—whether shotguns, rifles, or handguns. There are approximately 168,000 federally licensed firearms dealers, manufacturers, and importers. It is estimated that the proposed regulations would require submission of 700,000 reports annually involving 25 million to 45 million transactions. It is the view of the Committee that the proposed regulations go beyond the intent of Congress when it passed the Gun Control Act of 1968. It would appear that BATF and the Department of Treasury are attempting to exceed their statutory authority and accomplish by regulation that which Congress has declined to legislate. The reference to the 1978 proposed rules was removed from the annual provision as of the fiscal year 1994 appropriations act, but the prohibition against using funds for administrative expenses for consolidating or centralizing records was included in each of ATF’s annual appropriations through fiscal year 2012 in much the same form. In fiscal year 1994, the Treasury, Postal Service, and General Government Appropriations Act, 1994, expanded the prohibition to include the consolidation or centralization of portions of records and to apply to the use of funds for salaries as well as administrative expenses, stating “[t]hat no funds appropriated herein shall be available for salaries or administrative expenses in connection with consolidating or centralizing, within the Department of the Treasury, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees” (emphasis added).
As made permanent, the provision states “[t]hat no funds appropriated herein or hereafter shall be available for salaries or administrative expenses in connection with consolidating or centralizing, within the Department of Justice, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees” (emphasis added). The conference report accompanying the act explained that the provision had been made permanent. We previously considered ATF’s compliance with the restriction on consolidation or centralization in 1996 in connection with the agency’s Microfilm Retrieval System and Multiple Sales System. We stated that the restriction did not preclude all information practices and data systems that involved an element of consolidation or centralization, but that it had to be interpreted in light of its purpose and in the context of other statutory provisions governing ATF’s acquisition and use of information on firearms. In this respect, our analyses reflected the well-established principle that statutory provisions should be construed harmoniously so as to give them maximum effect whenever possible, avoiding the conclusion that one statute implicitly repealed another in the absence of clear evidence to the contrary. We found that the two systems complied with the statutory restriction on the grounds that ATF’s consolidation of records was incident to carrying out specific responsibilities set forth in the Gun Control Act of 1968, as amended, and that the systems did not aggregate data on firearms transactions in a manner that went beyond these purposes. Thus, our analysis did not turn on the presence or absence of retail purchaser information in the system, but rather on the extent to which the aggregation of data corresponded to a statutory purpose.
We employ a similar analytical approach, which ATF has also adopted, in assessing the four systems under review here, taking into account ATF’s statutory authorities and the critical characteristics of each system. Two of the four data systems we reviewed—OBRIS and MS—do not consolidate or centralize firearms records in violation of the restriction contained in the fiscal year 2012 appropriations act. In contrast, ATF violated the restriction when it collected and maintained disposition records of FFL participants in A2K on a single server at NTC after they had discontinued their operations. ATF also violated the restriction when it collected and maintained records of certain FFLs engaged primarily in the sale of used firearms as part of FRNP. OBRIS is ATF’s repository for records submitted by FFLs that have permanently discontinued their operations, as required by the Gun Control Act of 1968, as amended. Section 923(g)(1)(A) of title 18, United States Code, requires each FFL to maintain such records of importation, production, shipment, receipt, sale, or other disposition of firearms at its place of business as prescribed by the Attorney General. Under 18 U.S.C. § 923(g)(4), when a firearms business is discontinued and there is no successor, the records required to be maintained by FFLs must be delivered within 30 days to ATF. ATF’s system for maintaining the records of out-of-business FFLs for its statutory tracing function has evolved over time in response to logistical challenges and technological advances. Prior to fiscal year 1991, ATF maintained out-of-business FFLs’ records in hard copy, with a file number assigned to each FFL. During a trace, if ATF determined that a firearm had been transferred or disposed of by an out-of-business FFL, ATF employees manually searched the FFL’s records until they found the records corresponding to the serial number of the firearm being traced.
According to ATF, this was a time-consuming and labor-intensive process, and the volume of records created storage problems. In 1991, ATF began a major project to microfilm these records and destroy the originals. For fiscal year 1992, Congress appropriated $650,000 “solely for improvement of information retrieval systems at the National Firearms Tracing Center.” In fiscal year 1992, ATF began creating a computerized index of the microfilmed records containing the information necessary to identify whether ATF had a record relating to a firearm being traced. The index contained the following information: (1) the cartridge number of the microfilm; (2) an index number; (3) the serial number of the firearm; (4) the FFL number; and (5) the type of document on microfilm, i.e., a Firearms Transaction Record form or acquisition and disposition logbook pages. This information was stored on a database in ATF’s mainframe computer to allow searches. Other information, however, including a firearms purchaser’s name or other identifying information and the manufacturer, type, and model remained stored on microfilm cartridges and was not computerized. Therefore, this information was not accessible to ATF personnel through a text search. In our 1996 report, we concluded that the Microfilm Retrieval System did not violate the restriction on consolidation or centralization due to its statutory underpinnings and design. ATF had initially required out-of-business FFLs to deliver their records to ATF through a 1968 regulation. We found no indication in its legislative history that the appropriations act restriction was intended to overturn this regulation and noted that, historically, out-of-business records had been maintained at a central location.
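The 1992-era index described above can be pictured as a small searchable table: only the five listed fields were computerized, so a serial-number query returned microfilm locations rather than record contents. The entries below are invented for illustration.

```python
# Hypothetical index entries: only cartridge number, index number, serial
# number, FFL number, and document type were computerized. Purchaser names
# stayed on microfilm and could not be reached by a text search.
index = [
    {"cartridge": 101, "index_no": 7, "serial": "AB1234",
     "ffl": "9-99-999-99-9X-99999", "doc_type": "Firearms Transaction Record"},
    {"cartridge": 205, "index_no": 3, "serial": "CD5678",
     "ffl": "9-99-999-99-9X-99999", "doc_type": "A&D logbook pages"},
]

def locate(serial):
    """Return (cartridge, index_no) locations for a serial number.

    An employee must still load the cartridge and read the image to see
    any purchaser information; the index alone never exposes it.
    """
    return [(e["cartridge"], e["index_no"]) for e in index
            if e["serial"] == serial]
```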
We also explained that the Firearms Owners’ Protection Act of 1986 (FOPA) had codified the ATF regulation, affirming the agency’s authority to collect this information, and that a subsequent appropriations act had provided funding specifically for ATF’s microfilming effort. Finally, ATF’s system of microfilmed records did not capture and store certain key information, such as firearms purchaser information, in an automated file. In this regard, we found that the system did not aggregate information in a manner beyond that necessary to implement the Gun Control Act of 1968, as amended by FOPA. A later conference report expressed congressional support for digitizing these records: Conversion of Records.—The conferees recognize the need for the ATF to begin converting tens of thousands of existing records of out-of-business Federal firearms dealers from film to digital images at the National Tracing Center. Once the out-of-business records are fully converted, the search time for these records will be reduced to an average of 5 minutes per search from the current average of 45 minutes per search. This significant time saving will ultimately reduce overall costs and increase efficiency at the National Tracing Center. Therefore, the conference agreement includes a $4,200,000 increase for the ATF to hire additional contract personnel to begin this conversion. Similarly, the conference report accompanying the fiscal year 2006 appropriations act reflected the conferees’ support for ATF’s transition of out-of-business records to OBRIS. Since 2006, NTC has converted records submitted by FFLs discontinuing their operations to digital images in OBRIS. Specifically, NTC sorts and scans records provided by out-of-business FFLs, converting and storing them in an image repository on an electronic server. Images stored in OBRIS are generally indexed by FFL number.
The records themselves are stored as images without optical character recognition so that they cannot be searched or retrieved using text queries, but must be searched through the index, generally by FFL number. After narrowing down the possible records through an index search, an NTC analyst must manually scroll through digital images to identify the record of the particular firearm in question. The technological changes represented by OBRIS do not compel a different conclusion regarding ATF’s compliance with the restriction on consolidation or centralization from the one we reached in 1996 with respect to the predecessor system. The statutory basis for OBRIS is the same as for the Microfilm Retrieval System and OBRIS makes records accessible to the same extent as that system, functioning in essentially the same manner though with enhanced technology. As with the prior microfilm system, users identify potentially relevant individual records through manual review after searching an index using an FFL number, or firearms information if available. In this regard, OBRIS, like its predecessor, does not aggregate records in a manner beyond that required to implement the Gun Control Act of 1968, as amended by FOPA. We assessed A2K with regard to in-business records and out-of-business records. We conclude that A2K for in-business records complies with the restriction on consolidation or centralization, while A2K for out-of-business records violated the restriction. The Gun Control Act of 1968, as amended, requires FFLs to provide firearms disposition information to ATF in response to a trace request. Specifically, section 923(g)(7) of title 18, United States Code, requires FFLs to respond within 24 hours to a request for records to determine the disposition of firearms in the course of a criminal investigation. Prior to the implementation of A2K, FFLs could only respond to such requests manually.
A2K provides manufacturer, importer, and wholesaler FFLs with an automated alternative to facilitate their statutorily required response to ATF requests. A conference report described the program as follows: [T]he conferees are aware that the Access 2000 program was initiated by ATF to improve the efficiency and reduce the costs associated with firearms tracing incurred by Federal Firearms Licensees (FFLs). ATF and FFL importers, manufacturers, and wholesalers form a partnership in this effort. FFLs take their data from their mainframe computer and import it into a stand-alone server provided by the ATF. The National Tracing Center is connected to this server remotely by secure dial-up and obtains information on a firearm that is subject to a firearms trace. The conferees support this program, which reduces the administrative burdens of the FFL and allows the ATF around the clock access to the records. The ATF currently has 36 Access 2000 partners. The conferees encourage the ATF to place more emphasis on this program and expand the number of partners to the greatest extent possible. According to ATF, as of April 25, 2016, there are 35 industry members representing 66 individual manufacturer, importer, and wholesaler FFLs currently participating in A2K. ATF believes that A2K “… has appropriately balanced Congressional concerns related to the consolidation of firearm records with the necessity of being able to access firearm information in support of its underlying mission to enforce the Gun Control Act,” as amended. We agree. Given the statutory underpinning and features of the system for in-business FFLs, we conclude that ATF’s use of A2K for in-business records does not violate the restriction on the consolidation or centralization of firearms records. ATF’s use of A2K for in-business records is rooted in the specific statutory requirement that FFLs respond promptly to ATF trace requests in connection with criminal investigations.
In addition, although the system allows FFLs to respond to ATF’s trace requests virtually, ATF obtains the same information as it would otherwise obtain by phone, fax, or e-mail and in similar disaggregated form, that is, through multiple servers located at individual FFLs. Moreover, industry members retain possession and control of their disposition records and, according to ATF officials, may withdraw from using A2K—and remove their records from the ATF-accessible servers—at any time. For these reasons, we do not view A2K for in-business records to constitute the type of data aggregation prohibited by the appropriations act restriction on the consolidation or centralization of records within DOJ. During the course of our review, we found that when participating industry members permanently discontinued their operations, the disposition data maintained in connection with A2K was transferred to ATF, and ATF used the data when conducting firearms traces. Specifically, when an A2K participant went out of business, an ATF contractor remotely transferred the data on the server to a backup disk and the industry member shipped the backup disk with intact disposition records, as well as the blank server, to ATF’s NTC. ATF officials placed the data from the backup disk on a single partitioned server at NTC and accessed the data for firearms traces using the same type of interface and URL as used while the industry member was in business. As a result, in response to an industry member–specific query using an exact firearm serial number, the A2K out-of-business server would automatically generate the disposition information related to that firearm serial number. According to ATF, records of eight industry members were placed on the server at NTC from as early as late 2000 through mid-2012.
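The structural difference at issue can be sketched simply: while a participant is in business, its records sit on its own server and NTC queries one FFL at a time; the out-of-business arrangement moved several participants' records onto a single NTC server. The member names, serial numbers, and dispositions below are invented for illustration.

```python
# In-business A2K (hypothetical data): disposition records remain on
# discrete servers possessed by each industry member, and NTC directs a
# trace query at one named member's server at a time.
ffl_servers = {
    "Acme Arms":     {"SN100": "shipped to Retailer X, 2011-03-02"},
    "Bravo Imports": {"SN200": "shipped to Retailer Y, 2010-07-19"},
}

def trace_query(member, serial):
    """Mimic a trace request answered by a single member's own server."""
    return ffl_servers.get(member, {}).get(serial)

# Out-of-business arrangement (hypothetical): the same records combined on
# one server at NTC. Queries were still member-specific by exact serial
# number, but the records were now aggregated in ATF's possession.
ntc_server = {}
for member, records in ffl_servers.items():
    for serial, disposition in records.items():
        ntc_server[(member, serial)] = disposition
```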
While ATF estimated that there were approximately 20 million records associated with these industry members on the server, the agency did not have a means of ascertaining the actual number of records. The number of records on the ATF server would have been expected to grow as additional A2K participants discontinued their operations and provided their backup disks to ATF. However, during the course of our review, ATF officials told us that the agency planned to move all of the A2K records into OBRIS and that, once converted to OBRIS images, the records would be searchable like other OBRIS records. In January 2016, ATF officials reported that NTC was in the process of transferring all of the records from the A2K out-of-business records server to OBRIS and a quality-control process was under way to verify the accuracy of the transfer. They subsequently deleted all records from the server in March 2016. We conclude that ATF’s use of A2K with respect to out-of-business records violated the restriction on consolidation or centralization. In contrast to the discrete servers in the possession of the in-business industry members, ATF combined disposition records across industry members on the single, though partitioned, A2K server at NTC. In addition, the records were stored on the single A2K server in a manner that made them more easily searchable than other out-of-business records. Unlike OBRIS, which requires the manual review of potentially relevant records identified through an index, the A2K server within NTC generated records automatically in response to an industry member–specific text query, that is, exact firearm serial number. In addition, according to NTC officials, they could have modified the structure of the NTC server to achieve further aggregation, by programming the system to allow text searches across a broader set of data fields. As a result, ATF could have searched for records by name or other personal identifier.
As explained earlier, our analysis of ATF’s aggregation of firearms records turns not on the presence or absence of retail purchaser information, but rather on the extent to which the aggregation of data corresponds to a statutory purpose. ATF’s maintenance of out-of-business industry members’ disposition records on a single server at NTC was not incident to the implementation of a specific statutory requirement. As discussed above, A2K was designed to allow in-business industry members to respond promptly to ATF trace requests as required by 18 U.S.C. § 923(g)(7) without having to dedicate personnel to this function. Section 923(g)(7), however, has no applicability to FFLs once they discontinue operations. A separate statutory provision, 18 U.S.C. § 923(g)(4), applies to FFLs that permanently discontinue their operations. ATF has long maintained a separate system—formerly the Microfilm Retrieval System and currently OBRIS—to hold the records submitted under that provision, and the disposition records that ATF maintained on the NTC server were among the types of records required to be submitted under section 923(g)(4) for which ATF had created that system. Therefore, we find no statutory underpinning for ATF’s maintenance of out-of-business A2K participants’ disposition records on the server at NTC. Our implementation of A2K included strict security protocols to limit ATF access to only that information to which it is statutorily entitled, e.g., the next step in the distribution of the traced firearm. That is, ATF would simply have access to the same information it could obtain by calling the participating FFL. However, that calculus is altered when an FFL ceases participation in A2K. At that point, that FFL’s records become just like any other FFL records and, as such, must be stored in the same manner.
Otherwise, records which were formerly accessible on a discrete basis under A2K would be readily accessible in a database which would, in our opinion based on the 1996 GAO Report, violate the appropriation rider. Our decision, therefore, was to ensure that A2K records have the same character and are retrievable in the same manner as any other out-of-business records. In addition to removing all data from the A2K out-of-business records server, ATF officials reported that, going forward, the agency plans to convert records of A2K participants that go out of business directly into OBRIS images. However, they said, when such records are received from out-of-business FFLs, the time frame for converting the records into OBRIS images will depend on the backlog of electronic records awaiting conversion. Similarly, ATF officials told us that they had anticipated that A2K participants would submit acquisition and disposition records together, consistent with the format provided for in ATF’s regulations, for inclusion in OBRIS. They had not expected that A2K participants would satisfy any part of their statutory responsibility by providing their backup disks to the agency. However, even if industry members’ submission of disposition data on the backup disks could be said to be in furtherance of the portion of the statutory requirement pertaining to disposition records, given the existence and successful functioning of OBRIS, we conclude that ATF’s maintenance of those records on the NTC server went beyond the purposes of the Gun Control Act of 1968, as amended. We conclude that FRNP complies with the restriction on consolidation and centralization of firearms records when used as a tool for ATF agents in connection with an ATF criminal investigation.
However, ATF’s use of FRNP to maintain information on firearms identified during regulatory inspections of FFLs under the Southwest Border Secondary Market Weapons of Choice Program (SWBWOC), as discussed below, was a violation of the restriction. Under section 599A of title 28, United States Code, ATF is responsible for investigating criminal and regulatory violations of federal firearms laws, and for carrying out any other function related to the investigation of violent crime or domestic terrorism that is delegated to it by the Attorney General. Among other things, ATF is responsible for enforcing federal statutes regarding firearms, including those regarding illegal possession, use, transfer, or trafficking. FRNP, formerly known as the Suspect Gun Program, was established in 1991 within the Firearms Tracing System to provide an investigative service to ATF agents conducting criminal investigations. Through this program, ATF records information—manufacturer, serial number, and type—about firearms that have not yet been recovered by other law enforcement authorities, but are suspected of being involved in criminal activity and are associated with an ATF criminal investigation. When such firearms are recovered, ATF uses the information available through the program to notify the investigating ATF official and to coordinate the release of trace results to other law enforcement authorities with the ongoing ATF investigation. To enter firearms information into the system, ATF agents investigating potential criminal activity involving firearms must identify the firearms at issue, the number of an open ATF criminal investigation, and at least one of five specified criteria for using the system. The five criteria correspond to bases for ATF investigation. ATF agents also indicate on the submission form whether NTC should release trace results to requesters of a trace for the firearms listed on the form.
Where criminal investigations are ongoing and FRNP records are designated as “active,” NTC will notify the investigating ATF agent when the firearm described on the form is recovered. In addition, where the ATF agent has indicated that NTC should release trace information, NTC will notify the ATF agent and the requesting law enforcement agency of trace results. Where the ATF agent has indicated that NTC should not release trace information, the ATF agent is notified of the trace results and determines when that information may be released to the requesting law enforcement agency. For criminal investigations that have been closed, the FRNP record associated with the investigation is labeled “inactive,” although the records may provide investigative leads, according to ATF officials. In such cases, the ATF agent associated with the investigation is not notified of the recovery of the identified firearms or related trace requests, and the release of trace results to requesting law enforcement agencies proceeds without any delay. ATF is authorized by statute to investigate violations of federal firearms laws. As described above, FRNP is designed for the limited purpose of facilitating ATF’s conduct of specific criminal investigations under its jurisdiction. The inclusion of data in FRNP requires an open ATF investigation of an identified criminal matter, which helps to ensure that the data are maintained only as needed to support this investigative purpose. Further, ATF requires its agents to identify with specificity the firearms relevant to the investigation. As we observed in 1996, the restriction on consolidation or centralization does not preclude all data systems that involve an element of consolidation. 
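The active/inactive notification-and-release rules just described can be summarized in a small decision function. This is an illustrative sketch of the logic as described in the text, not ATF's implementation, and the names are invented.

```python
# Illustrative sketch (not ATF's system) of the FRNP routing rules
# described above, for a recovered firearm with a matching record.

def frnp_route(record_active: bool, agent_authorized_release: bool):
    """Return (who is notified of trace results, whether results go
    straight to the requesting law enforcement agency)."""
    if not record_active:
        # Closed investigation: no agent notification; results are
        # released to the requester without delay.
        return ([], True)
    if agent_authorized_release:
        # Open case, release pre-authorized: both are notified.
        return (["ATF agent", "requesting agency"], True)
    # Open case, release withheld: only the agent is notified, and the
    # agent decides when the requester may receive the results.
    return (["ATF agent"], False)
```

A call such as `frnp_route(True, False)` reflects the case where the investigating agent has asked NTC to hold trace results pending the agent's decision.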
Where ATF adheres to the limitations incorporated in the design of FRNP, the maintenance of information through FRNP is incident to ATF’s exercise of its statutory authority to conduct criminal investigations and does not involve the aggregation of data in a manner that goes beyond that purpose. In this respect, we conclude that it does not represent a consolidation or centralization of records in violation of the statutory restriction. In response to our inquiries about FRNP data, ATF officials told us that in 2009, the ATF Chief Counsel had concluded that the agency had violated the appropriations restriction in connection with the system. Specifically, ATF officials told us that the agency had maintained records on the inventories of certain FFLs in violation of the restriction, from 2007 through 2009 under ATF’s Southwest Border Secondary Market Weapons of Choice (SWBWOC) Program. We agree with the ATF Chief Counsel’s conclusion that its collection and maintenance of information in connection with this program violated the restriction on the consolidation or centralization of firearms records. In October 2005, the governments of the United States and Mexico instituted a cooperative effort to address surging drug cartel–driven violence in Mexico and along the southwest border of the United States. ATF’s main role in this initiative was to develop strategies and programs to stem the illegal trafficking of firearms from the United States to Mexico. ATF determined that used gun sales—referred to in the industry as “secondary market” sales—played a significant role in firearms trafficking to Mexico, particularly for the types of firearms most sought by the Mexican drug cartels, known as “weapons of choice.” Accordingly, in June 2007, the agency developed a protocol to be used during its annual inspections of FFLs in the region engaged primarily in the sale of used firearms. 
This protocol, known as the SWBWOC Program, was intended to enhance ATF’s ability to track secondary market sales. It called for ATF investigators to record the serial number and description of all used weapons of choice in each FFL’s inventory and those sold or otherwise disposed of during the period covered by the inspection. Under the protocol, the investigators forwarded the information to the relevant ATF field division, which opened a single investigative file for all submissions from the area under its jurisdiction and determined whether any of the weapons had been traced since their last retail sale. After review, the field division forwarded the information to FRNP. According to ATF, the Dallas, Houston, and Los Angeles Field Divisions began to submit records from the SWBWOC Program to FRNP in July 2007, and the Phoenix Field Division began to do so in October 2007. The SWBWOC Program was cancelled on October 2, 2009, following a review by ATF’s Office of Chief Counsel of the process by which the secondary market weapons of choice information had been recorded and submitted to FRNP. The Office of Chief Counsel determined that the SWBWOC Program was not consistent with the consolidation or centralization restriction. It advised that information obtained from an FFL about a firearm in and of itself and unaccompanied by purchaser information could not be collected and consolidated absent a specific basis in statute or regulation, or a direct nexus to discrete law enforcement purposes such as a specific criminal investigation. The Office of Chief Counsel found that the collection of information from FFLs under the SWBWOC Program lacked these essential, individualized characteristics. We agree with ATF’s conclusion that the collection and maintenance of firearms information from the SWBWOC Program in FRNP exceeded the permissible scope of the appropriations act restriction.
As discussed above, our analysis of ATF’s aggregation of firearms data turns not on the presence or absence of retail purchaser information, but rather on the extent to which the aggregation of data corresponds to a statutory purpose. Here, ATF collected and maintained acquisition and disposition data without a statutory foundation based on nothing more than the characteristics of the firearms. The collection and maintenance of information about a category of firearms, “weapons of choice,” from a category of FFLs, primarily pawnbrokers, did not pertain to a specific criminal investigation within the scope of ATF’s statutory investigative authority. Nor did it fall within the scope of ATF’s authority to conduct regulatory inspections. For this reason, we conclude that the program involved the type of aggregation of information contemplated by Congress when it passed the restriction on the consolidation or centralization of firearms records. ATF deleted the related data from FRNP in March 2016. The Gun Control Act of 1968, as amended, requires FFLs to report transactions involving the sales of multiple firearms. Specifically, under 18 U.S.C. § 923(g)(3)(A), an FFL is required to report sales or other dispositions of two or more pistols or revolvers to a non-FFL at one time or during 5 consecutive business days. Under these circumstances, the FFL is required to report information about the firearms, such as type, serial number, manufacturer, and model, and the person acquiring the firearms, such as name, address, ethnicity, race, identification number, and type of identification to ATF. ATF enters data from these reports into the MS portion of its Firearms Tracing System so that it can monitor and deter illegal interstate commerce in pistols and revolvers. 
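The reporting trigger in section 923(g)(3)(A), two or more pistols or revolvers to the same non-FFL purchaser at one time or during 5 consecutive business days, can be sketched as a simple scan over a dealer's sales records. This is a simplified illustration, not an ATF or FFL algorithm: dates are treated as business-day indexes, all purchasers are assumed to be non-FFLs, and the contents of the actual report are omitted.

```python
# Simplified sketch of the multiple-sales trigger described above:
# 2+ pistols or revolvers to the same purchaser within any window of
# 5 consecutive business days (a window covers day gaps of 4 or less).
from collections import defaultdict

HANDGUNS = {"pistol", "revolver"}

def reportable_purchasers(sales):
    """sales: iterable of (purchaser, firearm_type, business_day).
    Returns the purchasers whose handgun purchases would trigger a
    multiple-sales report under this simplified model."""
    by_buyer = defaultdict(list)
    for buyer, ftype, day in sales:
        if ftype in HANDGUNS:  # rifles, shotguns, etc. do not count here
            by_buyer[buyer].append(day)
    flagged = set()
    for buyer, days in by_buyer.items():
        days.sort()
        # On sorted days, some pair falls in a 5-business-day window
        # exactly when two adjacent purchases are 4 or fewer days apart.
        for i in range(len(days) - 1):
            if days[i + 1] - days[i] <= 4:
                flagged.add(buyer)
                break
    return flagged

sales = [
    ("A", "pistol", 1), ("A", "revolver", 3),  # 2 handguns, days 1 and 3
    ("B", "pistol", 1), ("B", "rifle", 2),     # only 1 handgun
    ("C", "pistol", 1), ("C", "pistol", 9),    # 8 business days apart
]
print(reportable_purchasers(sales))  # {'A'}
```

Only purchaser A trips the trigger: B bought one handgun and one rifle, and C's two pistols fall outside any 5-business-day window.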
Our 1996 report examined the Multiple Sales System and found that it did not violate the prohibition on the consolidation or centralization of firearms records because the collection and maintenance of records was incident to a specific statutory responsibility. In connection with our current review, we observed the functioning of the present system for reports of multiple sales. We found no changes since 1996 that would suggest a different conclusion with respect to ATF’s compliance with the appropriations act restriction. As we reported in 1996, a regulatory requirement for FFLs to prepare and provide multiple sales reports to ATF existed before the prohibition on consolidation or centralization of firearms records was enacted in fiscal year 1979 and there was no indication in the legislative history that the prohibition was intended to overturn ATF’s existing practices with respect to multiple sales. In addition, we explained that the Firearms Owners’ Protection Act had codified the ATF regulation, affirming the agency’s authority to collect this information. FOPA’s requirement that FFLs send the reports “to the office specified” on an ATF form suggested that ATF could specify that the information be sent to a central location. Our review of FOPA’s legislative history confirmed our interpretation of the statute. When considering the passage of FOPA, Congress clearly considered placing constraints on ATF’s maintenance of multiple sales reports, but declined to do so. Specifically, the Senate-passed version of FOPA prohibited the Secretary of the Treasury from maintaining multiple sales reports at a centralized location and from entering them into a computer for storage or retrieval. This provision was not included in the version of the bill that was ultimately passed. In light of the above, we reach the same conclusion as we did in 1996 and find that ATF’s use of MS complies with the restriction on the consolidation or centralization of firearms records. 
In addition, ATF has collected and maintained information on the multiple sales of firearms under a separate authority, 18 U.S.C. § 923(g)(5)(A). Section 923(g)(5)(A) authorizes the Attorney General to require FFLs to submit information that they are required to maintain under the Gun Control Act of 1968, as amended. This provision was also included in FOPA. Relying on this authority, ATF issues “demand letters” requiring FFLs to provide ATF with specific information. In 2011, ATF issued a demand letter requiring certain FFLs in Arizona, California, New Mexico, and Texas to submit reports of multiple sales or other dispositions of particular types of semiautomatic rifles to non-FFLs (referred to as “Demand Letter 3” reports). These reports are submitted to ATF and included in the MS portion of its Firearms Tracing System. According to ATF, the information was intended to assist in its efforts to investigate and combat the illegal movement of firearms along and across the southwest border. Several FFLs challenged the legality of ATF’s demand letter, asserting, among other things, that it would create a national firearms registry in violation of the fiscal year 2012 appropriations act restriction. In each of the cases, the court placed ATF’s initiative in its statutory context and held that the appropriations act did not prohibit ATF’s issuance of the demand letter. Similar to our 1996 analyses of the Out-of-Business Records and Multiple Sales Systems, the United States Court of Appeals for the Fifth Circuit examined the enactment of ATF’s authority to issue demand letters in relation to the appropriations act restriction. 
The court observed that ATF’s demand letter authority was enacted as part of FOPA and that because FOPA “clearly contemplate[s] ATF’s collection of some firearms records,” the appropriations provision did not prohibit “any collection of firearms transaction records.” In this regard, the court further noted that the plain meaning of “consolidating or centralizing” did not prohibit the collection of a limited amount of information. Other courts also emphasized that the ATF 2011 demand letter required FFLs to provide only a limited subset of the information that they were required to maintain, as opposed to the substantial amount of information that they believed would characterize a “consolidation or centralization.” For example, the Court of Appeals for the District of Columbia Circuit enumerated the limitations on ATF’s 2011 collection of information, noting that it applied to (1) FFLs in four states; (2) who are licensed dealers and pawnbrokers; (3) who sell two or more rifles of a specific type; (4) to the same person; (5) in a 5-business-day period. The court found that because ATF sent the demand letter to a limited number of FFLs nationwide and required information on only a small number of transactions, “the . . . demand letter does not come close to creating a ‘national firearms registry.’” In light of the court decisions regarding ATF’s exercise of its statutory authority in this context, we conclude that the Demand Letter 3 initiative does not violate the restriction on the consolidation or centralization of firearms records. Two of the data systems under review, OBRIS and MS, comply with the provision in ATF’s fiscal year 2012 appropriation prohibiting the use of funds for the consolidation or centralization of firearms records. ATF collects and maintains firearms transaction information in each system incident to the implementation of specific statutory authority, and it does not exceed those statutory purposes.
ATF’s A2K system for in-business FFLs and its maintenance of certain firearms information pertinent to criminal investigations in FRNP are likewise consistent with the appropriations act restriction. However, ATF’s collection and maintenance of out-of-business A2K records on the server at NTC violated the restriction, as did its collection and maintenance of data from certain FFLs as part of the SWBWOC Program. In both cases, ATF’s aggregation of information was not supported by any statutory purpose. ATF’s failure to comply with the prohibition on the consolidation or centralization of firearms records also violated the Antideficiency Act. The Antideficiency Act prohibits making or authorizing an expenditure or obligation that exceeds available budget authority. As a result of the statutory prohibition, ATF had no appropriation available for the salaries or administrative expenses of consolidating or centralizing records, or portions of records, of the acquisition and disposition of firearms in connection with the SWBWOC Program or A2K for out-of-business records. The Antideficiency Act requires that the agency head “shall report immediately to the President and Congress all relevant facts and a statement of actions taken.” In addition, the agency must send a copy of the report to the Comptroller General on the same date it transmits the report to the President and Congress. In addition to the contact named above, Dawn Locke (Assistant Director) and Rebecca Kuhlmann Taylor (Analyst-in-Charge) managed this work. In addition, Willie Commons III, Susan Czachor, Michele Fejfar, Justin Fisher, Farrah Graham, Melissa Hargy, Jan Montgomery, and Michelle Serfass made significant contributions to the report. Also contributing to this report were Dominick M. Dale, Juan R. Gobel, Eric D. Hauswirth, Ramon J. Rodriguez, and Eric Winter. 
| ATF is responsible for enforcing certain criminal statutes related to firearms, and must balance its role in combatting the illegal use of firearms with protecting the privacy rights of law-abiding gun owners. As part of this balance, FFLs are required to maintain firearms transaction records, while ATF has the statutory authority to obtain these records under certain circumstances. ATF must also comply with an appropriations act provision that restricts the agency from using appropriated funds to consolidate or centralize FFL records. GAO was asked to review ATF's compliance with this restriction. This report (1) identifies the ATF data systems that contain retail firearms purchaser data and (2) determines whether selected ATF data systems comply with the appropriations act restriction and adhere to ATF policies. GAO reviewed ATF policy and program documents, observed use of data systems at NTC, reviewed a generalizable sample of one system's records, and interviewed ATF officials at headquarters and NTC. To carry out its criminal and regulatory enforcement responsibilities, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has 25 firearms-related data systems, 16 of which contain retail firearms purchaser information from a federal firearms licensee (FFL)—such as firearms importers and retailers. GAO selected 4 systems for review that are used in the firearms tracing process, based on factors such as the inclusion of retail purchaser information and original data. The Out-of-Business Records Imaging System (OBRIS) stores nonsearchable images of firearms records from out-of-business FFLs. Such FFLs are required by law to provide their records to ATF. Access 2000 (A2K) provides servers for National Tracing Center (NTC) personnel to electronically search participating FFLs' records at their premises for firearms disposition information during a trace. 
The Firearm Recovery Notification Program (FRNP) maintains information on firearms that have not yet been recovered by law enforcement, but are suspected of being involved in criminal activity and are associated with an ATF criminal investigation. Multiple Sales (MS) includes firearms information from multiple sales reports. FFLs are required by law to report to ATF sales of two or more revolvers or pistols during 5 consecutive business days. ATF policy requires that certain information in MS be deleted after 2 years if the firearm has not been connected to a trace. Of the 4 data systems, 2 fully comply and 2 did not always comply with the appropriations act restriction prohibiting consolidation or centralization of FFL records. ATF addressed these compliance issues during the course of GAO's review. ATF also does not consistently adhere to its policies. Specifically: OBRIS complies with the restriction and adheres to policy. A2K for in-business FFL records complies with the restriction. A2K for out-of-business FFL records did not comply with the restriction because ATF maintained these data on a single server at ATF. Thus, ATF deleted the records in March 2016. In addition, ATF policy does not specify how, if at all, FFLs may use A2K records to meet out-of-business record submission requirements. Such guidance would help ensure they submit such records. FRNP generally complies with the restriction. However, a 2007 through 2009 program using FRNP did not comply. ATF cancelled this program in 2009 and deleted the related data in March 2016. Also, a technical defect allows ATF agents to access FRNP data—including purchaser data—beyond what ATF policy permits. Aligning system capability with ATF policy would ensure that firearms purchaser data are only provided to those with a need to know. MS complies with the restriction, but ATF inconsistently adheres to its policy when deleting MS records. 
Specifically, until May 2016, MS contained over 10,000 names that were not consistently deleted within the required 2 years. Aligning the MS deletion policy with the timing of deletions could help ATF maintain only useful MS purchaser data and safeguard privacy. GAO recommends that ATF provide guidance to FFLs participating in A2K on the provision of records to ATF when they go out of business; align system capability with ATF policy to limit access to FRNP firearms purchaser information for ATF agents; and align timing and ATF policy for deleting MS records. ATF concurred with our recommendations. |
Multiemployer plans are established pursuant to collectively bargained pension agreements negotiated between labor unions representing employees and two or more employers and are generally jointly administered by trustees from both labor and management. Multiemployer plans typically cover groups of workers in such industries as trucking, building and construction, and retail food sales. These plans provide participants with limited benefit portability in that they allow workers the continued accrual of defined benefit pension rights when they change jobs, if their new employer is also a sponsor of the same plan. This arrangement can be particularly advantageous in industries like construction, where job change within a single industry is frequent over the course of a career. Multiemployer plans are distinct from single-employer plans, which are established and maintained by only one employer and where the plans may or may not be collectively bargained. Multiemployer plans also differ from so-called multiple-employer plans that are not generally established through collective bargaining agreements and where many such plans have separate funding accounts for each employer. Since the enactment of the National Labor Relations Act (NLRA) in 1935, collective bargaining has been the primary means by which workers can negotiate, through unions, the terms of their pension plan. In 1935, NLRA required employers to bargain with union representatives over wages and other conditions of employment, and subsequent court decisions established that employee benefit plans could be among those conditions. The Taft-Hartley Act amended NLRA to establish terms for negotiating such employee benefits and placed certain restrictions on the operation of any plan resulting from those negotiations. For example, employer contributions cannot be made to a union or its representative but must be made to a trust that has an equal balance of union and employer representation.
Since its enactment in 1974, multiemployer defined benefit pensions have been regulated by the Employee Retirement Income Security Act (ERISA), which Congress passed to protect the interests of participants and beneficiaries covered by private sector employee benefit plans. Title IV of ERISA created PBGC as a U.S. Government corporation to insure the pensions of participants and beneficiaries in private sector defined benefit plans. In 1980, Congress enacted the Multiemployer Pension Plan Amendments Act of 1980 (MPPAA) to protect the pensions of participants in multiemployer plans by establishing a separate PBGC multiemployer plan insurance program and by requiring any employer wanting to withdraw from a multiemployer plan to be liable for its share of the plan’s unfunded liability. This amount is based upon a proportional share of the plan’s unfunded vested benefits. Liabilities that cannot be collected from a withdrawn employer are “rolled over” and must eventually be funded by the plan’s remaining employers. PBGC operates distinct insurance programs for multiemployer plans and single-employer plans, which have separate insurance funds, different benefit guarantee rules, and different insurance coverage rules. The two insurance programs and PBGC’s operations are financed through premiums paid annually by plan sponsors, investment returns on PBGC assets, assets acquired from terminated single-employer plans, and by recoveries from employers responsible for underfunded terminated single-employer plans. Premium revenue totaled about $973 million in 2003, of which $948 million was paid into the single-employer program and $25 million paid to the multiemployer program. Over the last few years, the finances of PBGC’s single-employer insurance program have taken a severe turn for the worse.
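The "proportional share" idea behind MPPAA withdrawal liability can be illustrated with a simplified pro-rata calculation based on contribution shares. The statute's actual allocation methods are considerably more involved, and all employer names and dollar figures below are hypothetical.

```python
# Simplified illustration of MPPAA-style withdrawal liability: a pure
# pro-rata allocation by contribution share. Real statutory allocation
# methods are more complex; numbers here are invented.

def withdrawal_liability(unfunded_vested_benefits, contributions, employer):
    """Employer's share of the plan's unfunded vested benefits, in
    proportion to its share of total employer contributions."""
    share = contributions[employer] / sum(contributions.values())
    return unfunded_vested_benefits * share

contributions = {"Acme": 4.0, "Best Co": 3.0, "C Corp": 3.0}  # $ millions
uvb = 50.0  # plan's unfunded vested benefits, $ millions

liab = withdrawal_liability(uvb, contributions, "Acme")
print(f"Acme owes ${liab:.1f}M on withdrawal")  # $20.0M (4/10 of $50M)

# If Acme's liability proves uncollectible, it is "rolled over": the
# remaining employers must eventually fund it.
```

In this sketch, an employer responsible for 40 percent of contributions bears 40 percent of the plan's unfunded vested benefits on withdrawal.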
Although the program registered a $9.7 billion accumulated surplus as recently as 2000, it reported an $11.2 billion accumulated deficit for fiscal year 2003, primarily brought on by the termination of a number of large underfunded pension plans. Several underlying factors contributed to the severity of the plans’ underfunded condition at termination, including a sharp decline in the stock market, which reduced plan asset values, and a general decline in interest rates, which increased the cost of terminating defined benefit pension plans. Because of its accumulated deficit, the significant risk that other large underfunded plans might terminate, and other structural factors, we designated PBGC’s single-employer pension insurance program as a “high risk” program and added it to the list of agencies and major programs that we believe need urgent attention. In general, the same ERISA funding rules apply to both single-employer and multiemployer defined benefit pension plans. However, there are some important differences. For example, while single-employer plan sponsors can adjust their pension contributions to meet their needs, within the overall set of ERISA and Internal Revenue Code (IRC) rules, individual employers in multiemployer plans cannot as easily adjust their plan contributions. For multiemployer plans, contribution levels are usually negotiated through the collective bargaining process and are fixed for the term of the collective bargaining agreement, typically 2 to 3 years. Benefit levels are generally also fixed by the contract or by the plan trustees. Employer contributions to multiemployer plans are typically made at a set dollar amount per hour of covered work. For many multiemployer plans, contributions are directly tied to the total number of hours worked, and thus, to the number of active plan participants. With other things being equal, the reduced employment of active participants will result in lower contributions and reduced plan funding. The U.S.
employer-sponsored pension system has historically been an important component of total retirement income, providing roughly 18 percent of aggregate retirement income in 2000. However, millions of workers continue to face the prospect of retirement with no income from an employer-sponsored pension. The percentage of the workforce with pension coverage has been near 50 percent since the 1970s. Lower-income workers, part-time employees, employees of small businesses, and younger workers typically have lower rates of pension coverage. Retirees with pension incomes are more likely to avoid poverty. For example, 21 percent of retired persons without pension incomes had incomes below the federal poverty level, compared with 3 percent of those with pension incomes. For those workers covered by a pension, such coverage is increasingly provided by defined contribution (DC) pension plans. Surveys have reported a worker preference for defined contribution plans, with employers citing worker preference for transparency of plan value and improved benefit portability. As of 1998, the most recent published data available, 27 percent of the private sector labor force was covered by a DC plan as their primary pension plan, up from 7 percent in 1979. While multiemployer plan funding has exhibited considerable stability over the past 2 decades, available data suggest that many plans have recently experienced significant funding declines. Since 1980, aggregate multiemployer plan funding has been stable, with the majority of plans funded above 90 percent of total liabilities and average funding at 105 percent by 2000. Recently, however, it appears that a combination of stock market declines, low interest rates, and poor economic conditions has reduced the assets and increased the liabilities of many multiemployer plans.
In PBGC’s 2003 Annual Report, the agency estimated that total underfunding of underfunded multiemployer plans reached $100 billion by year-end, up from $21 billion in 2000, and that its multiemployer program had recorded a year-end 2003 deficit of $261 million, the first deficit in more than 20 years. While most multiemployer plans continue to provide benefits to retirees at unreduced levels, the agency has also increased its forecast of the number of plans that will likely need financial assistance, from 56 plans in 2001 to 62 plans in 2003. Private survey data are consistent with this trend, with one survey by an actuarial consulting firm showing the percentage of fully funded client plans declining from 83 percent in 2001 to 67 percent in 2002. In addition, long-standing declines in the number of plans and in worker participation continue. The number of insured multiemployer plans has dropped by a quarter since 1980, to fewer than 1,700 plans in 2003, the latest year for which data are available. Although in 2001, multiemployer plans in the aggregate covered 4.7 million active participants, representing about a fifth of all defined benefit plan participants, this number has dropped by 1.4 million since 1980. Aggregate funding for multiemployer pension plans remained stable during the 1980s and 1990s. By 2000, the majority of multiemployer plans reported assets exceeding 90 percent of total liabilities, with the average plan funded at 105 percent of liabilities. As shown in figure 1, the aggregate net funding of multiemployer plans grew from a deficit of about $12 billion in 1980 to a surplus of nearly $17 billion in 2000. From 1980 to 2000, multiemployer plan assets grew at an average annual rate of 11.7 percent, to about $330 billion, exceeding the 10.5 percent average annual growth rate of single-employer plan assets. 
During the same time period, liabilities for multiemployer and single-employer pensions grew at average annual rates of about 10.2 percent and 9.9 percent, respectively. A number of factors appear to have contributed to the funding stability of multiemployer plans, including the following:

Investment strategy. Historically, multiemployer plans appear to have invested more conservatively than their single-employer counterparts. Although comprehensive data are not available, some pension experts have suggested that defined benefit plans in the aggregate are more than 60 percent invested in equities, which are associated with greater risk and volatility than many fixed-income securities. Experts have stated that, in contrast, equity holdings generally comprise 55 percent or less of the assets of most multiemployer plans.

Contribution rates. Unlike single-employer plans, multiemployer plan funds receive steady contributions from employers because those amounts generally have been set through multiyear collective bargaining contracts. Participating employers, therefore, have less flexibility to vary their contributions in response to changes in firm performance, economic conditions, and other factors. This regular contribution income is in addition to any investment return and helps multiemployer plans offset any declines in investment returns.

Risk pooling. The pooling of risk inherent in multiemployer pension plans may also have buffered them against financial shocks and recessions, since the contributions to the plans are less immediately affected by the economic performance of individual employer plan sponsors. Multiemployer pension plans typically continue to operate long after any individual employer goes out of business because the remaining employers in the plan are jointly liable for funding the benefits of all vested participants.

Greater average plan size. The stability of multiemployer plans may also be due in part to their size. 
Large plans (1,000 or more participants) constitute a greater proportion of multiemployer plans than of single-employer plans. (See figs. 2 and 3.) While 55 percent of multiemployer plans are large, only 13 percent of single-employer plans are large, and 73 percent of single-employer plans have fewer than 250 participants, as shown in figure 2. However, the distribution of participants by plan size for multiemployer and single-employer plans is more comparable, with over 90 percent of both multiemployer and single-employer participants in large plans, as shown in figure 3. Although data limitations preclude any comprehensive assessment, available evidence suggests that since 2000, many multiemployer plans have experienced significant reductions in their funded status. PBGC estimated in its 2003 Annual Report that the aggregate deficit of underfunded multiemployer plans had reached $100 billion by year-end, up from a $21 billion deficit at the start of 2000. In addition, PBGC reported its own multiemployer insurance program deficit of $261 million for fiscal year 2003, the first deficit since 1981 and its largest ever. (See fig. 4.) While most multiemployer plans continue to provide benefits to retirees at unreduced levels, PBGC has also reported that the deficit was primarily caused by new and substantial probable losses, increasing the number of plans it classifies as likely to require financial assistance in the near future from 58 plans with expected liabilities of $775 million in 2002 to 62 plans with expected liabilities of $1.25 billion in 2003. Private survey data and anecdotal evidence are consistent with this assessment of multiemployer funding losses. One survey by an actuarial consulting firm showed that the percentage of its multiemployer client plans that were fully funded declined from 83 percent in 2001 to 67 percent in 2002. Other, more anecdotal evidence suggests increased difficulties for multiemployer plans. 
Discussions with plan administrators have indicated that there has been an increase in the number of plans with financial difficulties in recent years, with some plans reducing or temporarily freezing the future accruals of participants. In addition, IRS officials recently reported an increase in the small number of multiemployer plans (less than 1 percent of all multiemployer plans) requesting tax-specific waivers that would provide relief from current funding shortfall requirements. As with single-employer plans, falling interest rates coincident with stock market declines and generally weak economic conditions have contributed to the funding difficulties of many multiemployer plans. The decline in interest rates in recent years has increased pension plan liabilities for DB plans in general, because their liability for future promised benefits increases when computed using a lower interest rate. At the same time, declining stock markets decreased the value of any equities held in multiemployer plan portfolios to meet those obligations. Finally, because multiemployer plan contributions are usually based on the number of hours worked by active participants, any reduction in their employment will reduce employer contributions to the plan. Over the past 2 decades, the multiemployer system has experienced a steady decline in the number of plans and in the number of active participants. In 1980, there were 2,244 plans, and by 2003 the number had fallen to 1,631, a decline of about 27 percent. While a portion of the decline in the number of plans can be explained by consolidations through mergers, few new plans have been formed: only 5, in fact, since 1992. Meanwhile, the number of active multiemployer plan participants has declined in both relative and absolute terms. By 2001, active participants in multiemployer pension plans made up only about 4.1 percent of the private sector workforce, down from 7.7 percent in 1980 (see fig. 
5), with the total number of active participants decreasing from about 6.1 million to about 4.7 million. Finally, as the number of active participants has declined, the number of retirees has increased, from about 1.4 million to 2.8 million, and this increase has led to a decline in the ratio of active (working) participants to retirees in multiemployer plans. By 2001, there were about 1.7 active participants for every retiree, compared with 4.3 in 1980. (See fig. 6.) While the trend is also evident among single-employer plans, the decline in the ratio of active workers to retirees affects multiemployer funding more directly because employer contributions are tied to active employment. PBGC’s role regarding multiemployer plans includes monitoring plans for financial problems, providing technical and financial assistance to troubled plans, and guaranteeing a minimum level of benefits to participants in insolvent plans. For example, PBGC annually reviews the financial condition of multiemployer plans to identify those that may have potential financial problems in the near future. Agency officials told us that troubled plans often solicit the agency’s technical assistance since, under the multiemployer framework, affected parties have a vested interest in a plan’s survival. Occasionally, PBGC is asked to serve as a facilitator, in which case the agency works with all the parties associated with the troubled plan to improve its financial status. Examples of such assistance by PBGC include facilitating the merger of troubled plans into one stronger plan and the “orderly shutdown” of plans, allowing the affected employers to continue to operate and pay benefits until all liabilities are paid. 
Unlike its role in the single-employer program, where PBGC trustees weak plans and pays benefits directly to participants, PBGC does not take over the administration of multiemployer plans; instead, upon application, it provides financial assistance in the form of loans when plans become insolvent and are unable to pay benefits at PBGC-guaranteed levels. Such financial assistance is infrequent; for example, PBGC has made loans totaling $167 million to 33 multiemployer plans since 1980, compared with 296 trusteed terminations of single-employer plans and PBGC benefit payments of over $4 billion in 2002-2003 alone. PBGC officials believe that the low frequency of PBGC financial assistance to multiemployer plans is likely due to specific features of the multiemployer insurance regulatory framework: (1) the employers sponsoring the plan share the risk for providing benefits to all participants in the plan and (2) benefit guarantees are set at a lower level for the multiemployer insurance program than for the single-employer program. Agency officials say that together these features encourage the affected parties to collaborate on their own to address the plan’s financial difficulties. Several of PBGC’s functions regarding its multiemployer program and its single-employer program are similar. For example, under both programs PBGC monitors the financial condition of all plans to identify those that are at risk of requiring financial assistance. The agency maintains a database of financial information about such plans that draws its data from both PBGC premium filings and the Form 5500. Using an automated screening process that measures each plan against funding and financial standards, the agency determines which plans may be at risk of termination or insolvency. 
For both programs, PBGC also annually identifies plans that it considers probable or reasonably possible liabilities and enumerates their aggregate unfunded liabilities in the agency’s annual financial statements for each program. The type of assistance PBGC provides to troubled plans through its multiemployer program is shaped to a degree by the program’s definition of the “insurable event.” PBGC insures against multiemployer plan insolvency. A multiemployer plan is insolvent when its available resources are not sufficient to pay benefits at PBGC’s multiemployer guaranteed level for 1 year. In such cases, PBGC will provide the needed financial assistance in the form of a loan. If the plan recovers from insolvency, it must begin repaying the loan on a commercially reasonable schedule in accordance with regulations. Under the Multiemployer Pension Plan Amendments Act of 1980 (MPPAA), unlike its authority over single-employer plans, PBGC does not take over or otherwise assume responsibility for the liabilities of a financially troubled multiemployer plan. PBGC sometimes provides technical assistance to help multiemployer plan administrators improve their plans’ funding status or address other issues. Plan administrators may contact PBGC’s customer service representatives at designated offices to obtain assistance on such matters as premiums, plan terminations, and general legal questions related to PBGC. Agency officials told us that on a few occasions PBGC has worked with plan administrators to facilitate plan mergers, “orderly shutdowns,” and other arrangements to protect plan participants’ benefits. For example, in 1997, PBGC worked with the failing Local 675 Operating Engineers Pension Fund and the Operating Engineers Central Pension Fund to effect a merger of the two plans. However, PBGC officials also told us that the majority of mergers are crafted by private sector parties and have no substantial PBGC involvement. PBGC has also on occasion assisted in the orderly shutdown of plans. 
For example, agency officials told us that, in 2001, they helped facilitate the shutdown of the severely underfunded Buffalo Carpenters’ Pension Fund. PBGC has the authority to approve certain plan rules governing withdrawal liability payments and did so in this case, approving the plan’s request to lower its annual payments, which made it possible for the employers to remain in business and pay benefits until all liabilities were paid. In those cases where a multiemployer plan cannot pay guaranteed benefits, PBGC provides financial assistance in the form of a loan to allow the plan to continue to pay benefits at the level guaranteed by PBGC. A multiemployer plan need not be terminated to qualify for PBGC loans, but it must be insolvent, and it is allowed to reduce or suspend payment of the portion of benefits that exceeds the PBGC guarantee level. The number of loans and the amount of financial assistance from PBGC to multiemployer plans have been small in comparison with the benefits paid out under its single-employer program. Since 1980, the agency has provided loans to 33 plans totaling $167 million. In 2003, PBGC provided $5 million in loans to 24 multiemployer plans. This compares with 296 trusteed terminations of single-employer plans and PBGC benefit payments of over $4 billion to single-employer plan beneficiaries in 2002 and 2003 alone. PBGC officials say that this lower frequency of financial assistance is primarily due to key features of the multiemployer regulatory framework. First, compared with the framework governing the single-employer program, the regulatory framework governing multiemployer plans places greater financial risks on employers and workers and relatively less on PBGC. For example, in the event of the bankruptcy of an employer in a multiemployer plan, the remaining employers in the plan remain responsible for funding all plan benefits. 
Under the single-employer program, a comparable employer bankruptcy could leave PBGC responsible for any plan liabilities up to the PBGC-guaranteed level. In addition, the law provides a disincentive for employers seeking to withdraw from an underfunded plan by imposing a withdrawal liability based on the employer’s share of the plan’s unfunded vested benefits. Another key feature is that multiemployer plan participants also bear greater risk than their single-employer counterparts because PBGC guarantees benefits for multiemployer pensioners at a much lower dollar amount than for single-employer pensioners: about $13,000 for 30 years of service for the former, compared with about $44,000 annually per retiree at age 65 for the latter. PBGC officials explained that this greater financial risk on employers and lower guaranteed benefit level for participants in practice create incentives for employers, participants, and their collective bargaining representatives to avoid insolvency and to collaborate in trying to find solutions to a plan’s financial difficulties. The smaller size of PBGC’s multiemployer program might also contribute to the lower frequency of assistance. The multiemployer program’s $1 billion in assets and $1.3 billion in liabilities account for a relatively small portion of PBGC’s total assets and liabilities, representing less than 3 percent of the total. Further, the multiemployer program covers just 22 percent of all defined benefit plan participants. There are also far fewer plans in the multiemployer program, about 1,700, compared with about 30,000 single-employer plans. Other things being equal, there are fewer opportunities for potential PBGC assistance to multiemployer plans than to single-employer plans. A number of factors pose challenges to the long-term prospects of the multiemployer pension plan system. 
Some of these factors are specific to the features and nature of multiemployer plans, including a regulatory framework that some employers may perceive as financially riskier and less flexible than those covering other types of pension plans. For example, compared with a single-employer plan, an employer covered by a multiemployer plan cannot easily adjust annual plan contributions in response to the firm’s own financial circumstances. Collective bargaining itself, a necessary aspect of the multiemployer plan model and another factor affecting plans’ prospects, has also been in long-term decline, suggesting fewer future opportunities for new plans to be created or existing ones to expand. As of 2003, union membership, a proxy for collective bargaining coverage, accounted for less than 9 percent of the private sector labor force and has been steadily declining since 1953. Experts have identified other challenges to the future prospects of defined benefit plans generally, including multiemployer plans. These include the growing trend among employers to choose defined contribution plans over DB plans; the continued growth in the life expectancy of American workers, which results in participants spending more years in retirement and thus increases benefit costs; and increases in employer-provided health insurance costs, which raise employers’ total compensation costs generally, making them less willing or able to increase other elements of compensation, such as wages or pensions. Some factors that raise questions about the long-term viability of multiemployer plans are specific to certain features of the plans themselves, including features of the regulatory framework that some employers may well perceive as less flexible and financially riskier than those of other types of pension plans. 
For example, an employer covered by a multiemployer pension plan typically does not have the funding flexibility of a comparable employer sponsoring a single-employer plan. In many instances, the employer covered by the multiemployer plan cannot as easily adjust annual plan contributions in response to the firm’s own financial circumstances. This is because contribution rates are often fixed for periods of time by the provisions of the collective bargaining agreement. Employers that value such flexibility might be less inclined to participate in a multiemployer plan. Employers in multiemployer plans may also face greater financial risks than those in other forms of pension plans. For example, an employer sponsor of a multiemployer plan that wishes to withdraw from the plan is liable for its share of pension plan benefits not covered by plan assets upon withdrawal, rather than when the plan terminates. Employers in plans with unfunded vested benefits face an immediate withdrawal liability that can be costly, while employers in fully funded plans face the potential of costly withdrawal liability if the plan becomes underfunded in the future. Thus, an employer’s pension liabilities become a function not only of the employer’s own performance but also of the financial health of the other employer plan sponsors. These additional sources of potential liability can be difficult to predict, increasing employers’ level of uncertainty and risk. Some employers may hesitate to accept such risks if they can sponsor other plans that do not have them, such as 401(k)-type defined contribution plans. The future growth of multiemployer plans is also predicated on the future growth prospects of collective bargaining. Collective bargaining is an inherent feature of the multiemployer plan model. Collective bargaining, however, has been declining in the United States since the early 1950s. 
Currently, union membership, a proxy for collective bargaining coverage, accounts for less than 9 percent of the private sector labor force. Union membership accounted for about 19 percent of the civilian workforce in 1980 and about 27 percent in 1953. Pension experts have suggested a variety of challenges faced by today’s defined benefit pension plans, including multiemployer plans. These include the continued general shift away from DB plans to defined contribution plans and the increased longevity of the U.S. population, which translates into a lengthier and more costly retirement. In addition, the continued escalation of employer health insurance costs has placed pressure on the compensation costs of employers, including pensions. Employers have tended to move away from DB plans and toward DC plans since the mid-1980s. The number of PBGC-insured defined benefit plans declined from 97,683 in 1980 to 31,135 in 2002. (See fig. 7.) The number of defined contribution plans sponsored by private employers nearly doubled, from 340,805 in 1980 to 673,626 in 1998. Along with this continuing trend toward sponsoring DC plans, there has also been a shift in the mix of plans in which private sector workers participate. The Department of Labor reports that the percentage of private sector workers who participated in a primary DB plan decreased from 38 percent in 1980 to 21 percent by 1998, while the percentage of such workers who participated in a primary DC plan increased from 8 to 27 percent during this same period. Moreover, these same data show that, by 1998, the majority of active participants (workers participating in their employer’s plan) were in DC plans, whereas nearly 20 years earlier the majority of participants were in DB plans. 
Experts have suggested a variety of explanations for this shift, including the greater risk borne by employers with DB plans, greater administrative costs and more onerous regulatory requirements, and the fact that employees more easily understand and favor DC plans. These experts have also noted considerable employee demand for plans that state benefits in the form of an account balance and emphasize portability of benefits, such as 401(k)-type defined contribution pension plans. The increased life expectancy of workers also has important implications for defined benefit plan funding, including multiemployer plans. The average life expectancy of males at birth increased from 66.6 years in 1960 to 74.3 years in 2000, while females at birth experienced a rise of 6.6 years, from 73.1 to 79.7, over the same period. As general life expectancy has increased in the United States, there has also been an increase in the number of years spent in retirement. PBGC has noted that improvements in life expectancy have extended the average amount of time spent by workers in retirement from 11.5 years in 1950 to 18 years for the average male worker as of 2003. This increased duration of retirement has placed pressure on employers with defined benefit plans to increase their contributions to match the increase in benefit liabilities. This problem can be further exacerbated for those multiemployer plans with a shrinking pool of active workers because plan contributions are generally paid on a per-work-hour basis, and thus employers may have to increase contributions for each hour worked by the remaining active participants to fund any liability increase. Increasing health insurance costs are another factor affecting the long-term prospects of pensions, including multiemployer pensions. 
Recent increases in employer-provided health insurance costs are accounting for a rising share of total compensation, increasing pressure on employers’ ability to maintain wages and other benefits, including pensions. Bureau of Labor Statistics data show that the cost of employer-provided health insurance has risen steadily in recent years, rising from 5.4 percent of total compensation in 1999 to 6.5 percent as of the third quarter of 2003. A private survey of employers found that employer-sponsored health insurance costs rose about 14 percent between the spring of 2002 and the spring of 2003, the third consecutive year of double-digit increases and the highest premium increase since 1990. Plan administrators and employer and union representatives that we talked with identified the rising cost of employer-provided health insurance as a key problem facing plans, as employers are increasingly forced to choose between maintaining current levels of pension or medical benefits. Although available evidence suggests that multiemployer plans are not experiencing anywhere near the magnitude of the problems that have recently afflicted single-employer plans, there is cause for concern. Most significant is PBGC’s estimate of $100 billion in unfunded multiemployer plan liabilities that are being borne collectively by employer sponsors and plan participants. Compared with the single-employer program, PBGC does not face the same level of exposure from this liability at this time. This is because, as PBGC officials have noted, the current regulatory framework governing multiemployer plans redistributes financial risk toward employers and workers and away from the government and, potentially, the taxpayer. 
Employers face withdrawal and other liabilities that can be significant, while workers, should their plan become insolvent, face the prospect of guaranteed benefits far lower than those provided by PBGC’s single-employer program, with benefits subject to reduction to those lower guaranteed limits. Together, these features not only limit the exposure of PBGC and the taxpayer, they also create important incentives for all interested parties to resolve difficult financial situations that could otherwise result in plan termination. However, the declines in interest rates and equity markets, and weak economic conditions in the early 2000s, have increased the financial stress on both individual multiemployer plans and the multiemployer framework generally. Proposals to address this stress should be carefully designed and considered for their longer-term consequences. For example, proposals to shift plan liabilities to PBGC by making it easier for employers to exit multiemployer plans could help a few employers or participants but erode the existing incentives that encourage interested parties to independently face up to their financial challenges. In particular, placing additional liabilities on PBGC could ultimately have serious potential consequences for the taxpayer, given that with only about $25 million in annual income, a trust fund of less than $1 billion, and a current deficit of $261 million, PBGC’s multiemployer program has very limited resources to handle a major plan insolvency that could run into billions of dollars. The current congressional efforts to provide funding relief are at least in part a response to the difficult conditions experienced by many plans in recent years. 
However, these efforts are also occurring in the context of the broader, long-term decline in private sector defined benefit plans, including multiemployer plans, and the attendant rise of defined contribution plans, with their emphasis on greater individual responsibility for providing for a secure retirement. Such a transition could lead to greater individual control and reward for prudent investment and planning. However, if managed poorly, it could lead to adverse distributional effects for some workers and retirees, including a greater risk of a poverty level income in retirement. Under this transition view, the more fundamental issues concern how to minimize the potentially serious, negative effects of the transition, while balancing risks and costs for employers, workers, and retirees, and the public. These important policy concerns make Congress’s current focus on pension reform both timely and appropriate. We provided a draft of this report to Labor, Treasury, and PBGC. The agencies provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Labor, the Secretary of the Treasury, and the Executive Director of the Pension Benefit Guaranty Corporation; appropriate congressional committees; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-5932. Other major contributors include Joseph Applebaum, Orin B. Atwater, Susan Bernstein, Kenneth J. Bombara, Tim Fairbanks, Charles Jeszeck, Gene Kuehneman, Raun Lazier, and Roger J. Thomas. 
| Multiemployer defined benefit pension plans, which are created by collective bargaining agreements covering more than one employer and generally operated under the joint trusteeship of labor and management, provide coverage to over 9.7 million of the 44 million participants insured by the Pension Benefit Guaranty Corporation (PBGC). The recent termination of several large single-employer plans--plans sponsored by individual firms--has led to millions of dollars in benefit losses for thousands of workers and left PBGC, their public insurer, with an $11.2 billion deficit as of September 30, 2003. The serious difficulties experienced by these single-employer plans have prompted questions about the health of multiemployer plans. This report provides the following information on multiemployer pension plans: (1) trends in funding and worker participation, (2) PBGC's role regarding the plans' financial solvency, and (3) potential challenges to the plans' long-term prospects. Following 2 decades of relative financial stability, multiemployer plans as a group appear to have suffered recent and significant funding losses, while long-term declines in participation and new plan formation continue unabated. At the close of the 1990s, the majority of multiemployer plans reported assets exceeding 90 percent of total liabilities. Recently, however, stock market declines, coupled with low interest rates and poor economic conditions, appear to have reduced assets and increased liabilities for many plans. PBGC reported an accumulated net deficit of $261 million for its multiemployer program in 2003, the first since 1981. Meanwhile, since 1980, the number of plans has declined from over 2,200 to fewer than 1,700, and there has been a long-term decline in the total number of active workers. PBGC monitors multiemployer plans that may, in PBGC's view, present a risk of financial insolvency. 
PBGC also provides technical and financial assistance to troubled plans and guarantees a minimum level of benefits to participants in insolvent plans. PBGC annually reviews the financial condition of plans to determine its potential insurance liability. Although the agency does not trustee the administration of insolvent multiemployer plans as it does with single-employer plans, it does offer them technical assistance and loans. PBGC loans have been rare: since 1980, only 33 plans have received loans, totaling $167 million. Several factors pose challenges to the long-term prospects of the multiemployer system. Some are inherent in the multiemployer regulatory framework, such as the greater perceived financial risk and reduced flexibility for employers compared with other plan designs, and suggest that fewer employers will find such plans attractive. Also, the long-term decline of collective bargaining means fewer new participants to expand existing plans or create new ones. Other factors threaten all defined benefit plans, including multiemployer plans: the growing trend among employers to choose defined contribution plans; the increasing life expectancy of workers, which raises the cost of plans; and continuing increases in employer health insurance costs, which compete with pensions for employer funding. |
Since the terrorist attacks of 2001, HHS has worked to prepare for and mitigate potential consequences resulting from the intentional or unintentional release of CBRN agents. This work has involved the efforts of various agencies, industry, and subject-matter experts to develop, acquire, and store medical countermeasures. The development of medical countermeasures generally begins with research on able-bodied adults, according to the National Commission on Children and Disasters. However, children have unique anatomical, physiological, and psychological differences that can predispose them in some circumstances to more serious or different adverse effects during public health emergencies compared to adults. HHS classifies children as part of the “at-risk population” and evaluates CBRN medical countermeasures for this group after developing them for the general adult population. The at-risk population generally has unique characteristics that may interfere with an individual’s ability to access or receive medical countermeasures. For example, individuals who have a limited ability to receive or respond to information because of hearing, vision, speech, or cognitive limitations would need to have information provided in such a way that they could understand it. In addition, before, during, and after an emergency, individuals may lose the support of caregivers, family, or friends. If separated from their caregivers, young children may be unable to identify themselves, and may lack the cognitive ability to assess situations and react appropriately. The National Commission on Children and Disasters and other experts in the field of pediatrics and emergency preparedness have reported that a disparity exists in the quality of adult and pediatric emergency care, especially in HHS’s efforts to acquire FDA-approved pediatric medical countermeasures. 
HHS leads the federal public health and medical response to potential CBRN incidents, including identifying needed medical countermeasures to prevent or mitigate potential health effects from exposure and engaging with industry to develop and acquire the countermeasures. The following agencies and offices within HHS have responsibilities related to medical countermeasures. ASPR is responsible for leading federal government efforts to research, develop, evaluate, and acquire medical countermeasures to diagnose, prevent, treat, or mitigate the potential health effects from exposure to CBRN agents. Within ASPR, BARDA, which was established by the Pandemic and All-Hazards Preparedness Act of 2006, is responsible for overseeing and funding advanced development and acquisition of CBRN medical countermeasures. NIH is responsible for conducting and coordinating basic and applied research to develop new or enhanced medical countermeasures for CBRN agents. FDA is responsible for regulating the development and approval of drugs, biologics, diagnostics, and devices, which includes assessing the safety, efficacy, and quality of CBRN medical countermeasures before approval and postmarket. CDC is responsible for maintaining the SNS and supporting state and local public health departments in their efforts to respond to public health emergencies, including providing guidance and recommendations for the mass distribution and use of medical countermeasures. Since 2004, the Department of Homeland Security (DHS), in consultation with the Secretary of HHS, has determined that certain CBRN agents pose a threat to the nation that could affect national security. HHS has used these material threat determinations to assess the potential public health and medical consequences of the CBRN agents, and to establish specific medical requirements for developing countermeasures. 
Assessing the medical consequences and establishing medical requirements are both interim steps in determining the types and quantities of medical countermeasures required to respond to the agents. CBRN agents differ from one another in their potential to cause widespread illness and death. In the event of a release of a CBRN agent, medical countermeasures may be needed for rapid diagnosis, treatment, and prevention of infection, illness, and injury, but the dose and formulation of the countermeasures needed for an individual may vary according to traits such as the individual’s age and weight, especially when considering the needs of children. For example, in some cases, it is desirable to have multiple formulations of countermeasures available— such as oral liquid suspensions and tablets—to facilitate patient compliance. There are an increasing number of available medical countermeasures to protect the nation against CBRN agents. Supplies of countermeasures that are available are generally held in the SNS for use in a public health emergency. The SNS is designed to supplement and resupply state and local public health departments in the event of a national public health emergency such as a CBRN incident. To acquire medical countermeasures for use during a CBRN incident where there is a lack of a significant commercial market, Congress authorized the appropriation of approximately $5.6 billion and the use of a Special Reserve Fund for the procurement of certain countermeasures. The Special Reserve Fund has been used to acquire CBRN medical countermeasures for the SNS. The Pandemic and All-Hazards Preparedness Reauthorization Act of 2013 reauthorizes the Special Reserve Fund and authorizes $2.8 billion over 5 years, from fiscal year 2014 through fiscal year 2018. In general, drugs, devices, diagnostics, and biologics such as vaccines, including CBRN countermeasures, cannot be marketed legally in the United States without FDA approval. 
To approve a countermeasure, FDA requires manufacturers to sufficiently demonstrate the safety, efficacy, and product quality of the countermeasure for the intended indication and population specified in the application. When products are approved, clinicians might prescribe them for unapproved indications or to unapproved populations, also referred to as “off-label use.” Generally, a doctor-patient relationship should exist for off-label dispensing of a medical countermeasure. In a CBRN incident, dispensing of countermeasures for approved or off-label indications may have to occur outside of the doctor-patient relationship. According to HHS, many medicines and vaccines used in normal, standard medical care for the pediatric population are prescribed using this off-label practice. However, one possible consequence of off-label use of countermeasures by children is that some children, depending on weight or growth rates, may not receive the most appropriate dose of a medical countermeasure. In addition, some children may be at risk for side effects that are unique to children, including adverse effects on their growth and development. To encourage the study of more drugs for pediatric use, Congress passed the Best Pharmaceuticals for Children Act in 2002, which provides financial incentives to product sponsors, and the Pediatric Research Equity Act of 2003. These laws encourage or require product sponsors to conduct pediatric studies, and also include labeling requirements, including labeling updates that result from pediatric drug studies. Although FDA regulates the use of medical countermeasures by approving their use by specific populations and for specific indications, during a public health emergency—such as a CBRN incident—FDA can authorize the use of those countermeasures by populations or for indications for which they have not been approved. This can occur in one of two ways.
First, FDA can authorize the use of countermeasures that are unapproved, or approved but for a different indication, in a declared emergency to diagnose, treat, or prevent serious or life-threatening diseases or conditions caused by CBRN agents, when there are no adequate, approved, and available alternatives, and other criteria are met. This is referred to as an Emergency Use Authorization (EUA). In order for FDA to authorize the emergency use of a countermeasure, it must be reasonable to believe that, based on the evidence available, the product may be effective in diagnosing, treating, or preventing such disease or condition. In addition, the known and potential benefits of the product must outweigh the known and potential risks (21 U.S.C. § 360bbb); the Secretary of HHS has delegated authority to make such determinations to the Commissioner of FDA. Second, FDA can allow the use of unapproved countermeasures under an investigational new drug (IND) protocol. While INDs may be submitted for a variety of purposes, in this report we use the term IND to refer only to those submitted for expanded access to medical countermeasures during an emergency. NIH provides funding to research centers and small biotechnology companies for basic and applied research and early development. Once an organization’s research is at the advanced development stage and moving toward the development of a product that will meet HHS’s specific requirements, it may partner with an established pharmaceutical or manufacturing company to continue the advanced development. Within HHS, the product transitions from NIH to BARDA to support its advanced development. If a countermeasure is not FDA-approved or licensed, its acquisition into the SNS is typically funded by the Project BioShield Special Reserve Fund. If a countermeasure is FDA-approved or licensed, CDC generally purchases the countermeasure for the SNS. Public health emergency response planning for CBRN incidents requires efforts at the federal, state, and local levels.
Governments at each level have developed emergency response plans that outline their respective responsibilities during public health emergencies, which include responsibilities for distributing and dispensing medical countermeasures to the public. The federal government is responsible for planning the federal response to CBRN incidents. HHS has developed response plans for specific CBRN agents to help coordinate the federal response to such emergencies. Federal assistance to state and local governments would be provided if resources were unavailable or if state and local governments were overwhelmed and requested public health or medical assistance from the federal government. HHS would then direct CDC to distribute medical countermeasures from the SNS to the states. Once CDC distributes medical countermeasures from the SNS to a state, in most instances the state then distributes the countermeasures to local governments, based on established plans. State and local governments are responsible for developing plans for receiving, distributing, and dispensing medical countermeasures from the SNS. These plans are intended to describe all functions that are required to accomplish these tasks, in order to get medical countermeasures to the affected population as quickly and efficiently as possible. Some states plan to receive countermeasures and immediately turn them over to a local jurisdiction for staging, distributing, and dispensing during an emergency. Other states plan to receive medical countermeasures at a state warehouse facility and then deliver them directly to points of dispensing (POD) or hospitals. Each state is responsible for determining the best method for its circumstances and resources. Local governments, in turn, are responsible for receiving and dispensing medical countermeasures in a timely and efficient manner.
About 60 percent of CBRN medical countermeasures in the SNS have been approved for children, but in many instances approval is limited to specific age groups. Specifically, PHEMCE officials stated that about 38 percent of the CBRN medical countermeasures in the SNS have been approved for children of all ages for treatment of certain CBRN threats. For example, ciprofloxacin and doxycycline as antimicrobials indicated for postexposure prophylaxis of inhalational anthrax, atropine as a treatment for exposure to nerve gas, and raxibacumab as an anthrax antitoxin have all been approved for use in the pediatric population for these indications. In addition, 22 percent of the CBRN medical countermeasures in the SNS have been approved for use by some, but not all, pediatric age groups for treatment of certain CBRN threats. For example, Prussian blue—a medical countermeasure that removes certain internalized radiological particles from the body—is approved for children ages 2 and above for that indication, but has not been approved for those less than 2 years of age. PHEMCE officials stated that the remaining 40 percent of the CBRN countermeasures have not been approved for any pediatric use. For example, anthrax vaccine adsorbed (AVA) has not been approved for use by any children. Furthermore, some of the CBRN medical countermeasures in the SNS have not been approved to treat individuals of any age for the specific indications for which they have been stockpiled. For example, ciprofloxacin is stockpiled in the SNS for the treatment of anthrax, plague, and tularemia, but is not approved for these indications. It is, however, approved for postexposure prophylaxis of inhalational anthrax in all age groups. In addition, for certain CBRN threats, there are no countermeasures available in the SNS because treatments for the conditions do not exist beyond supportive care. 
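The approval breakdown reported above (about 38 percent approved for all pediatric ages, 22 percent for some but not all ages, and 40 percent for no pediatric use) can be sanity-checked with a short script. This is a minimal sketch; the dictionary keys are illustrative labels, not PHEMCE terminology.

```python
# Reported PHEMCE shares of CBRN countermeasures in the SNS, by pediatric
# approval status, expressed as fractions of the stockpile.
# Figures are the rounded percentages stated in the report.
SHARES = {
    "approved_all_pediatric_ages": 0.38,
    "approved_some_pediatric_ages": 0.22,
    "not_approved_for_any_pediatric_use": 0.40,
}

def pediatric_approval_summary(shares):
    """Return (share approved for at least some children, total of all shares)."""
    approved = (shares["approved_all_pediatric_ages"]
                + shares["approved_some_pediatric_ages"])
    total = sum(shares.values())
    return approved, total

approved, total = pediatric_approval_summary(SHARES)
print(f"Approved for children in at least some age groups: {approved:.0%}")
print(f"All categories together account for: {total:.0%}")
```

The first figure reproduces the report's "about 60 percent" headline number; the second confirms the three categories cover the whole stockpile.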
According to HHS officials, almost all medical countermeasures in the SNS can be used by children in an emergency if already approved for children or if FDA authorizes their use through an EUA or if an IND protocol is in effect. Even products that are not approved for a CBRN indication may be used in a public health emergency under an EUA— after an emergency has been declared—or under an IND protocol. HHS officials told us that almost all medical countermeasures that are not approved for use by the pediatric population can be used in an emergency under an EUA. However, some countermeasures, such as AVA vaccine, lack sufficient data to support their use by the pediatric population under an EUA. This vaccine would need to be administered to the pediatric population under an IND protocol. HHS officials explained that two different IND protocols for AVA have been developed in preparation for a potential anthrax-related public health emergency. The first protocol is designed to provide children access to the vaccine. It would require parents or guardians to sign consent forms to allow their children to receive anthrax vaccine during an emergency. The second IND protocol would require parents or guardians to sign consent forms to allow their children to participate in a research study after receiving the vaccine. (See app. I for additional information on the regulatory status of types of CBRN medical countermeasures in the SNS for the pediatric population.) Although HHS can provide information on the proportion of SNS countermeasures that can be used by children, it cannot provide information on the funds invested in procuring these countermeasures. HHS does not separately track the funds that it has invested in the acquisition and development of pediatric medical countermeasures because it does not account separately for investments related to the different populations. 
HHS procures CBRN medical countermeasures for the SNS based on, among other things, the list of material threats to the nation, public health response and medical consequence assessments, and available resources. Since 2004, BARDA has invested over $4 billion of Project BioShield’s Special Reserve Fund, which supports advanced development and manufacturing of potential CBRN medical countermeasures, in contracts to support medical countermeasure development in healthy and at-risk populations, including the pediatric population. In addition, between 2009 and 2012, CDC’s budget to maintain or acquire licensed CBRN medical countermeasures for the SNS was over $1.5 billion; however, no funds were specifically designated exclusively for products to be used only by the pediatric population. HHS faces economic, regulatory, scientific, and ethical challenges in its efforts to develop and acquire CBRN medical countermeasures for children in an emergency. Despite these challenges, HHS is taking steps to focus on the pediatric population and develop pediatric formulations of existing medical countermeasures. HHS faces a variety of interrelated challenges in developing and acquiring pediatric CBRN medical countermeasures, including economic, regulatory, scientific, and ethical issues that impede the ability of HHS to address the needs of children during a CBRN incident. Economic challenges facing HHS include the high failure rate of research, development, approval, and licensure of most drugs, vaccines, and diagnostic devices. Agency and industry officials told us that the risk of failure in developing countermeasures for children is even higher than the risk in developing them for the adult population. This risk, as well as a lack of a commercial market for most CBRN medical countermeasures, has made it difficult for HHS to attract companies willing to invest in such development and has therefore impeded HHS’s ability to acquire the needed countermeasures. 
In addition, various reports have stated that insufficient economic incentives are available to encourage the private sector to invest millions of dollars to develop potential new pediatric medical countermeasures. According to CDC officials, while it is desirable to have oral liquid formulations available for young children who cannot swallow pills, it is not always practical to develop and acquire them for the SNS because such countermeasures for children can be more costly to procure and maintain, have shorter shelf lives, and may exceed manufacturer capability, as opposed to alternatives such as pill crushing and mixing that can be explored for all but the youngest children. CDC estimated that it would cost approximately $3 billion to purchase sufficient quantities of oral liquid formulations of countermeasures for the SNS to support the needs of all children—an amount well over CDC’s approximately $600 million annual budget for CBRN medical countermeasures over the past 5 years. Further, because the shelf life of liquids is shorter than that of tablets, oral liquid formulations would require more frequent investments. Finally, even if such funds were available, the manufacturing capacity to meet such acquisition requests may not be available. The FDA regulatory pathway for developing pediatric CBRN medical countermeasures also poses unique challenges, making it more difficult than for adult countermeasures. The Institute of Medicine, the National Biodefense Science Board, HHS, and FDA have stated that it is difficult to meet FDA’s requirements for data from adequate and well-controlled clinical investigations to support the approval of pediatric medical countermeasures because large, complex clinical trials are needed to prove safety and efficacy. As a result of this regulatory challenge, FDA has increasingly relied on alternative sources of data in order to approve, license, or authorize the use of medical countermeasures by children.
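The budget arithmetic here is simple but worth making explicit: at roughly $600 million per year, CDC's entire CBRN countermeasure budget would need about five years to cover the estimated $3 billion liquid-formulation purchase. A minimal sketch, using only the rounded figures stated in the report:

```python
# CDC's reported figures: ~$3 billion to stock oral liquid formulations for
# all children, versus a ~$600 million annual CBRN countermeasure budget.
LIQUID_FORMULATION_COST = 3.0e9   # one-time purchase estimate, USD
ANNUAL_CBRN_BUDGET = 0.6e9        # approximate annual budget, USD

# Years of the entire annual budget the purchase would consume.
years_of_budget = LIQUID_FORMULATION_COST / ANNUAL_CBRN_BUDGET
print(f"Equivalent to {years_of_budget:.0f} years of the full CBRN budget")
```

And because liquids have shorter shelf lives than tablets, that outlay would recur more often than the equivalent tablet purchase, compounding the gap.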
For example, FDA allows researchers to submit evidence of efficacy obtained from historical inferences and appropriate studies in animals in accordance with FDA’s Animal Rule. Another challenge to developing countermeasures for children is that manufacturers are permitted, under certain circumstances, to avoid testing drugs for use by children. For example, manufacturers of CBRN countermeasures often seek an orphan drug designation for a new countermeasure, which, if granted, exempts the manufacturer from the requirement to conduct pediatric studies under the Pediatric Research Equity Act. There are scientific challenges in obtaining sufficient pediatric safety and efficacy information to appropriately inform the use of CBRN medical countermeasures for children. For example, FDA officials told us that extrapolating data to support efficacy information on a medical countermeasure from animal studies presents not only regulatory, but also complex scientific challenges to understanding how children would react to exposure to CBRN agents. NIH and industry officials told us that exposure in juvenile animal models is also not well understood. For example, extrapolating data from animal studies presents other scientific challenges in understanding the response to the medical countermeasures used to prevent or treat the disease or condition. Appropriate animal models have not yet been developed for many CBRN agents. In addition, the presentation of the disease or condition that humans manifest following exposure to a CBRN agent may not be the same as that for animals following exposure to the same CBRN agent, thus complicating the task of researchers that are relying on animal models. Further, there is an initial hurdle of extrapolating data from animal models for adults first; only after those data have been extrapolated and applied to the adult population can researchers extrapolate to children. 
For example, in order to develop or indicate a countermeasure for use in children, scientists generally take an existing countermeasure that has already been developed for adults and then adjust certain variables such as weight-based dosing and delivery mechanisms to test the safety and efficacy of the countermeasure in children. This is not straightforward because children may have special susceptibilities to the CBRN agent and special age- and weight-dependent responses to the medical countermeasures. Finally, HHS faces ethical challenges in its efforts to develop and acquire pediatric medical countermeasures because, absent a CBRN event, it is generally not ethical or feasible to obtain dosing, safety, and efficacy data for children when there is no potential direct benefit to them in the context of a clinical trial. FDA-regulated clinical trials that include children as subjects must consider both the risks to which a child may be exposed in a clinical investigation and whether the proposed intervention offers a prospect of direct benefit to the child. Because children can be enrolled as subjects in research only when directly necessary and when the research is ethically sound, industry officials we met with stated that researching CBRN medical countermeasures’ effect on children is nearly impossible. Industry officials told us that knowing that CBRN medical countermeasure research has a clear, direct benefit to a child participating in a study would always be unlikely because diseases caused by CBRN agents do not generally occur naturally. Although challenges persist in developing and acquiring pediatric medical countermeasures, HHS is beginning to address gaps in the SNS for pediatric medical countermeasures by focusing agency efforts on children, developing pediatric formulations of medical countermeasures in the SNS, and preparing and reviewing EUA and IND application materials in advance of emergencies. 
HHS is taking steps to focus department-wide efforts on children’s CBRN medical countermeasure needs. In 2010, FDA announced its Medical Countermeasures Initiative, which is intended to foster the development and availability of medical countermeasures, including those intended to be used by children. FDA officials said the initiative has improved the regulatory pathway for advancing the development and acquisition of medical countermeasures, for example, by clarifying and streamlining its review process and countermeasure requirements, which may entice manufacturers to develop new, novel medical countermeasures, including countermeasures with pediatric applicability. In addition, in 2010, HHS increased its focus on children’s needs with the establishment of the CHILD Working Group, which was formed to identify and integrate activities related to the needs of children across all HHS inter- and intragovernmental disaster planning activities and operations. The CHILD Working Group has developed recommendations for how HHS can improve the delivery of care to children who are affected by disasters. In 2011, in the area of medical countermeasures, the CHILD Working Group recommended that HHS provide clarity in the regulatory pathway for pediatric medical countermeasures; obtain the appropriate data, when available, to provide clinical pediatric dosing and use guidance for existing medical countermeasures; and gather safety and efficacy data from nontraditional sources to support the use of pediatric medical countermeasures under EUAs and for eventual FDA approval. According to HHS officials, many of the recommendations from the CHILD Working Group are being adopted by HHS. For example, in 2011, HHS developed the Pediatric Obstetric Integrated Program Team, which includes pediatric and obstetric subject-matter experts who advise PHEMCE on pediatric and obstetric medical countermeasure issues.
This integrated program team is intended to ensure that pediatric medical countermeasure needs are consistently considered throughout the entire medical countermeasure development process and that pediatric subject-matter experts help weigh the complex ethical, scientific, and legal issues associated with studies that are necessary for the licensure and approval of medical countermeasures for children. In 2012 the integrated program team conducted a review of the contents of the SNS to determine the suitability of the contents for use by children, and it subsequently used the review to make recommendations to PHEMCE, and to petition for new medical countermeasure development. The content of the review is a work product of the integrated program team, and HHS has no plans to formally issue it. According to HHS, the findings and recommendations were considered during the 2012 SNS Annual Review. Further, in 2012, BARDA announced that where feasible and appropriate, it would be including development of medical countermeasures for the pediatric population as part of all base contracts moving forward. HHS has taken steps to support the development of CBRN medical countermeasure formulations for children, and has begun to base the pediatric dosing information on other evidence, such as by extrapolating from relevant and historical data of the countermeasure. Countermeasures are not approved for an indication unless data on the safety and efficacy of the countermeasure are available for a particular population, such as children. According to FDA officials, the agency has determined that in a smallpox emergency, the investigational smallpox vaccine under development may be authorized for use under an EUA in populations with compromised immune systems, including children.
Specifically, in 2007 a second-generation smallpox vaccine, developed for persons determined to be at high risk for infection, was licensed based on data from clinical trials and the routine vaccination of infants in the United States through 1972. The vaccine was not studied in pediatric populations; however, this second-generation vaccine was similar to the vaccine that was routinely used to vaccinate infants in the United States through 1972 and had been demonstrated to be safe and effective in children. Therefore, FDA has determined that this vaccine could be used in pediatric populations under its license in a smallpox emergency. Similarly, pralidoxime chloride, prescribed as an antidote to treat nerve agent poisoning, has also been approved for use by children based on the extrapolation of efficacy data from both the adult and pediatric populations. In addition, FDA has used historical data from other countries to support EUAs or product approvals for pediatric indications when analogous U.S. data were neither available nor obtainable. For example, according to FDA officials, in 1987, a radiological incident in Brazil provided the majority of the pediatric data that FDA reviewed to assess the safety and efficacy of a radiological countermeasure. The data included the use of the countermeasure by both adults and children. The review allowed FDA to approve the countermeasure for children ages 2 and older in 2003. PHEMCE has also encouraged adapting and manufacturing of oral liquid formulations of existing medical countermeasures in order to ease dispensing of countermeasures in the SNS for children. For example, BARDA is contracting with industry partners to manufacture a liquid form of a countermeasure that removes certain radioactive particles from the body through the intestinal tract. Additionally, PHEMCE is considering lowering the recommended age at which children should be administered oral liquid suspensions of doxycycline in lieu of crushed tablets. 
CDC officials told us that they would like to have more oral liquid suspensions considered for the SNS; however, PHEMCE officials have reported that crushing tablets would be an acceptable alternative and would have benefits for storage, dispensing, and dosing. As a result, FDA and CDC have developed instructions for crushing certain approved medical countermeasures for use by people who cannot swallow tablets or capsules. The crushing instructions include instructions for mixing the countermeasures with food or drink to make them more palatable to children. Crushing instructions generally include weight-based dosing instructions, the number of doses required per day, and instructions for how to crush pills and mix them. Relevant component agencies within HHS collaborate to prepare and review materials for EUAs and INDs in advance of public health emergencies to ensure that sufficient data are available to support authorizations for the use of certain medical countermeasures by children. According to CDC officials, the agency assesses, on a regular basis, the contents in the SNS and updates and assembles data and information for those countermeasures that are not approved for use by children so that in the event of a public health emergency, the contents can be disseminated to states quickly. Specifically, to prepare for such an emergency, CDC collaborates with BARDA, NIH, FDA, and manufacturers to develop the EUA and IND submissions in advance of an actual incident, so that all medical countermeasures in the SNS can be used by children. In addition, they consult with the American Academy of Pediatrics on pediatric issues. More than half of HHS’s emergency response plans that we examined included information about pediatric medical countermeasures. CDC and FDA developed guidance on pediatric dispensing for state and local government use. The state and local plans we examined also provided details about dispensing to the pediatric population during an emergency. 
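Crushing instructions of the kind described above pair weight bands with doses. A dispensing aid could encode such a table as a simple lookup. The bands and dose labels in this sketch are placeholders for illustration only; they are not drawn from any FDA or CDC instruction sheet and are not clinical guidance.

```python
# Illustrative weight-band lookup for a pediatric dispensing aid.
# The band boundaries and dose labels are PLACEHOLDERS -- not taken from
# FDA/CDC crushing instructions, and not clinical guidance.
DOSE_BANDS = [
    # (min_weight_kg inclusive, max_weight_kg exclusive, dose label)
    (0, 15, "dose A"),
    (15, 30, "dose B"),
    (30, float("inf"), "dose C"),
]

def dose_for_weight(weight_kg):
    """Return the dose label for a child's weight, per the band table."""
    for lo, hi, dose in DOSE_BANDS:
        if lo <= weight_kg < hi:
            return dose
    raise ValueError("weight out of range")

print(dose_for_weight(12))  # falls in the first band
```

The half-open bands ensure every weight maps to exactly one dose, mirroring how printed instructions avoid overlap between weight ranges.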
Of the 12 HHS CBRN response plans we reviewed, more than half included information about dispensing pediatric medical countermeasures. Specifically, 7 of these threat-specific plans contained information about medical countermeasures that could be dispensed for use by children in the event of a CBRN incident. The type of information included varied by plan. For example, two of the plans for responding to biological threats identified preferred and alternative countermeasures, and the appropriate dosages, that should be dispensed to children during an event. For a chemical incident, one of the plans indicated a premedication that should be used in pediatric patients before intubating them. The response plans for nuclear or radiological incidents also included information about medical countermeasures that could be used by children, although one plan noted that an EUA would be required before some of the countermeasures could be dispensed to the pediatric population. Although more than half of HHS’s response plans included information about dispensing specific countermeasures to children, HHS officials told us that the purpose of these plans is to provide guidance for emergency responses at the federal level, and not instructions for use at the state and local level, which is where dispensing to children would occur. While these plans are intended primarily to support the federal response, they could also be used by state and local governments to inform their activities as part of planning their own response to a public health emergency. HHS officials told us that they were moving away from the use of separate, threat-specific response plans and were developing a single “all-hazards” response plan. This all-hazards plan would include sections for responding to specific CBRN events. According to HHS officials, these sections would address pediatric dispensing, including EUA and IND requirements.
Additionally, a briefing paper about the pediatric population in disasters would be included in the all-hazards plan. Both CDC and FDA developed guidance for the dispensing of CBRN medical countermeasures from the SNS to the public, including children. For example, CDC developed guidance about receiving, distributing, and dispensing contents from the SNS to help state and local emergency management and public health personnel plan for the use of countermeasures from the SNS. The guidance, which could be used as a reference document or as a checklist by local and state planners, referred to the pediatric population in multiple sections, including an appendix about pediatric dispensing considerations. For example, the guidance described how the color “pink” is used to identify pediatric supplies included in the SNS. Additionally, the appendix noted that the SNS has limited amounts of oral suspensions of certain countermeasures for use by children and offered potential solutions for state and local governments to consider when addressing this shortage. These solutions included assigning someone the responsibility of mixing suspensions at a POD site or compounding tablets of medical countermeasures into oral suspensions. In addition, CDC and FDA developed other guidance on dispensing medical countermeasures, including to the pediatric population, which could be shared with and used by state and local governments. For example, CDC developed a website and training opportunities for state and local governments to use when planning for dispensing medical countermeasures. The information shared through the website and training addresses dispensing in general, for the most part, although a small portion of the training touches on dispensing to the pediatric population. FDA also developed guidance on dispensing pediatric medical countermeasures. 
For example, because it can be difficult for children to swallow pills, FDA developed instructions for how to crush and mix doxycycline with water and then add the mixture to food to mask the taste of doxycycline. Additionally, FDA, with CDC, developed an information sheet about using doxycycline for the prevention of anthrax that state and local governments could share with their populations during an event. This sheet included dosing instructions for administering doxycycline to children. CDC and FDA also collaborated to prepare, in advance of an actual CBRN incident, information about medical countermeasures that require an EUA or IND, to have ready to share with state and local governments should an emergency occur. This included information about how a countermeasure would be dispensed to children under an EUA or IND protocol, should there be a need. State and local governments could then, in turn, share this information with their populations in the event of an emergency. For example, CDC and FDA developed an EUA fact sheet with instructions for administering doxycycline to children at home. CDC officials told us that they have information available for all medical countermeasures that could be used by children under an EUA. Information about pediatric dispensing under an EUA or IND protocol would be sent electronically—rather than in hard copy—to state and local governments because dispensing information can change and hard copies could become outdated. State and local governments have provided details about pediatric dispensing in their emergency response plans. All seven of the state response plans we reviewed addressed the dispensing of countermeasures to the pediatric population during a CBRN incident. Although the states’ plans varied in format, they were consistent with one another in terms of the type of information they included about dispensing to children. For example, more than half of the states included information about pediatric dosing or formulations. 
Additionally, all seven states adopted some version of a “family member pick-up” policy—sometimes referred to as a “head of household” policy—which would allow adults to pick up medicines for other family members, including children, during an event. This policy is intended to eliminate disruptions to the dispensing process while simultaneously reducing patient numbers and increasing the number of persons treated during an incident. In addition, each state’s plan provided other information about how medical countermeasures would be dispensed to children, either under a family member pick-up policy or through a POD. One state’s medical countermeasure dispensing guidance noted that a POD should have specialized items for dispensing to children, such as scales for weighing children (if they are present and their parents do not know their weights) and mixing equipment to make pediatric portions. Additionally, the state’s Point of Dispensing Field Operations Guide suggested that the POD include a staff member who can be responsible for ensuring drugs are properly packaged and instructions for children are given. One state’s plan discussed how oral antibiotic suspensions and syrups would be provided for the treatment of children who have trouble swallowing tablets, and that converting ciprofloxacin and doxycycline tablets into oral suspensions was recommended as an alternative for providing additional quantities of pediatric prophylactic regimens, due to the limited quantities of oral suspensions in the SNS. The plan included instructions for reconstituting these medical countermeasures. One state’s plan provided information about dispensing countermeasures to unaccompanied children who request treatment at PODs. The issue of consent for emergency care to a child in a disaster was discussed. Officials from this state told us that it works closely with the state’s health care coalitions to ensure that the regional guidelines include pediatric-focused strategies. 
Additionally, because pediatric doses are not stored in the SNS in large quantities, the state and local jurisdictions rely on partnerships with community and chain pharmacies to compound and reconstitute medications for children in an emergency. As we found with the states, all seven of the local governments that provided us with plans addressed dispensing countermeasures for the pediatric population during a CBRN incident. Like the states’ plans, the local governments’ plans varied in their format but were consistent with one another in terms of the type of information they included about dispensing to children. For example, more than half of the plans included information about screening children at a POD. Additionally, all seven local governments plan to implement versions of family member pick-up policies in the event of an emergency. The local governments’ plans also included other ways to address the needs of the pediatric population when dispensing medical countermeasures. For example, one local government’s POD plan described different types of dispensing, one of which is called “slow dispensing.” Slow dispensing requires children under the age of 9 to pass through medical screenings, if they go to the POD. Another local government’s plan described the setup of a special assistance station at a POD, where individuals could obtain medication for children under 9 years of age. Special assistance personnel could determine appropriate pediatric dosing by referring to available weight charts. Finally, one local government’s plan discussed “fast-tracking protocols,” which would allow individuals with children to be diverted from the main lines and directed to a Help Desk to receive assistance. CDC officials noted that state and local governments handle the dispensing of countermeasures to children in the same way as for adults; that is, the dispensing of countermeasures for both children and adults occurs at POD sites. 
However, CDC officials also told us that special considerations can still be made for children—for example, by establishing separate family lines in the PODs. We found examples of this consideration in some of the plans we reviewed. We provided a draft of this report to HHS for comment. In its written comments, reproduced in appendix II, HHS concurred with our findings. HHS reiterated information provided in our report, including that the development, procurement, and dispensing of medical countermeasures for the pediatric population is integrated into the PHEMCE’s framework for public health preparedness across multiple component agencies, that state and local jurisdictions play an important role in responding to public health emergencies, and that the pace of progress in drug development is limited by the complex issues that surround the testing of countermeasures in children. In addition, HHS emphasized that the needs of the pediatric population have been a priority for HHS since the origins of Project Bioshield, and that the department is continuously progressing in this area. HHS also provided technical comments that we incorporated as appropriate. We will send copies of this report to the Secretary of Health and Human Services and interested congressional committees. We will also make copies available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Table 1 presents additional information about certain types of chemical, biological, radiological, and nuclear (CBRN) medical countermeasures available in the Strategic National Stockpile (SNS) by CBRN threat, as well as their regulatory status, for the pediatric population. 
The information presented is a general overview of countermeasures that are available for the pediatric population in the event of a CBRN incident, and does not include a complete list of all of the variations of each countermeasure. In addition to the contact named above, Tom Conahan, Assistant Director; Kaitlin Coffey; Kelly DeMots; Carolina Morgan; Monica Perez-Nelson; and Roseanne Price made key contributions to this report. National Preparedness: Improvements Needed for Measuring Awardee Performance in Meeting Medical and Public Health Preparedness Goals. GAO-13-278. Washington, D.C.: March 22, 2013. National Preparedness: Countermeasures for Thermal Burns. GAO-12-304R. Washington, D.C.: February 22, 2012. Chemical, Biological, Radiological, and Nuclear Risk Assessments: DHS Should Establish More Specific Guidance for Their Use. GAO-12-272. Washington, D.C.: January 25, 2012. National Preparedness: Improvements Needed for Acquiring Medical Countermeasures to Threats from Terrorism and Other Sources. GAO-12-121. Washington, D.C.: October 26, 2011. National Preparedness: DHS and HHS Can Further Strengthen Coordination for Chemical, Biological, Radiological, and Nuclear Risk Assessments. GAO-11-606. Washington, D.C.: June 21, 2011. Public Health Preparedness: Developing and Acquiring Medical Countermeasures Against Chemical, Biological, Radiological, and Nuclear Agents. GAO-11-567T. Washington, D.C.: April 13, 2011. National Security: Key Challenges and Solutions to Strengthen Interagency Collaboration. GAO-10-822T. Washington, D.C.: June 9, 2010. Combating Nuclear Terrorism: Actions Needed to Better Prepare to Recover from Possible Attacks Using Radiological or Nuclear Materials. GAO-10-204. Washington, D.C.: January 29, 2010. Project BioShield Act: HHS Has Supported Development, Procurement, and Emergency Use of Medical Countermeasures to Address Health Threats. GAO-09-878R. Washington, D.C.: July 24, 2009. 
Project BioShield: HHS Can Improve Agency Internal Controls for Its New Contracting Authorities. GAO-09-820. Washington, D.C.: July 21, 2009. Project BioShield: Actions Needed to Avoid Repeating Past Problems with Procuring New Anthrax Vaccine and Managing the Stockpile of Licensed Vaccine. GAO-08-88. Washington, D.C.: October 23, 2007. | The nation remains vulnerable to terrorist and other threats posed by CBRN agents. Medical countermeasures--drugs, vaccines, and medical devices--can prevent or treat the effects of exposure to CBRN agents, and countermeasures are available in the SNS for some of these agents. Children, who make up 25 percent of the population in the United States, are especially vulnerable because many of the countermeasures in the SNS have only been approved for use in adults. HHS leads the federal efforts to develop and acquire countermeasures. GAO was asked about efforts to address the needs of children in the event of a CBRN incident. This report examines (1) the percentage of CBRN medical countermeasures in the SNS that are approved for pediatric use; (2) the challenges HHS faces in developing and acquiring CBRN medical countermeasures for the pediatric population, and the steps it is taking to address them; and (3) the ways that HHS has addressed the dispensing of pediatric medical countermeasures in its emergency response plans and guidance, and ways that state and local governments have addressed this issue. To address these objectives, GAO reviewed relevant laws, agency documents, and reports, and interviewed HHS officials, industry representatives, and subject-matter experts. GAO also reviewed a stratified sample of emergency response plans from seven state and seven local governments, based on geographic location and population size, to assess how these governments address pediatric dispensing. 
According to the Department of Health and Human Services (HHS), about 60 percent of the chemical, biological, radiological, and nuclear (CBRN) medical countermeasures in the Strategic National Stockpile (SNS) have been approved for children, but in many instances approval is limited to specific age groups. In addition, about 40 percent of the CBRN countermeasures have not been approved for any pediatric use. Furthermore, some of the countermeasures have not been approved to treat individuals for the specific indications for which they have been stockpiled. For example, ciprofloxacin is stockpiled in the SNS for the treatment of anthrax, plague, and tularemia, but is not approved for these indications. Countermeasures may be used to treat unapproved age groups or indications under an emergency use authorization (EUA) or an Investigational New Drug (IND) application submitted to the Food and Drug Administration (FDA). HHS faces a variety of economic, regulatory, scientific, and ethical challenges in developing and acquiring pediatric CBRN medical countermeasures. High costs and the high risk of failure associated with testing and research of pharmaceutical products on children, difficulties in meeting regulatory requirements for approving CBRN countermeasures, and scientific and ethical obstacles to safely evaluating countermeasures for children all pose challenges to developing pediatric countermeasures. Despite these challenges, HHS has taken steps to focus agency efforts on the pediatric population, adapt pediatric formulations from existing medical countermeasures, and prepare and review materials for EUAs and INDs in advance of public health emergencies. HHS addresses dispensing of pediatric medical countermeasures in more than half of its 12 response plans and in its guidance, and seven state and seven local government plans that GAO reviewed included details about pediatric dispensing. 
Seven of the 12 HHS plans include information about pediatric medical countermeasures; however, HHS officials stated that these plans are intended to provide guidance for emergency response at the federal level, and not at the state or local levels, which is where dispensing would occur. CDC and FDA also provide guidance on pediatric dispensing that state and local governments can use in their planning. For example, CDC developed guidance about receiving, distributing, and dispensing contents from the SNS to help state and local emergency management and public health personnel plan for the use of countermeasures from the SNS. Response plans for all 14 of the state and local governments that GAO reviewed also included details about dispensing to the pediatric population during an emergency. For example, these seven states and seven local governments all adopted some version of a "family member pick-up" policy--sometimes referred to as a "head of household" policy--which would allow adults to pick up medicines for other family members, including children, during an event. In commenting on a draft of this report, HHS concurred with our findings. HHS emphasized that the needs of the pediatric population have been a priority for HHS and that the department is continuously progressing in this area. |
Since September 11, 2001, there has been broad acknowledgment by the federal government, state and local governments, and a range of independent research organizations of the need for a coordinated intergovernmental approach to allocating the nation’s resources to address the threat of terrorism and improve our security. This coordinated approach includes developing national guidelines and standards and monitoring and assessing preparedness against those standards to effectively manage risk. The National Strategy for Homeland Security (National Strategy), released in 2002 following the proposal for DHS, emphasized a shared national responsibility for security involving close cooperation among all levels of government and acknowledged the complexity of developing a coordinated approach within our federal system of government and among a broad range of organizations and institutions involved in homeland security. The national strategy highlighted the challenge of developing complementary systems that avoid unintended duplication and increase collaboration and coordination so that public and private resources are better aligned for homeland security. The national strategy established a framework for this approach by identifying critical mission areas with intergovernmental initiatives in each area. For example, the strategy identified such initiatives as modifying federal grant requirements and consolidating funding sources to state and local governments. The strategy further recognized the importance of assessing the capability of state and local governments, developing plans, and establishing standards and performance measures to achieve national preparedness goals. Recent reports by independent research organizations have highlighted the same issues of the need for intergovernmental coordination, planning, and assessment. 
For example, the fifth annual report of the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction (the Gilmore Commission) also emphasizes the importance of a comprehensive, collaborative approach to improve the nation’s preparedness. The report states that there is a need for a coordinated system for the development, delivery, and administration of programs that engage a broad range of stakeholders. The Gilmore Commission notes that preparedness for combating terrorism requires measurable demonstrated capacity by communities, states, and the private sector to respond to threats with well-planned, well-coordinated, and effective efforts by all participants. The Gilmore Commission recommends a comprehensive process for establishing training and exercise standards for responders that includes state and local response organizations on an ongoing basis. The National Academy of Public Administration’s recent panel report also notes the importance of coordinated and integrated efforts at all levels of government and in the private sector to develop a national approach to homeland security. Regarding assessment, the report recommends establishing national standards in selected areas and developing impact and outcome measures for those standards. The creation of DHS was an initial step toward reorganizing the federal government to respond to some of the intergovernmental challenges identified in the national strategy. The reorganization consolidated 22 agencies with responsibility for domestic preparedness functions to, among other things, enhance the ability of the nation’s police, fire, and other first responders to respond to terrorism and other emergencies through grants. Many aspects of DHS’s success depend on its maintaining and enhancing working relationships within the intergovernmental system as the department relies on state and local governments to accomplish its mission. 
The Homeland Security Act contains provisions intended to foster coordination among levels of government, such as the creation of the Office of State and Local Government Coordination and ONCRC. The Homeland Security Act established ONCRC within DHS to oversee and coordinate federal programs for, and relationships with, state, local, and regional authorities in the National Capital Region. Pursuant to the act, ONCRC’s responsibilities include coordinating the activities of DHS relating to NCR, including cooperating with the Office for State and Local Government Coordination; assessing and advocating for resources needed by state, local, and regional authorities in NCR to implement efforts to secure the homeland; providing state, local, and regional authorities in NCR with regular information, research, and technical support to assist the efforts of state, local, and regional authorities in NCR in securing the homeland; developing a process for receiving meaningful input from state, local, and regional authorities and the private sector in NCR to assist in the development of the federal government’s homeland security plans and activities; coordinating with federal agencies in NCR on terrorism preparedness to ensure adequate planning, information sharing, training, and execution of the federal role in domestic preparedness activities; coordinating with federal, state, and regional agencies and the private sector in NCR on terrorism preparedness to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities; and serving as a liaison between the federal government and state, local, and regional authorities, and private sector entities in NCR to facilitate access to federal grants and other programs. 
The act also requires ONCRC to submit an annual report to Congress that includes the identification of resources required to fully implement homeland security efforts in NCR, an assessment of the progress made by NCR in implementing homeland security efforts in NCR, and recommendations to Congress regarding the additional resources needed to fully implement homeland security efforts in NCR. The first ONCRC Director served from March to November 2003, and the Secretary of DHS appointed a new Director on April 30, 2004. The ONCRC has a small staff including full-time and contract employees and staff on detail to the office. NCR is a complex multijurisdictional area comprising the District of Columbia and surrounding counties and cities in the states of Maryland and Virginia and is home to the federal government, many national landmarks, and military installations. Coordination within this region presents the challenge of working with eight NCR jurisdictions that vary in size, political organization, and experience with managing emergencies. The largest municipality in the region is the District of Columbia, with a population of about 572,000. However, the region also includes large counties, such as Montgomery County, Maryland, with a total population of about 873,000, incorporating 19 municipalities, and Fairfax County, Virginia, the most populous jurisdiction (about 984,000), which is composed of nine districts. NCR also includes smaller jurisdictions, such as Loudoun County and the City of Alexandria, each with a population below 200,000. The region has significant experience with emergencies, including natural disasters such as hurricanes, tornadoes, and blizzards, and terrorist incidents such as the attacks of September 11, and subsequent events, and the sniper incidents of the fall of 2002. For more details on the characteristics of the individual jurisdictions, see table 1. 
In fiscal years 2002 and 2003, Congress provided billions of dollars in grants to state and local governments to enhance the ability of the nation’s first responders to prevent and respond to terrorism events. We reviewed 16 of the funding sources available for use by first responders and emergency managers that were targeted for improving preparedness for terrorism and other emergencies. In fiscal years 2002 and 2003, these grant programs, administered by DHS, Health and Human Services (HHS), and Justice awarded about $340 million to the District of Columbia, Maryland, Virginia, and state and local emergency management, law enforcement, fire departments, and other emergency response agencies in NCR. Table 2 shows the individual grant awards to the jurisdictions. The funding sources we reviewed include a range of grants that can be used for broad purposes, such as ODP’s State Homeland Security Grant Program and the Federal Emergency Management Agency (FEMA) Emergency Management Performance Grant, as well as more targeted grants for specific disciplines such as FEMA’s Assistance to Firefighters Grant and HHS’s Bioterrorism Preparedness Grants. While some of these grants are targeted to different recipients, many of them can be used to fund similar projects and purposes. For example, there are multiple grants that can be used to fund equipment, training, and exercises. We have previously reported the fragmented delivery of federal assistance can complicate coordination and integration of services and planning at state and local levels. Multiple fragmented grant programs can create a confusing and administratively burdensome process for state and local officials seeking to use federal resources for homeland security needs. In addition, many of these grant programs have separate administrative requirements such as applications and different funding and reporting requirements. 
In fiscal year 2004, in an effort to reduce the multiplicity of separate funding sources and to allow greater flexibility in the use of grants, several ODP State and Local Domestic Preparedness grants, which were targeted for separate purposes such as equipment, training, and exercises, were consolidated into a single funding source and renamed the State Homeland Security Grant Program. In addition, four FEMA grants (Citizen Corps, Community Emergency Response Teams, Emergency Operations Centers, and State and Local All-Hazards Emergency Operations Planning) now have a joint application process; the same program office at FEMA administers these grants. Overall, NCR jurisdictions used the 16 funding sources we reviewed to address a wide variety of emergency preparedness activities such as (1) purchasing equipment and supplies; (2) training first responders; (3) planning, conducting, and evaluating exercises; (4) planning and administration; and (5) providing technical assistance. Table 3 shows the eligible uses for each of the 16 grants. Of the $340 million awarded for the 16 funding sources, the two largest funding sources—which collectively provided about $290.5 million (85 percent) in federal funding to NCR—were the Fiscal Year 2002 Department of Defense (DOD) Emergency Supplemental Appropriation and the Fiscal Year 2003 Urban Area Security Initiative. Both of these sources fund a range of purposes and activities such as equipment purchases, including communications systems; training and exercises; technical assistance; and planning. The Fiscal Year 2002 DOD Emergency Supplemental Appropriation, which was provided in response to the attacks of September 11, 2001, provided approximately $230 million to enhance emergency preparedness. Individual NCR jurisdictions independently decided how to use these dollars and used them to fund a wide array of purchases to support first responders and emergency management agencies. 
Our review of the budgets for this appropriation submitted by NCR jurisdictions showed that many of these grant funds were budgeted for communications equipment and other equipment and supplies. Table 4 provides examples of major projects funded by each jurisdiction with these funds. In 2003, DHS announced a new source of funding targeted to large urban areas under UASI to enhance the ability of metropolitan areas to prepare for and respond to threats or incidents of terrorism. This initiative included a total of $60.5 million to NCR, which was one of seven metropolitan areas included in the initial round of funding. The cities were chosen by applying a formula based on a combination of factors, including population density, critical infrastructure, and threat/vulnerability assessment. UASI’s strategy for NCR includes plans to fund 21 individual lines of effort for the region in the areas of planning, training, exercises, and equipment. In addition, funds are provided for administration and planning and to reimburse localities for changing levels of homeland security threat alerts. Table 5 summarizes the planned use of the UASI funds. Effectively managing first responder federal grant funds requires the ability to measure progress and provide accountability for the use of public funds. As with other major policy areas, demonstrating the results of homeland security efforts includes developing and implementing strategies, establishing baselines, developing and implementing performance goals and data quality standards, collecting reliable data, analyzing the data, assessing the results, and taking action based on the results. This strategic approach to homeland security includes identifying threats and managing risks, aligning resources to address them, and assessing progress in preparing for those threats and risks. 
Without an NCR baseline on emergency preparedness, a plan for prioritizing expenditures and assessing their benefits, and reliable information on funds available and spent on first responder needs in NCR, it is difficult for ONCRC to fulfill its statutory responsibility to oversee and coordinate federal programs and domestic preparedness initiatives for state, local, and regional authorities in NCR. Regarding first responders, the purpose of these efforts is to be able to address three basic, but difficult, questions: “For what types of threats and emergencies should first responders be prepared?” “What is required—coordination, equipment, training, etc.—to be prepared for these threats and emergencies?” “How do first responders know that they have met their preparedness goals?” NCR is an example of the difficulties of answering the second and third questions in particular. ONCRC and its jurisdictions face three interrelated challenges that limit their ability to jointly manage federal funds in a way that demonstrates increased first responder capacities and preparedness while minimizing inefficiency and unnecessary duplication of expenditures. First and most fundamental is the lack of preparedness standards and of a baseline assessment of existing NCR-wide first responder capacities linked to those standards. As in other areas of the nation generally, NCR does not have a set of accepted benchmarks (best practices) and performance goals that could be used to identify desired goals and determine whether first responders have the ability to respond to threats and emergencies with well-planned, well-coordinated, and effective efforts that involve police, fire, emergency medical, public health, and other personnel from multiple jurisdictions. 
The Gilmore Commission’s most recent report noted that there is a continuing problem of a lack of clear guidance from the federal level about the definition and objectives of preparedness, a process to implement those objectives, and how states and localities will be evaluated in meeting those objectives. The report states the need for a coordinated system for the development, delivery, and administration of programs that engages a broad range of stakeholders. Over the past few years, some state and local officials and independent research organizations have expressed an interest in some type of performance standards or goals that could be used as guidelines for measuring the quality and level of first responder preparedness, including key gaps. However, in discussing “standards” for first responders, it is useful to distinguish between three different types of measures that are often lumped together in the discussion of standards. Functional standards are generally set up to measure such things as functionality, quantity, weight, and extent, and in the context of first responders they generally apply to equipment. Examples include the number of gallons of water per minute that a fire truck can deliver or the ability of a biohazard suit to filter out specific pathogens, such as anthrax. Benchmarks are products, services, or work processes that are generally recognized as representing best practices for the purposes of organizational improvement. An example might be joint training of fire and police for biohazard response—a means of achieving a specific performance goal for responding to biohazard threats and incidents. Performance goals are measurable objectives against which actual achievement may be compared. An example might be the number of persons per hour who could be decontaminated after a chemical attack. Realistic training exercises could then be used to test the ability to meet that objective. 
Homeland security standards should include both functional standards and performance goals. In February 2004, DHS adopted its first set of functional standards for protective equipment. The eight standards, previously developed by the National Institute for Occupational Safety and Health (NIOSH) and the National Fire Protection Association (NFPA), are intended to provide minimum requirements for equipment. These standards include NIOSH standards for three main categories of chemical, biological, radiological, and nuclear (CBRN) respiratory protection equipment and five NFPA standards for protective suits and clothing to be used in responding to chemical, biological, and radiological attacks. Performance and readiness standards are more complicated and difficult to develop than functional standards. In a large, diverse nation, not all regions of the nation require exactly the same level of preparedness because, for example, not all areas of the nation face the same types and levels of risks and, thus, first responder challenges. For example, first responder performance goals and needs are likely to be different in New York City and Hudson, New York. Thus, different levels of performance goals may be needed for different types and levels of risk. Recently, the administration has focused more attention on the development of homeland security standards, including the more difficult performance goals or standards. For example, DHS’s recently issued strategic plan makes reference to establishing, implementing, and evaluating capabilities through a system of national standards. Homeland Security Presidential Directive 8 (December 2003) requires the development of a national preparedness goal to include readiness metrics and a system for assessing the nation’s overall preparedness by the fiscal year 2006 budget submission. 
The lack of benchmarks and performance goals may contribute to difficulties in meeting the second challenge in NCR—developing a coordinated regionwide plan for determining how to spend federal funds received and assess the benefit of that spending. A strategic plan for the use of homeland security funds—whether in NCR or elsewhere—should be based on established priorities, goals, and measures and align spending plans with those priorities and goals. At the time of our review, such a strategic plan had yet to be developed. Although ONCRC had developed a regional spending plan for the UASI grants, this plan was not part of a broader coordinated plan for spending federal grant funds and developing first responder capacity and preparedness in NCR. The former ONCRC Director said that ONCRC and the Senior Policy Group could have a greater role in overseeing the use of other homeland security funds in the future. There is no established process or means for regularly and reliably collecting and reporting data on the amount of federal funds available to first responders in each of NCR’s eight jurisdictions, the planned and actual use of those funds, and the criteria used to determine how the funds would be spent. Reliable data are needed to establish accountability, analyze gaps, and assess progress toward meeting established performance goals. Credible data should also be used to develop and revise plans and to set goals during the planning process. Even if these data were available, the lack of standards against which to evaluate them would still make it difficult to assess gaps. It should be noted that the fragmented nature of the multiple federal grants available to first responders—some awarded to states, some to localities, some directly to first responder agencies—may make it more difficult to collect and maintain regionwide data on the grant funds received and the use of those funds in NCR.
Our previous work suggests that this fragmentation in federal grants may reinforce state and local fragmentation and can also make it more difficult to coordinate and use those multiple sources of funds to achieve specific objectives. NCR jurisdictions completed the Office for Domestic Preparedness State Homeland Security Assessment (ODP assessment) in the summer of 2003. At the time of our review, NCR jurisdictions said that they had not received any feedback from ODP or ONCRC on the review of those assessments. Preparedness expectations should be established based on likely threat and risk scenarios and an analysis of the gap between current and needed capabilities based on national guidelines. In keeping with the requirement of the Homeland Security Act that DHS conduct an assessment of threats and state and local response capabilities, risks, and needs with regard to terrorist incidents, DHS developed the ODP State Homeland Security Assessment and Strategy Program. The ODP assessment was aligned with the six critical mission areas in the National Strategy for Homeland Security, and generally followed the structure of a risk management approach. The assessment used the same scenarios for all jurisdictions nationwide, allowing ODP to compare different jurisdictions using the same set of facts and assumptions. Of course, the scenarios used may not be equally applicable to all jurisdictions nationwide. The assessment collected data in three major areas: risk, capability, and needs related to terrorism prevention. The risk assessment portion includes threat and vulnerability assessments. The capability assessment includes discipline-specific tasks for weapons of mass destruction (WMD) events. The needs assessment portion covers five functional areas of planning, organization, equipment, training, and exercises. 
Supporting materials and worksheets on a threat profile, capability to respond to specific WMD, an equipment inventory, and training needs are provided to assist local jurisdictions in completing the assessment. A feedback loop is a key part of a risk management process. It involves evaluating the assessment results to inform decision making and establish priorities; it is not clear how the results of the assessment were used to complete this process for NCR. ONCRC did not present any formal analysis of the gap in capabilities identified by the assessment, and several NCR jurisdictions said they did not receive any feedback on the results of the assessment for their individual jurisdictions. The former ONCRC Director said that the results of the assessment for each of the NCR jurisdictions were combined to establish priorities and develop the strategy for the use of the UASI funds, but he did not provide any information on how the individual assessments were combined or the methodology used to analyze the assessment results. While the former Director said the results of the assessment were used to develop the plan for the use of the UASI funds within NCR, he said that they were not applied beyond that one funding source to establish priorities for the use of other federal grants. While the NCR jurisdictions had emergency coordination practices and procedures, such as mutual aid agreements, in place long before September 11, 2001, the terrorist attacks and subsequent anthrax events in NCR highlighted the need for better coordination and communication within the region. As a result, WashCOG developed a regional emergency coordination plan (RECP) to facilitate coordination and communication for regional incidents or emergencies. While this new plan and the related procedures represent efforts to improve coordination, more comprehensive planning would include a coordinated regional approach for the use of federal homeland security funds.
NCR is one of the first regions in the country to prepare a regional emergency coordination plan. The plan is intended to provide structure through which the NCR jurisdictions can collaborate on planning, communication, information sharing, and coordination activities before, during, and after a regional emergency. RECP, which is based on FEMA’s Federal Response Plan, identifies 15 specific regional emergency support functions, including transportation, hazardous materials, and law enforcement. The Regional Incident Communication and Coordination System (RICCS), which is included in the WashCOG plan, provides a system for WashCOG members, the state of Maryland, the Commonwealth of Virginia, the federal government, public agencies, and others to collaborate in planning, communicating, sharing information, and coordinating activities before, during, and after a regional incident or emergency. RICCS relies on multiple means of communication, including conference calling, secure Web sites, and wireless communications. The system has been used on several occasions to notify local officials of such events as a demonstration in downtown Washington, D.C., and the October 2002 sniper incidents. For example, RICCS allowed regional school systems to coordinate with one another regarding closure policies during the sniper events. Our work in NCR found that no regional coordination methods have been developed for planning for the use of 15 of the 16 funding sources we reviewed. While the region has experience with working together for regional emergency preparedness and response, NCR officials told us that they have not worked together to develop plans and coordinate expenditures for the use of federal funds. Most NCR jurisdictions did not have a formal overall plan for the use of these funds within their individual jurisdictions. 
In addition, while the grant recipients are required to report to the administering federal agencies on each individual grant, DHS and ONCRC have not implemented a process to collect and analyze the information reported for NCR as a whole. The one exception to this lack of coordination is UASI, for which ONCRC developed a regional plan for the use of the funds. Internal control standards support developing documentation, such as plans, to assist in controlling management operations and making decisions. Without this type of documentation, it is difficult for ONCRC to monitor the overall use of funds within NCR and to evaluate their effectiveness and plan for future use of grant funds. While some NCR and ONCRC officials said that there was a need for DHS and the NCR jurisdictions to establish controls over how emergency preparedness grant funds are used in the region, they did not indicate any plans to do so. Within NCR, planning for the use of federal emergency and homeland security grant funds is generally informal and is done separately by each of the NCR jurisdictions. Most of the jurisdictions told us that they have undocumented or informal plans for the uses of the federal grant monies for emergency preparedness activities. Only two jurisdictions have formal written plans that indicate how the jurisdiction would use its federal homeland security grants. NCR states and local jurisdictions had various budgets for uses of emergency preparedness grant funds they received from fiscal year 2002 through fiscal year 2003. However, they did not coordinate with one another in defining their emergency preparedness needs, in developing their budgets, or in using the federal grant funds to avoid unnecessary duplication of equipment and other resources within the region. 
In general, budgeting for the use of federal emergency preparedness grants was done on a grant-by-grant basis within each jurisdiction and is largely based on requests from first responder and emergency management officials. Budgets indicate how the individual jurisdictions intend to spend funds from a specific grant but do not indicate whether those budgets are based on any strategic plan or set of priorities. One Maryland county developed an overall plan for the use of federal homeland security and emergency preparedness grants. The July 1, 2003, homeland security strategy outlined the priorities for the county in using federal emergency preparedness grant funds. However, it did not specify grants or amounts for each of the initiatives. The priorities for such funding were focused on equipping and training its first responders; conducting exercises and drills for its government employees; training other essential and critical government workers, as well as the citizens and residents of the county; working vigorously to implement recommendations from its Homeland Security Task Force; and solidifying the county’s relationships with other federal, state, and regional homeland security entities. While officials from other NCR jurisdictions do not have a formal plan, some have established a process for reviewing proposals for the use of the homeland security grants. For example, one Northern Virginia jurisdiction recently adopted a planning process in which its Emergency Management Coordination Committee, composed of the county’s senior management team, solicits budget proposals from first responder and emergency management agencies for potential grant funds. This committee then makes funding recommendations based upon a review of these proposals and their funding priorities for the county. Officials from other jurisdictions described similar processes for developing budget proposals, but they have not developed longer-term or comprehensive strategic plans. 
To determine how the NCR jurisdictions used the funds, we reviewed the use of funds of the Fiscal Year 2002 Department of Defense Supplemental Appropriation, which was the largest source of funding for the period of our review. Each NCR jurisdiction used those funds to buy emergency equipment for first responders. However, officials said they did not coordinate on planning for these expenditures with the other NCR jurisdictions. For example, five of the eight NCR jurisdictions planned to either purchase or upgrade their command vehicles. One of the jurisdictions allocated $310,000 for a police command bus and $350,000 for a fire and rescue command bus; a neighboring jurisdiction allocated $350,000 for a mobile command unit for its fire department; another jurisdiction allocated $500,000 for a police command vehicle replacement; a nearby jurisdiction allocated $149,000 to upgrade its incident command vehicle; and its neighboring jurisdiction allocated $200,000 to modify and upgrade its mobile command van. In another example, four nearby jurisdictions allocated grant funds for hazardous materials response vehicles or supplies: $155,289 for one jurisdiction’s rapid hazmat unit, $355,000 for a neighboring jurisdiction’s hazardous materials response vehicle, $550,000 for another jurisdiction’s fire and rescue hazmat unit vehicle, and $115,246 for a fourth jurisdiction’s hazardous materials supplies. While such purchases might not be duplicative, discussions among neighboring jurisdictions could have facilitated a plan and determined whether these purchases were necessary or whether the equipment purchased could be shared among the jurisdictions, thereby freeing up grant dollars for other needed equipment to create greater combined capacity within the region. Maximizing the use of resources entails avoiding unnecessary duplication wherever possible.
This requires some discussion and general agreement on priorities, roles, and responsibilities among the jurisdictions. Some NCR and ONCRC officials said they believed the NCR jurisdictions could plan better to share resources and work to prevent redundancy while avoiding gaps in inventory. During our review, NCR jurisdictions and federal grantor agencies could not consistently provide data on the 16 grants and funding sources within the scope of our study, such as award amounts, budgets, and financial records. The individual jurisdictions and ONCRC did not have systems in place to identify and account for all federal grants that can be used to enhance domestic preparedness in NCR and elsewhere. The lack of consistently available budget data for all emergency preparedness and homeland security grants limits the ability to analyze and assess the impact of federal funding and to make management decisions to ensure the effective use of federal grant dollars. There is no central source within each jurisdiction or at the federal level to identify all of the emergency preparedness grants that have been allocated to NCR. At the local level, such information is needed to meet legislative and regulatory reporting requirements for federal grant expenditures of $300,000 or more. In addition, each grant has specific reporting requirements, such as quarterly financial status reports, semiannual program progress reports, and related performance information to comply with the Government Performance and Results Act (P.L. 103-62). Moreover, federal grant financial system guidelines require that federal agencies implement systems that include complete, accurate, and prompt generation and maintenance of financial records and transactions. Those federal system requirements also require timely and efficient access to complete and accurate information, without extraneous material, to internal and external parties that require that information. 
We asked ONCRC, the Virginia and Maryland emergency management agencies, and the eight NCR jurisdictions for data on the emergency preparedness grants allocated in fiscal years 2002 and 2003. ONCRC could not provide a complete list of grants allocated to the NCR as a whole, and the state emergency management agencies did not provide complete lists of grants for NCR jurisdictions within their respective states. For example, the Maryland Emergency Management Agency (MEMA) provided data on the federal grants for Montgomery and Prince George’s counties that were allocated through the state. MEMA is not required to oversee grants not allocated through the state and, therefore, it did not provide grant data on all of the federal grants provided to the two counties. Similarly, the Virginia Department of Emergency Management (VDEM) did not provide data on all of the grants to the jurisdictions in Virginia. We compiled grant data for the NCR jurisdictions by combining information received from the NCR jurisdictions and the state emergency management agencies. This involved contacting several different budget officials at the NCR jurisdictions and at the state level. The availability of emergency preparedness grant data at the local level also varied by NCR jurisdiction, and complete data were not readily available. After repeated requests for the grant awards, budgets, and plans over a period of 7 months, NCR jurisdictions or the state emergency management agencies provided us with the grant amounts awarded to them during fiscal years 2002 and 2003. Some jurisdictions provided documentation on amounts awarded, but did not provide supporting budget detail for individual grants to substantiate the amounts awarded. Regarding budgets, we obtained a range of information from the NCR jurisdictions.
Some jurisdictions provided budget documentation on all the federal grants that were allocated to them; others provided budget documentation on some of their grants; and two did not provide any grant budget documentation. This lack of supporting documentation indicates a lack of financial controls that should be in place to provide accurate and timely data on federal grants. Guidance on financial management practices notes that to effectively evaluate government programs and spending, Congress and other decision makers must have timely, accurate, and reliable financial information on program cost and performance. Moreover, the Comptroller General’s standards for internal control state that “program managers need both operational and financial data to determine whether they are meeting their agencies’ strategic and annual performance plans and meeting their goals for accountability for effective and efficient use of resources.” These standards stress the importance of this information to make operating decisions, monitor performance, and allocate resources and that “pertinent information is identified, captured, and distributed to the right people in sufficient detail, in the right form, and at the appropriate time to enable them to carry out their duties and responsibilities efficiently and effectively.” Having this information could help NCR officials make informed decisions about the use of grant funds in a timely manner. Without national standards, guidance on likely scenarios for which to be prepared, plans, and reliable data, NCR officials assess their gaps in preparedness based on their own judgment. The lack of standards and consistently available data makes it difficult for the NCR officials to use the results of DHS’s ODP assessment to identify the most critical gaps in capacities and to verify the results of the assessment and establish a baseline that could then be used to develop plans to address outstanding needs. 
Consequently, it is difficult for us or ONCRC to determine what gaps, if any, remain in the emergency response capacities and preparedness within the NCR. Each jurisdiction provided us with information on its perceived gaps and specific needs for improving emergency preparedness. However, there is no consistent method for identifying these gaps among jurisdictions within NCR. Some officials from NCR jurisdictions said that in the absence of a set of national standards, they use the standards and accreditation guidelines for disciplines such as police, fire, hazardous materials, and emergency management in assessing their individual needs. While these standards may provide some general guidance, some NCR officials said that they need more specific guidance from DHS, including information about threats, guidance on how to set priorities, and standards. Some of the jurisdictions reported that they have conducted their own assessments of need based on their knowledge of threat and risk. Officials from other jurisdictions said they have used FEMA’s Local Capability Assessment for Readiness or the hazardous materials assessment to identify areas for improvement. Several jurisdictions told us that they identify remaining gaps based on requests from emergency responder agencies. Other jurisdictions said that they have established emergency management councils or task forces to review their preparedness needs and begin to develop a more strategic plan for funding those needs. Officials of most NCR jurisdictions commonly identified the need for more comprehensive and redundant communications systems and upgraded emergency operations centers. Some officials of NCR jurisdictions also expressed an interest in training exercises for the region as a whole to practice joint response among the Maryland and Virginia jurisdictions and the District of Columbia.
DHS and ONCRC appear to have played a limited role in fostering a coordinated approach to the use of federal domestic preparedness funds in NCR. According to the former ONCRC Director, ONCRC has focused its initial coordination efforts on the development of a strategy for the use of the UASI funds of $60.5 million in NCR. However, ONCRC efforts to date have not addressed about $279.5 million in other federal domestic preparedness funding that we reviewed. According to officials from one NCR jurisdiction, they would like additional support and guidance from DHS on setting priorities for the use of federal funds. One of ONCRC’s primary responsibilities is to oversee and coordinate federal programs and domestic preparedness initiatives for state, local, and regional authorities in NCR and to cooperate with and integrate the efforts of elected officials of NCR. ONCRC established a governance structure to receive input from state and local authorities through a Senior Policy Group composed of representatives designated by the Governors of Maryland and Virginia and the Mayor of Washington, D.C. The Senior Policy Group developed the UASI strategy to fund a range of projects that would enhance regional capabilities to improve preparedness and reduce the vulnerability of NCR to terrorist attacks. (See table 5.) According to ONCRC’s former Director, the strategy for UASI was an attempt to force a new paradigm, by developing a regional plan for the use of the funds, with input from outside organizations in addition to representatives from the local jurisdictions. The plan for the $60.5 million allocated funds for projects, including planning, training, equipment, and exercises to benefit the region as a whole, as opposed to allocating funds to meet the individual needs of each NCR jurisdiction separately. 
The former Director said that funding allocations to these regional projects were based on a summary of the results of the assessment that was completed by each NCR jurisdiction. Officials from NCR state and local jurisdictions expressed mixed opinions on the effectiveness of ONCRC. Officials from a Virginia jurisdiction expressed a need for more guidance on how to set priorities and allocate federal domestic preparedness funding. District of Columbia officials said ONCRC has done a good job of coordination and has been very supportive, given its small staff and the newness of the office. Some noted that ONCRC’s role is still evolving. For example, some officials in one jurisdiction said that ONCRC’s long-term mission has not yet been finalized and ONCRC is still in the process of establishing its role within NCR. The officials believe that ONCRC has significant potential for leading and coordinating homeland security efforts in the region. They recommended that ONCRC become a routine part of regional governance and provide guidance to local governments, focus resources, and enhance the ability of localities to work together to implement homeland security strategies. The officials noted that ONCRC’s efforts were motivated primarily by the leadership of the Director and had not become routine. We discussed NCR officials’ views with the former ONCRC Director. He acknowledged that ONCRC’s initial efforts to coordinate the use of federal grant funds in NCR concentrated on implementing UASI. He said that UASI presented an improvement over previous funding allocations in NCR by allocating funds on a regional basis, informed by the results of an assessment of NCR preparedness levels and requirements, rather than according to individual jurisdictional perceptions. The Director said that ONCRC could consider coordinating for other federal programs in addition to UASI, but he did not indicate any concrete plans to do so.
The nation’s ongoing vulnerability to terrorist attacks after September 11, 2001, is magnified in NCR because it is the location of critical government infrastructure, national and international institutions, and significant landmarks. In addition to NCR, there are several other high-threat urban areas that share similar vulnerabilities, and improving homeland security is a concern for the entire nation. The challenges faced in NCR are fundamental obstacles to achieving desired levels of preparedness: a lack of performance standards, of baseline information on preparedness, of threat and risk scenarios, of plans based on those tools, and of reliable data to report on the status of initiatives. Furthermore, NCR’s complex structure requires working with individual political jurisdictions with varying experience in managing homeland security funds and responding to emergencies. This adds to the challenge of developing and implementing a coordinated plan for enhancing first responder capacity. Effective regional and local management of the large amounts of available homeland security funding is an important element in improving our national preparedness. However, it is difficult for regional coordinators and local jurisdictions to avoid duplication and inefficiency in the procurement of goods and services without knowledge of all the grants that can be leveraged to fight the terror threat; without centralized, standard records to account for the use of those grants; and without a coordinated regional plan for using those funds. It is also difficult to target funding in a way that ensures it is used for goods and services that enhance preparedness and response without current threat information or scenarios and standards that reflect performance goals for preparedness and response.
The approach taken in planning for the use of the UASI funds, with its emphasis on regional allocations, is a step toward improved coordination that could provide a more rational and effective method for enhancing emergency preparedness within NCR. In addition, DHS’s recently released strategic plan and the endorsement of standards for equipment represent steps toward addressing some of the challenges noted in this report. However, more needs to be done to develop plans, monitor the use of funds, and assess results against goals and standards to evaluate progress toward improved homeland security. To help ensure that emergency preparedness grants and associated funds are managed in a way that maximizes their effectiveness, we recommend that the Secretary of the Department of Homeland Security take the following three actions in order to fulfill the department’s statutory responsibilities in the NCR: work with the NCR jurisdictions to develop a coordinated strategic plan to establish goals and priorities for enhancing first responder capacities that can be used to guide the use of federal emergency preparedness funds; monitor the plan’s implementation to ensure that funds are used in a way that promotes effective expenditures that are not unnecessarily duplicative; and identify and address gaps in emergency preparedness and evaluate the effectiveness of expenditures in meeting those needs by adapting standards and preparedness guidelines based on likely scenarios for NCR and conducting assessments based on them. On April 29, 2004, we provided a draft of this report to the Secretary of DHS and to ONCRC’s Senior Policy Group for comment. On May 19, 2004, we received comments from DHS’s GAO/OIG Liaison and the Senior Policy Group that are reprinted in appendixes III and IV, respectively.
DHS and the Senior Policy Group generally agreed with our recommendations but also stated that NCR jurisdictions had worked cooperatively to identify opportunities for synergies and lay a foundation for meeting the challenges noted in the report. DHS and the Senior Policy Group also agreed that there is a need to continue to improve preparedness by developing more specific and improved preparedness standards, clearer performance goals, and an improved method for tracking regional initiatives. In addition, DHS identified the following concerns: DHS stated that the report demonstrated a fundamental misunderstanding regarding homeland security grant programs in NCR and the oversight role and responsibilities of ONCRC. DHS stated that GAO fails to distinguish between funds provided to specific jurisdictions for local priorities and enhancements and funds intended to address regional needs. We disagree. The responsibilities of ONCRC are outlined in the Homeland Security Act and on page 8 of this report. These include activities such as coordinating with federal, state, and regional agencies and the private sector to ensure adequate planning and execution of domestic preparedness activities among these agencies and entities, and assessing and advocating for resources that state, local, and regional authorities in the NCR need to implement efforts to secure the homeland. The responsibilities further require an annual report to Congress that identifies resources required to implement homeland security efforts in NCR, assesses progress made in implementing these efforts, and makes recommendations regarding additional resources needed.
In order to fulfill this mandate, ONCRC needs information on how all grant monies have been used, not just those designated specifically for regional purposes, information on how those expenditures have enhanced first responder capacity in the region, and an ability to coordinate all federal domestic preparedness funding sources to NCR. DHS noted that our report recognizes the importance of a coordinated regionwide plan for establishing first responder goals, needs, and priorities and assessing the benefits of all expenditures to enhance first responder capabilities, and our review found that no such coordination methods have been developed. DHS stated that this task is accomplished by the formal NCR Review and Recommendation Process, adopted on February 4, 2004, which ensures coordination of resources among all jurisdictions within NCR. DHS provided us information on this process at our exit conference on April 15, 2004. DHS explained that the Review and Recommendation Process was developed for the UASI program, and ONCRC and NCR officials are in the process of extending it to additional federal programs. While this process could be used to facilitate the development of a regional plan in the future, the process has not included a review of how federal grants have already been used or the development of a coordinated regional plan for establishing needs and priorities and assessing benefits of all federal domestic preparedness programs. Finally, the comments noted a correction to our draft regarding the establishment of the Senior Policy Group, and we have revised the report accordingly. As agreed with your office, unless you release this report earlier, we will not distribute it until 30 days from the date of this letter. At that time, we will send copies to relevant congressional committees and subcommittees, to the Secretary of Homeland Security, to members of the NCR Senior Policy Group, and to other interested parties. 
We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report or wish to discuss it further, please contact me at (202) 512-8777 or Patricia A. Dalton, Director, (202) 512-6737. Key contributors to this report are listed in appendix V. We met with and obtained documentation from officials of the Department of Homeland Security (DHS), the Federal Emergency Management Agency (FEMA), and the Office for Domestic Preparedness; the Metropolitan Washington Council of Governments (WashCOG); the homeland security advisers and officials from the emergency management agencies for the District of Columbia, Maryland, and Virginia; and first responder officials from the National Capital Region (NCR) jurisdictions, including the District of Columbia; the city of Alexandria; the counties of Arlington, Fairfax, Loudoun, and Prince William in Virginia; and Montgomery and Prince George’s counties in Maryland. To determine what federal funds have been provided to local jurisdictions for emergency preparedness, for what specific purposes, and from what sources, we met with officials from DHS’s Office for National Capital Region Coordination (ONCRC), ONCRC’s Senior Policy Group, FEMA, homeland security advisers for the District of Columbia, Maryland, and Virginia, and first responders from eight jurisdictions within NCR—the District of Columbia; the city of Alexandria; and Arlington, Fairfax, Loudoun, Prince William, Montgomery, and Prince George’s counties. We identified 25 emergency preparedness programs that provided funding to NCR, and we selected 16 for our detailed review.
These 16 programs were selected to cover a range of programs, including the largest funding sources; grants provided for general purposes such as equipment and training; and grants provided for specific purposes, such as fire prevention and bioterrorism. We obtained and reviewed the emergency preparedness grant data for the period of October 2001 through September 30, 2003, including grant awards, budgets, and detailed plans for purchases, such as equipment and supplies, communications, and training and exercises. To the extent possible, we independently verified the data we received on funds available and the planned and actual use of those funds by comparing federal, state, and local data sources. Our review revealed the lack of consistent data reported by the jurisdictions in the region and the lack of a central source for such data. For example, NCR state and local jurisdictions vary in their ability to provide budget information on the emergency preparedness and homeland security grants they received. Also, DHS and ONCRC do not have systems in place to account for all federal homeland security and emergency preparedness grants covering their respective jurisdictions. To determine the regional coordination practices and remaining challenges to implementing regional preparedness programs in NCR, we met with officials from WashCOG, DHS, Virginia, Maryland, and local NCR jurisdictions. Oral and documentary evidence obtained from these officials has provided us with an overall perspective on the status of coordination for homeland security within the region and remaining challenges to implementing effective homeland security measures in NCR. We also talked with officials about regional programs that have been successfully implemented in NCR. 
To determine the gaps that exist in emergency preparedness in NCR, we obtained oral and documentary information from officials of the Metropolitan Washington Council of Governments; DHS; the District of Columbia, Maryland, and Virginia emergency management agencies; homeland security advisers; and local first responders. Our discussions with these officials provide their views of the state of preparedness in NCR. We also obtained information from these officials regarding their plans to address those emergency preparedness gaps. In addition, we reviewed relevant reports, studies, and guidelines to provide context for assessing preparedness. However, there are no uniform standards or criteria by which to measure gaps, and self-reported information from local jurisdictions may not be objective. To determine DHS’s role in enhancing the preparedness of NCR through coordinating the use of federal emergency preparedness grants, assessing preparedness, providing guidance, targeting funds to enhance preparedness, and monitoring the use of those funds, we met with DHS, as well as with state homeland security advisers, state emergency management officials, and local first responders. We obtained and analyzed verbal and documentary evidence on the ODP assessment completed by the NCR jurisdictions, and how that assessment was used, as well as other actions DHS had taken to facilitate homeland security coordination within NCR. Finally, we contacted the District of Columbia Auditor, the Maryland Office of Legislative Audits, and the Virginia Joint Legislative Audit and Review Commission to inform them of our review and determine if the agencies had related past or ongoing work. None of the agencies had conducted or planned to conduct reviews of emergency preparedness or homeland security in the NCR. We conducted our review from June 2003 to February 2004 in accordance with generally accepted government auditing standards. 
NCR jurisdictions over the years have implemented various mechanisms to ensure planned and coordinated interjurisdictional approaches to the activities of first responders and other public safety professionals. These efforts involve the activities of regional planning and coordinating bodies, such as the Metropolitan Washington Council of Governments (WashCOG), the regional metropolitan planning organization, and mutual assistance agreements between the first responders of neighboring NCR jurisdictions. Planning and coordinating bodies have existed in NCR for many years. WashCOG is a regional entity that includes all the jurisdictions within the region. Other planning and coordinating organizations exist in both Maryland and Virginia. WashCOG is a nonprofit association representing local governments in the District of Columbia, suburban Maryland, and Northern Virginia. Founded in 1957, WashCOG is supported by financial contributions from its 19 participating local governments, federal and state grants and contracts, and donations from foundations and the private sector. WashCOG’s members are the governing officials from local NCR governments, plus area delegation members from the Maryland and Virginia legislatures, the U.S. Senate, and the House of Representatives. According to WashCOG, the council provides a focus for action and develops regional responses to such issues as the environment, affordable housing, economic development, health and family concerns, human services, population growth, public safety, and transportation. The full membership, acting through its board of directors, sets WashCOG policies. The National Capital Region Preparedness Council is an advisory body that makes policy recommendations to the board of directors and makes procedural and other recommendations to various regional agencies with emergency preparedness responsibilities or operational response authority. The council also oversees the regional emergency coordination plan.
Other regional coordinating bodies exist in the National Capital Region, including the Northern Virginia Regional Commission (NVRC), the Maryland Terrorism Forum, and the Maryland Emergency Management Assistance Compact. NVRC is one of the 21 planning district commissions in Virginia. A 42-member board of commissioners, composed of elected officials and citizen representatives all appointed by 14 member localities, establishes NVRC’s programs and policies. The commission is supported by annual contributions from its member local governments, by appropriations of the Virginia General Assembly, and by grants from federal and state governments and private foundations. According to an NVRC official, the commission established an emergency management council to coordinate programs, funding issues, and equipment needs. The emergency management council is composed of local chief administrative officers, fire chiefs, police chiefs, and public works managers. In 1998, the Governor of Maryland established the Maryland Terrorism Forum to prepare the state to respond to acts of terrorism, especially those involving weapons of mass destruction. The forum also serves as the key means of integrating all services within federal, state, and local entities as well as key private organizations. The forum’s executive committee, composed of agency directors and cabinet members, provides policy guidance and recommendations to the steering committee, which addresses policy concerns. According to Maryland Emergency Management Agency (MEMA) officials, the forum’s first focus was on planning in terms of equipment interoperability; evacuation planning; and commonality of standards, procedures, and vocabulary. The forum is in the process of hiring a full-time planner for preparedness assessment and strategic planning for the region.
The terrorist attacks in New York City and on the Pentagon on September 11, 2001, security preparations during the World Bank demonstrations, and the sniper incidents in the summer and fall of 2002 highlighted the need for enhanced mutual cooperation and aid in responding to emergencies. Several NCR jurisdiction public safety officials told us that mutual aid agreements have worked well and are examples of regional programs that have been successfully implemented in NCR. Mutual aid agreements provide a structure for assistance and for sharing resources among jurisdictions in preparing for and responding to emergencies and disasters. Because individual jurisdictions may not have all the resources they need to acquire equipment and respond to all types of emergencies and disasters, these agreements allow for resources to be regionally distributed and quickly deployed. These agreements provide opportunities for state and local governments to share services, personnel, supplies, and equipment. Mutual aid agreements can be both formal and informal and provide cooperative planning, training, and exercises in preparation for emergencies and disasters. For over 40 years, jurisdictions in the National Capital Region have been supporting one another through mutual aid agreements. According to a WashCOG official, the agency has brokered and facilitated most of these agreements and acts as an informal secretariat for mutual aid issues. According to WashCOG, there are currently 21 mutual aid agreements in force among one or more of the 18 member jurisdictions, covering one or more issues. These can be as broad as a police services support agreement among 12 jurisdictions and as restricted as a two-party agreement relating to control over the Woodrow Wilson Bridge. In September 2001, for example, WashCOG member jurisdictions developed planning guidance for health system response to a bioterrorism event in NCR. 
The purpose of this guidance is to strengthen the health care response systems allowing them to, among other things, improve early recognition and provide mass care. According to WashCOG, the planning guidance was developed through the cooperative effort of more than 225 individuals representing key government and private elements within NCR that would likely be involved should such an event occur. The Maryland Emergency Management Assistance Compact is a mutual aid compact established to help Maryland’s local jurisdictions support one another with their resources during emergencies and disasters and facilitate efficient operational procedures. The compact establishes partnerships among local jurisdictions so that resources can be requested and provided in response to emergencies and disasters. In addition to helping local governments and their emergency response agencies develop risk management decisions, the compact provides a framework that will increase accessibility for maximum compensation in federally declared disasters. The compact, established by legislation in June 2002, is modeled after the Emergency Management Assistance Compact, with 48 states and two U.S. territories participating in interstate mutual aid. In addition to those mentioned above, Ernie Hazera and Amelia Shachoy (Strategic Issues) and Wendy Johnson, Jack Bagnulo, David Brown, and R. Rochelle Burns (Homeland Security and Justice) made key contributions to this report.

Since the tragic events of September 11, 2001, the National Capital Region (NCR), comprising the District of Columbia and surrounding jurisdictions in Maryland and Virginia, has been recognized as a significant potential target for terrorism.
GAO was asked to report on (1) what federal funds have been allocated to NCR jurisdictions for emergency preparedness; (2) what challenges exist within NCR to organizing and implementing efficient and effective regional preparedness programs; (3) what gaps, if any, remain in the emergency preparedness of NCR; and (4) what has been the role of the Department of Homeland Security (DHS) in NCR to date. In fiscal years 2002 and 2003, grant programs administered by the Departments of Homeland Security, Health and Human Services, and Justice awarded about $340 million to eight NCR jurisdictions to enhance emergency preparedness. Of this total, the Office for National Capital Region Coordination (ONCRC) targeted all of the $60.5 million Urban Area Security Initiative funds for projects designed to benefit NCR as a whole. However, there was no coordinated regionwide plan for spending the remaining funds (about $279.5 million). Local jurisdictions determined the spending priorities for these funds and reported using them for emergency communications and personal protective equipment and other purchases. NCR faces several challenges in organizing and implementing efficient and effective regional preparedness programs, including the lack of a coordinated strategic plan for enhancing NCR preparedness, performance standards, and a reliable, central source of data on funds available and the purposes for which they were spent. Without these basic elements, it is difficult to assess first responder capacities, identify first responder funding priorities for NCR, and evaluate the effectiveness of the use of federal funds in enhancing first responder capacities and preparedness in a way that maximizes their effectiveness in improving homeland security.
Biomonitoring—one technique for assessing people’s exposure to chemicals—involves measuring the concentration of chemicals or their by-products in human specimens, such as blood or urine. While biomonitoring has been used to monitor chemical exposures for decades, more recently, advances in analytic methods have allowed scientists to measure more chemicals, in smaller concentrations, using smaller samples of blood or urine. As a result, biomonitoring has become more widely used for a variety of applications, including public health research and measuring the impact of certain environmental regulations, such as the decline in blood lead levels following declining levels of gasoline lead. CDC conducts the most comprehensive biomonitoring program in the country under its National Biomonitoring Program and published the first, second, third, and fourth National Report on Human Exposure to Environmental Chemicals—in 2001, 2003, 2005, and 2009, respectively—which reported the concentrations of certain chemicals or their by-products in the blood or urine of a representative sample of the U.S. population. For each of these reports, the CDC has increased the number of chemicals studied—from 27 in the first report, to 116 in the second, to 148 in the third, and to 212 in the fourth. Each report is cumulative (containing all the results from previous reports). These reports provide the most comprehensive assessment to date of the exposure of the U.S. population to chemicals in our environment, including such chemicals as acrylamide, arsenic, BPA, triclosan, and perchlorate. These reports have provided a window into the U.S. population’s exposure to chemicals, and the CDC continues to develop new methods for collecting data on additional chemical exposures with each report. For decades, government regulators have used risk assessment to understand the health implications of commercial chemicals.
Researchers use this process to estimate how much harm, if any, can be expected from exposure to a given contaminant or mixture of contaminants and to help regulators determine whether the risk is significant enough to require banning or regulating the chemical or other corrective action. Biomonitoring research is difficult to integrate into this risk assessment process, since estimates of human exposure to chemicals have historically been based on the concentration of these chemicals in environmental media and on information about how people are exposed. Biomonitoring data, however, provide a measure of internal dose that is the result of exposure to all environmental media and depend on how the human body processes and excretes the chemical. EPA has made limited use of biomonitoring data in its assessments of risks posed by chemicals. As we previously reported, one major reason for the agency’s limited use of such data is that, to date, there are no biomonitoring data for most commercial chemicals. The most comprehensive biomonitoring effort providing data relevant to the entire U.S. population includes only 212 chemicals, whereas EPA is currently focusing its chemical assessment and management efforts on the more than 6,000 chemicals that companies produce in quantities of more than 25,000 pounds per year at one site. Current biomonitoring efforts also provide little information on children. Large-scale biomonitoring studies generally omit children because it is difficult to collect biomonitoring data from them. For example, some parents are concerned about the invasiveness of taking blood samples from their children, and certain other fluids, such as umbilical cord blood or breast milk, are available only in small quantities and only at certain times. Thus, when samples are available from children, they may not be large enough to analyze. 
A second reason we reported for the agency’s limited use of biomonitoring data is that EPA often lacks the additional information needed to make biomonitoring studies useful in its risk assessment process. In this regard, biomonitoring provides information only on the level of a chemical in a person’s body but not the health impact. The detectable presence of a chemical in a person’s blood or urine does not necessarily mean that the chemical causes harm. While exposure to larger amounts of a chemical may cause an adverse health impact, a smaller amount may be of no health consequence. In addition, biomonitoring data alone do not indicate the source, route, or timing of the exposure, making it difficult to identify the appropriate risk management strategies. For most of the chemicals studied under current biomonitoring programs, more data on chemical effects are needed to understand whether the levels measured in people pose a health concern, but EPA’s ability to require chemical companies to develop such data is limited. As a result, EPA has made few changes to its chemical risk assessments or safeguards in response to the recent proliferation of biomonitoring data. For most chemicals, EPA would need additional data on the following to incorporate biomonitoring into risk assessment: health effects; the sources, routes, and timing of exposure; and the fate of a chemical in the human body. However, as we have discussed in prior reports, EPA will face difficulty in using its authorities under TSCA to require chemical companies to develop health and safety information on the chemicals. In January 2009, we added transforming EPA’s process for assessing and controlling toxic chemicals to our list of high-risk areas warranting attention by Congress and the executive branch. Subsequently, the EPA Administrator set forth goals for updated legislation that would give EPA the mechanisms and authorities to promptly assess and regulate chemicals. 
EPA has used some biomonitoring data in chemical risk assessment and management, but only when additional studies have provided insight on the health implications of the biomonitoring data. For example, EPA was able to use biomonitoring data on methylmercury—a neurotoxin that accumulates in fish—because studies have drawn a link between the level of this toxin in human blood and adverse neurological effects in children. EPA also used both biomonitoring and traditional risk assessment information to take action on certain perfluorinated chemicals. These chemicals are used in the manufacture of consumer and industrial products, including nonstick cookware coatings; waterproof clothing; and oil-, stain-, and grease-resistant surface treatments. EPA has several biomonitoring research projects under way, but the agency has no system in place to track progress or assess the resources needed specifically for biomonitoring research. For example, EPA awarded grants that are intended to advance the knowledge of children’s exposure to pesticides through the use of biomonitoring and of the potential adverse effects of these exposures. The grants issued went to projects that, among other things, investigated the development of less invasive biomarkers than blood samples—such as analyses of saliva or hair samples—and measures of early brain development. Furthermore, EPA has studied the presence of an herbicide in 135 homes with preschool-age children by analyzing soil, air, carpet, dust, food, and urine as well as samples taken from subjects’ hands. The study shed important light on how best to collect urine samples that reflect the external dose of the herbicide and how to develop models that simulate how the body processes specific chemicals. Nonetheless, EPA does not separately track spending or staff time devoted to biomonitoring research. Instead, it places individual biomonitoring research projects within its larger Human Health Research Strategy.
While this strategy includes some goals relevant to biomonitoring, EPA has not systematically identified and prioritized the data gaps that prevent it from using biomonitoring data. Nor has it systematically identified the resources needed to reach biomonitoring research goals or the chemicals that need the most additional biomonitoring-related research. Also, EPA has not coordinated its biomonitoring research with that of the many agencies and other groups involved in biomonitoring research, which could impair its ability to address the significant data gaps in this field of research. In addition to the CDC and EPA, several other federal agencies have been involved in biomonitoring research, including the U.S. Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry, entities within that department’s National Institutes of Health (NIH), and the U.S. Department of Labor’s Occupational Safety and Health Administration. Several states have also initiated biomonitoring programs to examine state and local health concerns, such as arsenic in local water supplies or populations with high fish consumption that may increase mercury exposure. Furthermore, some chemical companies have for decades monitored their workforce for chemical exposure, and chemical industry associations have funded biomonitoring research. Finally, some environmental organizations have conducted biomonitoring studies of small groups of adults and children, including one study on infants. As we previously reported, a national biomonitoring research plan could help better coordinate research and link data needs with collection efforts. EPA has suggested chemicals for future inclusion in the CDC’s National Biomonitoring Program but has not gone any further toward formulating an overall strategy to address data gaps and ensure the progress of biomonitoring research.
We have previously noted that to begin addressing the need for biomonitoring research, federal agencies will need to strategically coordinate their efforts and leverage their limited resources. Similarly, the National Academy of Sciences found that the lack of a coordinated research strategy allowed widespread exposures to go undetected, including exposure to flame retardants known as polybrominated diphenyl ethers—chemicals which may cause liver damage, among other things, according to some toxicological studies. The academy noted that a coordinated research strategy would require input from various agencies involved in biomonitoring and supporting disciplines. In addition to EPA, these agencies include the CDC, NIH, the Food and Drug Administration, and the U.S. Department of Agriculture. Such coordination could strengthen efforts to identify and possibly regulate the sources of the exposure detected by biomonitoring, since the most common sources—that is, food, environmental contamination, and consumer products—are under the jurisdiction of different agencies. We have recommended that EPA develop a comprehensive research strategy to improve its ability to use biomonitoring in its risk assessments. However, though EPA agreed with our recommendation, the agency still lacks such a comprehensive strategy to guide its own research efforts. In addition, we recommended that EPA establish an interagency task force that would coordinate federal biomonitoring research efforts across agencies and leverage available resources. If EPA determines that further authority is necessary, we stated that it should request that the Executive Office of the President establish an interagency task force to coordinate such efforts. Nonetheless, EPA has not established such an interagency task force to coordinate federal biomonitoring research, nor has it informed us that it has requested the Executive Office of the President do so.
EPA has not determined the extent of its authority to obtain biomonitoring data under TSCA, and this authority is generally untested and may be limited. Several provisions of TSCA are potentially relevant. For example, under section 4 of TSCA, EPA can require chemical companies to test chemicals for their effects on health or the environment. However, biomonitoring data indicate only the presence of a chemical in a person’s body and not its impact on the person’s health. EPA told us that biomonitoring data may demonstrate chemical characteristics that would be relevant to a chemical’s effects on health or the environment and that the agency could theoretically require that biomonitoring be used as a methodology for developing such data. EPA’s specific authority to obtain biomonitoring data in this way is untested, however, and EPA is only generally authorized to require the development of such data after meeting certain threshold risk requirements that are difficult, expensive, and time-consuming. EPA may also be able to indirectly require the development of biomonitoring data using the leverage it has under section 5(e) of TSCA, though it has not yet attempted to do so. Under certain circumstances, EPA can use this section to seek an injunction to limit or prohibit the manufacture of a chemical. As an alternative, EPA sometimes issues a consent order that subjects manufacture to certain conditions, including testing, which could include biomonitoring. While EPA may not be explicitly authorized to require the development of such test data under this section, chemical companies have an incentive to provide the requested test data to avoid a more sweeping ban on a chemical’s manufacture. EPA has not indicated whether it will use section 5(e) consent orders to require companies to submit biomonitoring data. Other TSCA provisions allow EPA to collect existing information on chemicals that a company already has, knows about, or could reasonably ascertain.
For example, section 8(e) requires chemical companies to report to EPA any information they have obtained that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment. EPA asserts that biomonitoring data are reportable as demonstrating a substantial risk if the chemical in question is known to have serious toxic effects and the biomonitoring data indicate a level of exposure previously unknown to EPA. Industry has asked for more guidance on this point, but EPA has not yet revised its guidance. Confusion over the scope of EPA’s authority to collect biomonitoring data under section 8(e) is highlighted by the history leading up to an EPA action against the chemical company E. I. du Pont de Nemours and Company (DuPont). Until 2000, DuPont used the chemical PFOA to make Teflon®. In 1981, DuPont took blood from several female workers and two of their babies. The levels of PFOA in the babies’ blood showed that PFOA had crossed the placental barrier. DuPont also tested the blood of 12 community members, 11 of whom had elevated levels of PFOA in their blood. DuPont did not report either set of results to EPA. After EPA received the results from a third party, DuPont argued that the information was not reportable under TSCA because the mere presence of PFOA in blood did not itself support the conclusion that exposure to PFOA posed any health risks. EPA subsequently filed two actions against DuPont for violating section 8(e) of TSCA by failing to report the biomonitoring data, among other claims. DuPont settled the claims but did not admit that it should have reported the data. However, based on the data it had received, EPA conducted a subsequent risk assessment, which contributed to a finding that PFOA was “likely to be carcinogenic to humans.” In turn, this finding contributed to an agreement by DuPont and others to phase out the use of PFOA by 2015.
However, EPA’s authority to obtain biomonitoring data under section 8(e) of TSCA remains untested in court. Given the uncertainties regarding TSCA authorities, we have recommended that EPA should determine the extent of its legal authority to require companies to develop and submit biomonitoring data under TSCA. We also recommended that EPA request additional authority from Congress if it determines that such authority is necessary. If EPA determines that no further authority is necessary, we recommended that it develop formal written policies explaining the circumstances under which companies are required to submit biomonitoring data. However, EPA has not yet attempted a comprehensive review of its authority to require the companies to develop and submit biomonitoring data. The agency did not disagree with our recommendation, but commented that a case-by-case explanation of its authority might be more useful than a global assessment. However, we continue to believe that an analysis of EPA’s legal authority to obtain biomonitoring data is critical. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of this Subcommittee may have. For further information about this testimony, please contact John B. Stephenson at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include David Bennett, Antoinette Capaccio, Ed Kratzer, and Ben Shouse. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
| Biomonitoring, which measures chemicals in people's tissues or body fluids, has shown that the U.S. population is widely exposed to chemicals used in everyday products. Some of these have the potential to cause cancer or birth defects. Moreover, children may be more vulnerable to harm from these chemicals than adults. The Environmental Protection Agency (EPA) is authorized under the Toxic Substances Control Act (TSCA) to control chemicals that pose unreasonable health risks. One crucial tool in this process is chemical risk assessment, which involves determining the extent to which populations will be exposed to a chemical and assessing how this exposure affects human health. This testimony, based on GAO's prior work, reviews the (1) extent to which EPA incorporates information from biomonitoring studies into its assessments of chemicals, (2) steps that EPA has taken to improve the usefulness of biomonitoring data, and (3) extent to which EPA has the authority under TSCA to require chemical companies to develop and submit biomonitoring data to EPA. EPA has made limited use of biomonitoring data in its assessments of risks posed by commercial chemicals. One reason is that biomonitoring data relevant to the entire U.S. population exist for only 212 chemicals. In addition, biomonitoring data alone indicate only that a person was somehow exposed to a chemical, not the source of the exposure or its effect on the person's health. For most of the chemicals studied under current biomonitoring programs, more data on chemical effects are needed to understand if the levels measured in people pose a health concern, but EPA's authorities to require chemical companies to develop such data are limited. However, in September 2009, the EPA Administrator set forth goals for updated legislation to give EPA additional authorities to obtain data on chemicals. 
While EPA has initiated several research programs to make biomonitoring more useful to its risk assessment process, it has not developed a comprehensive strategy for this research that takes into account its own research efforts and those of the multiple federal agencies and other organizations involved in biomonitoring research. EPA does have several important biomonitoring research efforts, including research into the relationships between exposure to harmful chemicals, the resulting concentration of those chemicals in human tissue, and the corresponding health effects. However, without a plan to coordinate its research efforts, EPA has no means to track progress or assess the resources needed specifically for biomonitoring research. Furthermore, according to the National Academy of Sciences, the lack of a coordinated national research strategy has allowed widespread chemical exposures to go undetected, such as exposures to flame retardants. While EPA agreed with GAO's recommendation that EPA develop a comprehensive research strategy, the agency has not yet done so. EPA has not determined the extent of its authority to obtain biomonitoring data under TSCA, and this authority is untested and may be limited. The TSCA section that authorizes EPA to require companies to develop data focuses on health and environmental effects of chemicals. However, biomonitoring data indicate only the presence of a chemical in the body, not its impact on health. It may be easier for EPA to obtain biomonitoring data under other TSCA sections, which allow EPA to collect existing information on chemicals. For example, TSCA obligates chemical companies to report information that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment. EPA asserts that biomonitoring data are reportable if a chemical is known to have serious toxic effects and biomonitoring data indicate a level of exposure previously unknown to EPA. 
EPA took action against a chemical company under this authority in 2004. However, the action was settled without an admission of liability by the company, so EPA's authority to obtain biomonitoring data remains untested. GAO's 2009 report recommended that EPA clarify this authority, but it has not yet done so. The agency did not disagree, but commented that a case-by-case explanation of its authority might be more useful than a global assessment. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Suspensions and debarments apply governmentwide—one agency’s suspension or debarment decision precludes all other agencies from doing business with an excluded party. Suspensions and debarments may be either statutory or administrative. Statutory debarments, also referred to as declarations of ineligibility, are based on violation of law, such as statutory requirements to pay minimum wages. Administrative debarments are based on the causes specified in the FAR, including commission of offenses such as fraud, theft, bribery, or tax evasion. In 1988, the Nonprocurement Common Rule (NCR) was implemented to provide a parallel process to the FAR for suspending and debarring parties from receiving federal grants, loans, and other nonprocurement transactions. The FAR and NCR provide for reciprocity with each other—that is, any exclusion under the FAR shall be recognized under NCR, and any exclusion under NCR shall be recognized under the FAR. Exclusions of companies or individuals from federal contracts (procurements) or other federal funding such as grants (nonprocurements), as well as declarations of ineligibility, are listed in EPLS, a Web-based system maintained by GSA. EPLS also includes an archive of expired exclusions. Agencies are required to report all excluded parties by entering data directly into the database within 5 working days after the exclusion becomes effective. The FAR includes a list of the information to be included in EPLS, such as the contractor’s name and address, contractor identification number, the cause of the action, the period of the exclusion, and the name of the agency taking the action. From January 1995 to November 2004, the number of exclusion actions taken each year by all agencies governmentwide has ranged from about 3,400 in 1995 to almost 7,000 in 2002, with an average of 5,700 actions taken annually (see fig. 1). 
In November 2004, the number of current exclusions governmentwide totaled about 32,500, about 3,500 of which were the result of statutory debarments. Of this governmentwide total, EPLS showed that the 6 agencies we reviewed had excluded about 2,400 parties, 617 of which were the result of statutory debarments by EPA, based on violations of the Clean Water and Clean Air Acts (see fig. 2). For exclusion actions taken each year by the six selected agencies from 1995 to 2004, see appendix III. In 1987, we reported that the suspension and debarment regulations and procedures generally provided an effective tool for protecting the government against doing business with fraudulent, unethical, or nonperforming contractors. We noted, however, that there was a need for timely access to a governmentwide list of excluded parties. We also identified areas for improvement in the process and recommended amendments to the FAR. The following recommendations have been implemented: (1) that governmentwide exclusions be extended to contractors proposed for debarment; (2) that the definition of affiliate, i.e., related firms or those under common control, include a description of indicators of control, such as common management or ownership; (3) that suspended and debarred contractors also be excluded from subcontracting under government contracts; and (4) that the extent to which orders placed under certain contractual arrangements—such as multiple awards schedules, basic ordering agreements, and indefinite quantity contracts—are covered by exclusions be clarified. The FAR prescribes general policies governing the circumstances under which contractors may be excluded from federal contracting, requires agencies to establish a process for determining exclusions, and allows agencies the flexibility to supplement the FAR to implement the process. 
The supplements to the FAR and additional guidance developed by 24 agencies generally designate internal responsibilities for suspension and debarment procedures and intra-agency coordination. As an alternative to exclusion, agencies sometimes enter into administrative agreements with contractors with whom they believe there is a continuing need to do business. These agreements can encourage changes in business practices designed to promote contractor responsibility. In limited circumstances, an agency may continue to do business with excluded contractors. The FAR requires federal agencies to conduct business only with responsible contractors and prescribes overall suspension and debarment policies. A suspension may be imposed only when an agency determines that immediate action is necessary to protect the government’s interests. To initiate a suspension, an agency must have adequate evidence that the party has committed certain civil or criminal offenses or that there is another compelling cause affecting the contractor’s present responsibility. Generally, legal proceedings must begin within 12 months or the suspension terminates. To initiate a debarment, an agency must have evidence of conviction or civil judgment for certain offenses, a preponderance of evidence that the party has committed certain offenses, such as serious failure to perform to the terms of a contract, or any other cause of so serious or compelling a nature that it affects the contractor’s present responsibility. The agency debarring official is responsible for determining whether debarment is in the government’s interest, and the FAR states that the seriousness of the contractor’s actions and any remedial measures or mitigating factors should be considered. Generally, the period of debarment should not exceed 3 years. Figure 3 provides a general overview of the suspension and debarment process. 
The FAR allows agencies flexibility to supplement FAR provisions and develop guidance based on agency needs. The 24 agencies we reviewed had included suspension and debarment policies in FAR supplements; 21 had also adopted NCR; and 12 had developed additional guidance, such as directives and policy memos to implement their suspension and debarment processes (see table 1). The additional guidance generally designates responsibilities for suspension and debarment procedures and addresses intra-agency coordination. Each of the six agencies we reviewed in depth—the Air Force, Army, Navy, Defense Logistics Agency, EPA, and GSA—has included suspension and debarment policies in FAR supplements, adopted NCR, and developed guidance for implementing suspension and debarment procedures: The Defense Federal Acquisition Regulation Supplement (DFARS) designates suspension and debarment officials in the various DOD organizations—including the Air Force, Army, Navy, and Defense Logistics Agency—and a process for waiving contractor exclusions for compelling reasons. In addition, in September 1992, the Under Secretary of Defense for Acquisition issued guidance stating that (1) when appropriate, before action is taken on suspension, a contractor should be informed that DOD has extremely serious concerns with the contractor’s conduct, and the contractor should be allowed to provide information on its behalf, and (2) DOD debarring officials should coordinate fully within DOD, and in certain cases among civilian agencies, to determine the possible effects of the suspensions and debarments on other organizations as well as to receive additional information that may affect the exclusion decision. EPA’s Acquisition Regulation, a FAR supplement, designates the roles of various officials and clarifies EPA’s suspension and debarment procedures. 
An August 1993 memorandum of understanding provides specific responsibilities for EPA’s Office of Acquisition Management and Office of Grants and Debarment in the processing of suspension and debarment actions. In addition, EPA has established guidance on initiating a suspension or debarment action. EPA also included a specific section in NCR addressing EPA’s statutory disqualifications under the Clean Air and Clean Water Acts. GSA also supplemented the FAR with a regulation that designates the roles of various officials and clarifies suspension and debarment procedures. The GSA Acquisition Manual contains similar language to the FAR supplement. In addition, GSA’s Office of Inspector General Operations Manual outlines responsibilities for investigating cases, coordinating with law enforcement agencies, and making referrals to GSA’s suspension and debarment officials. In November 2002, GSA issued an internal order concerning the requirement for legal review of suspension and debarment decisions. Each of the agencies we reviewed established an organizational structure that identifies the lead office, responsibilities, and staffing to manage their suspension and debarment activities. (See app. IV for a summary of each agency’s suspension and debarment organizational structure.) Table 2 shows specific actions reported by the six agencies we reviewed during fiscal year 2004. Administrative agreements, also referred to as compliance agreements, provide an alternative to exclusion when contractors that are being considered for suspension or debarment have addressed the cause of the problem through actions such as disciplining individuals, revising internal controls, and disclosing problems to the appropriate government agency in a timely manner. Under administrative agreements, contractors agree to meet certain requirements and may continue to enter into contracts with the government. 
Agency officials said that reaching administrative agreements with contractors can serve the government’s interest by improving contractor responsibility, ensuring compliance through monitoring the requirements of the agreement, and maintaining competition among contractors. Administrative agreements can be negotiated at any point in the suspension and debarment process, such as when a contractor independently acknowledges a problem, but the agencies we reviewed in depth said these agreements are most commonly negotiated as an alternative to debarment. These agreements generally follow a consistent format, emphasize corporate ethics programs, and are in effect for a period of 3 years. Table 3 summarizes the key contractor requirements included in the agreements we reviewed. While administrative agreements provide an alternative to exclusion, agencies can continue to do business with excluded contractors in limited circumstances through the use of waivers by making a determination that there is a compelling reason to award a contract to an excluded party. This determination requires a written explanation of the reason for doing business with an excluded contractor, such as an urgent need for the contractor’s supplies or services, or that the contractor is the only known source. Of the six agencies we reviewed, only the Air Force and the Army reported that compelling reason waivers had been issued over the past 2 years. The Air Force reported that three waivers had been granted—in August and September 2003 and in August 2004—to continue contracting with the Boeing Company for launch services for military space equipment based on national security concerns and to mitigate program schedule and cost risks. In fiscal year 2004, the Air Force issued one waiver for sole-source reasons, and the Army issued four waivers based on urgent need. 
Suspension and debarment constitutes exclusion of all divisions or other organizational elements of the contractor, unless the exclusion decision is otherwise limited. Exclusions may extend to affiliates, if named in the suspension or debarment notice and decision. Organizational entities of excluded contractors that can demonstrate independence may be allowed to receive government contracts. The information in EPLS may be insufficient to enable contracting officers to determine with confidence that a prospective contractor is not currently suspended, debarred, or proposed for debarment. Further, information on administrative agreements and compelling reason waivers is not routinely shared among agencies or captured centrally in a database such as EPLS. The Interagency Suspension and Debarment Committee (ISDC), which monitors the suspension and debarment system, provides a useful forum for sharing information among suspension and debarment officials. The FAR requires agencies to enter various information on contractors into EPLS, including contractors’ and grantees’ Data Universal Numbering System (DUNS) number—a unique nine-digit identification number assigned by Dun & Bradstreet, Inc. to identify unique business entities. We found, however, that while the EPLS database has a field for entering contractors’ DUNS numbers, it is not a required field in the database, and the data appear to be routinely omitted from the database. For the 6 agencies we reviewed in depth, about 99 percent of records in the EPLS database as of November 2004 did not have DUNS contractor identification numbers. To ensure that excluded contractors do not unintentionally receive new contracts during the period of exclusion, the FAR and NCR require contracting officers and awarding officials to consult EPLS and identify any competing contractors that have been suspended or debarred. 
Because EPLS lacks unique identifiers for most of the cases for the six agencies we reviewed in depth, contracting officers use the competing contractor’s name to search the system to determine whether a prospective contractor has been excluded from doing business with the federal government. However, a contractor’s name as it appears in a bid or proposal may not be the same as in EPLS. For example, the XYZ Company may submit bids or proposals using “XYZ Company” but appear as “XYZ” in EPLS. Therefore, if the contracting officer searched for an exact match, EPLS would not identify the company. Searching for partial matches would fail to identify companies that have changed their names. According to agency suspension and debarment officials, contracting officers have overlooked excluded contractors when using EPLS, due in part to not being able to match contractor names. Though agency officials could not recall specific cases, they said that this difficulty in matching names is more likely to occur in cases in which contractors have changed their names. We too had difficulty matching names using EPLS. For example, because of the various ways a contractor’s name might be entered in the database and because contractor names sometimes change over time, we could not be assured that we identified all contractors that have been excluded more than once. We also attempted to match contractors’ names in EPLS and FPDS—the database containing government contracting actions—to determine whether excluded contractors had received new contracts during a period of exclusion. Although this effort did not produce any matches, we cannot conclude with confidence that excluded contractors are not receiving new contracts because of the lack of consistency regarding contractor names both between and within the databases. This problem has been longstanding. In our 1987 report, we noted similar difficulties in matching data from the list of excluded parties with FPDS data. 
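The matching difficulty described above can be sketched in a few lines of Python. The records, names, and normalization below are invented illustrations, not the actual EPLS or FPDS schemas: the point is only that an exact name search misses name variants, while a unique identifier such as a DUNS number matches reliably when the exclusion record actually carries one.

```python
# Hypothetical exclusion list; records are invented for illustration.
epls = [
    {"name": "XYZ", "duns": "123456789"},    # excluded, identifier present
    {"name": "ACME SVCS", "duns": None},     # excluded, identifier missing
]

def excluded_by_name(bid_name):
    """Exact-string search of the exclusion list by contractor name."""
    return any(rec["name"] == bid_name for rec in epls)

def excluded_by_duns(bid_duns):
    """Identifier search; robust to name changes and variant spellings,
    but useless for records that were entered without an identifier."""
    return any(rec["duns"] == bid_duns for rec in epls if rec["duns"])

# A bid from the excluded firm under a fuller legal name:
print(excluded_by_name("XYZ Company"))  # False -- the name variant is missed
print(excluded_by_duns("123456789"))    # True  -- the identifier still matches
```

A contracting officer relying on the first search would overlook the excluded firm; the second search only works to the extent that identifiers are populated, which is why the near-total absence of DUNS numbers in EPLS matters.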
Despite our findings, the problem continues, increasing the risk that suspended or debarred contractors will be awarded new contracts during a period of exclusion. The overall reliability of reported data is also a concern. According to GSA officials, responsibility for ensuring data reliability rests with the agencies entering data into EPLS. GSA does not know, however, whether agencies have tested the reliability of their EPLS data. The absence of information on data reliability makes using the system for oversight or analysis problematic. For example, when we attempted to use EPLS to determine the average length of time of exclusions, we found many records with an indefinite termination date. In some cases, parties are listed as excluded for an indefinite period of time pending the outcome of a case. In nonprocurement cases, parties also may be excluded for an indefinite period of time. However, when a record is entered in EPLS without a termination date, the system defaults to record the termination date as indefinite. In the absence of information on data reliability, there is no way to estimate the extent to which the entries with indefinite termination dates reflect parties that had been excluded for an indefinite period of time or parties for which no termination date had been entered. The Interagency Suspension and Debarment Committee (ISDC) is responsible for coordinating policy, practices, and information sharing on various suspension and debarment issues. The ISDC serves as an interagency forum and conducts monthly meetings for federal agencies’ suspension and debarment officials. While ISDC is not a decision-making body, it develops recommendations for the Office of Management and Budget (OMB) on interagency issues, such as determining which agency should take the lead on a case when more than one agency does business with a particular contractor. 
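The termination-date ambiguity noted above can be illustrated with a small sketch; the records are invented, not actual EPLS entries. Because a genuinely open-ended exclusion and a record whose termination date was simply never entered are both stored as "indefinite," any computed average exclusion length rests on an unknown fraction of the data.

```python
from datetime import date

# Invented exclusion records; end=None stands in for EPLS's default
# "indefinite" termination date, which conflates truly open-ended
# exclusions with records where no date was ever entered.
records = [
    {"start": date(2001, 1, 1), "end": date(2004, 1, 1)},  # 3-year debarment
    {"start": date(2002, 6, 1), "end": None},  # open-ended, or just missing?
    {"start": date(2003, 3, 1), "end": None},
]

dated = [r for r in records if r["end"] is not None]
avg_days = sum((r["end"] - r["start"]).days for r in dated) / len(dated)
share_unknown = 1 - len(dated) / len(records)

# The "average" here rests on 1 of 3 records; the other two cannot be
# classified as either deliberately indefinite or merely incomplete.
print(avg_days)        # 1095.0
print(share_unknown)
```

This is the sense in which the data were insufficiently reliable for oversight analyses such as average exclusion length: the computation runs, but its denominator excludes every record with an ambiguous end date.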
The ISDC reports to OMB’s Office of Federal Financial Management and has been chaired by EPA’s suspension and debarment officer since 1988. In its March 2002 report on interagency coordination, the ISDC emphasized the importance of identifying a lead agency to coordinate with other federal agencies that do business with a contractor before entering into an administrative agreement. In our discussions, several suspension and debarment officials said that, in addition, sharing information on past and current administrative agreements within the broader community of suspension and debarment officials would be useful. They said that when an agency official is considering taking action with respect to a particular contractor, it would be helpful to know whether another agency had ever used an administrative agreement with that contractor, what the terms of the agreement were, and whether the contractor had complied with the agreement. That information is not currently collected centrally nor routinely made available to all suspension and debarment officials. Of the agencies we reviewed, only the Army has taken the initiative to share information on administrative agreements. In February 2005, the Army launched the “Army Fraud Fighter’s Web Site,” which includes a list of contractors with which it has entered into administrative agreements. Similarly, greater sharing of information on compelling reason waivers also would be helpful. We found that information on compelling reason waivers was not readily available from most agencies we reviewed. To obtain information on compelling reason waivers, we had to reconcile the information we collected from the DOD agencies with information we collected from GSA for those agencies. The FAR supplement for DOD requires DOD to provide written notice of any compelling reason waiver determination to GSA, but we had to make repeated requests from DOD agencies and GSA in order to obtain complete information. 
In our view, accountability and transparency of the process would be enhanced were this information routinely collected and reported by all agencies. For example, more information on the use of waivers would allow suspension and debarment officials to evaluate patterns in the use of waivers to determine whether they were used more commonly in some industries than others. They could also assess the rationales cited by agencies in granting waivers to determine whether agencies are applying standards consistently or whether the governmentwide standards are in need of revision. Federal agencies faced with the challenge of ensuring that they only do business with responsible contractors may not be identifying excluded contractors when awarding new contracts. Improving the EPLS database by requiring agencies to enter contractor identification numbers into the system could provide the data needed to enhance agency confidence that excluded contractors can be readily identified. Sharing information among agencies on administrative agreements and compelling reason waivers could also improve the transparency and effectiveness of the suspension and debarment process and thereby help to ensure the government’s interests are protected. To improve the effectiveness of the suspension and debarment process, we are making two recommendations that the Administrator of General Services modify the EPLS database to require contractor identification numbers for all actions entered into the system and the Director of the Office of Management and Budget require agencies to collect and report data on administrative agreements and compelling reason determinations to the Interagency Suspension and Debarment Committee and ensure that these data are available to all suspension and debarment officials. We provided a draft of this report to DOD, EPA, GSA, and OMB for review and comment. DOD provided written comments which are included in appendix V. 
EPA provided technical comments on the draft, and we have incorporated these comments into the report as appropriate. GSA and OMB provided oral comments. DOD generally concurred with our recommendations. In addition to requiring the contractor identification numbers for all actions entered into the system, DOD believes that the EPLS database should include a field for the Contractor and Government Entity (CAGE) code, if available. DOD stated that given the automated procurement system used by many DOD offices, it is important to enable these offices to check for the CAGE code of a prospective contractor in the EPLS database. DOD also provided technical comments on the draft report, and we have revised the draft accordingly. GSA concurred with our recommendation that GSA modify the EPLS database to require contractor identification numbers for all actions entered into the system. GSA stated that it is in the process of competing the EPLS application, and the identification number will be a required field when the updated system becomes operational in fiscal year 2006. In addition, the updated system will be required to interface with the Central Contractor Registration System, which should improve the quality of contractor data in EPLS. The new system also should have greater capability to allow agencies to report information such as the reasons why a party has been excluded. OMB concurred with our recommendation that OMB require agencies to collect and report data on administrative agreements and compelling reason determinations to the Interagency Suspension and Debarment Committee and make this information available to all suspension and debarment officials. As agreed with your offices, unless you release this report earlier, we will not distribute it until 30 days from the date of this letter. 
At that time, we will send copies of this report to the Secretary of Defense, the Administrator of General Services, the Administrator of the Environmental Protection Agency, the Director of the Office of Management and Budget, and interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report were Amelia Shachoy, Assistant Director, Marie Ahearn, Ken Graffam, Mehrunisa Qayyum, Emma Quach, Jeffrey Rose, Karen Sloan, and Cordell Smith. We conducted our work at six agencies—General Services Administration (GSA), Environmental Protection Agency (EPA), and four DOD agencies—Air Force, Army, Defense Logistics Agency (DLA), and Navy. The DOD agencies were selected on the basis of the dollar value of contracting actions reported in the Federal Procurement Data System (FPDS) for fiscal year 2003—the year for which the most recent and complete data were available at the time of our review. We selected GSA because of its central role in federal procurement and in maintaining the Excluded Parties List System (EPLS). We selected EPA because of its active role in suspension and debarment, including its role in chairing the Interagency Suspension and Debarment Committee (ISDC) and in implementing systematic procedures for tracking the status of suspension and debarment cases. Together, these agencies accounted for about 67 percent of fiscal year 2003 federal contract spending, as reported in the FPDS. We also reviewed literature and interviewed government and nongovernment officials, academics, and private sector organizations with relevant experience. 
To describe the general guidance on the suspension and debarment process and how selected agencies have implemented the process, we examined the Federal Acquisition Regulation (FAR), Nonprocurement Common Rule (NCR), and the regulations and guidance of the 24 agencies that have issued supplements to the FAR governing suspension and debarment procedures. We analyzed documents and testimonial evidence at the 6 selected agencies to determine how each agency (a) used administrative agreements; (b) coordinated and shared suspension and debarment information; and (c) collected data to monitor the suspension and debarment process. To identify any needed improvements in the suspension and debarment process, we analyzed data from GSA’s EPLS as of November 18, 2004. This analysis included comparing the EPLS and FPDS databases to identify any suspended or debarred contractors that received a new contract during a period of suspension or debarment. We compared 44,634 records for excluded parties in EPLS with 1,006,919 contractors listed in FPDS at the end of fiscal year 2003, the latest year for which complete data were available at the time of our review. Because EPLS records do not require contractor identification numbers, we compared other identifiers, such as name and address, to determine whether a contract action in FPDS was for the issuance of a new contract during the period of exclusion. We also analyzed the data for the length of time parties are excluded and to determine the extent to which parties are excluded more than once. To assess the reliability of EPLS data we (1) performed electronic testing of the required data elements for obvious errors in accuracy and completeness, (2) reviewed related documentation, and (3) interviewed knowledgeable agency officials. 
We found the data to be insufficiently reliable for determining whether excluded contractors receive new contracts, for determining the termination dates of exclusions, or for performing simple analyses such as the average length of exclusions or the percentage of parties excluded more than one time. We also reviewed other areas for improvements, such as agencies’ internal data reporting and the role of the ISDC. We conducted our work from August 2004 through June 2005 in accordance with generally accepted government auditing standards. Statutory debarments, or exclusions, are based on statutory, executive order, or regulatory authority other than the FAR. The grounds and procedures for statutory debarments may be set forth in regulations issued by agencies, such as the Department of Labor and EPA, which have enforcement responsibilities but may not be the procuring agencies. The authorities for these statutory debarments use various terminology for exclusion, such as “ineligible,” “prohibited,” or “listing;” however, the terms all encompass sanctions precluding contract awards or involvement in a contract for a specific period of time. Table 4 lists the authorities identified in GSA’s EPLS as reasons for debarring individuals and contractors from receiving federal contracts. The FAR and NCR require agencies to establish a process for suspension and debarment. The organizational structure established to manage the process at the six agencies we reviewed is summarized in table 5. | Federal government purchases of contracted goods and services have grown to more than $300 billion annually. To protect the government's interests, the Federal Acquisition Regulation (FAR) provides that agencies can suspend or debar contractors for causes affecting present responsibility--such as serious failure to perform to the terms of a contract. The FAR provides flexibility to agencies in developing a suspension or debarment process. 
GAO was asked to (1) describe the general guidance on the suspension and debarment process and how selected agencies have implemented the process, and (2) identify any needed improvements in the suspension and debarment process. We examined the FAR and the regulations of 24 agencies that have FAR supplements governing suspension and debarment procedures. We selected 6 defense and civilian agencies representing about 67 percent of fiscal year 2003 federal contract spending for in-depth review. The FAR prescribes policies governing the circumstances under which contractors may be suspended or debarred, the standards of evidence that apply to exclusions, and the usual length of these exclusions. To implement these policies, 24 agencies developed supplements to the FAR. In fiscal year 2004, the 6 agencies we reviewed in depth suspended a total of 262 parties and debarred a total of 590 parties. Five agencies entered into a total of 38 administrative agreements, which permit contractors that meet certain agency-imposed requirements to remain eligible for new contracts. Agency officials said that such agreements can help improve contractor responsibility, ensure compliance through monitoring, and maintain competition. In certain circumstances, agencies can continue to do business with excluded contractors, such as when there is a compelling need for an excluded contractor's service or product. In fiscal year 2004, two of the agencies we reviewed in depth--the Air Force and the Army--issued compelling reason waivers to continue doing business with excluded parties. To help ensure excluded contractors do not unintentionally receive new contracts during the period of exclusion, the FAR requires contracting officers to consult the Excluded Parties List System (EPLS)--a governmentwide database on exclusions--and identify any competing contractors that have been suspended or debarred. However, the data in EPLS may be insufficient for this purpose.
For example, as of November 2004, about 99 percent of records in EPLS for the 6 agencies we reviewed in depth did not have contractor identification numbers--a unique identifier that enables agencies to conclude confidently whether a contractor has been excluded. In the absence of these numbers, agencies use the company's name to search EPLS, which may not identify an excluded contractor if the contractor's name has changed. Further, information on administrative agreements and compelling reason determinations is not routinely shared among agencies. Such information could help agencies in their exclusion decisions and promote greater transparency and accountability. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The management of used electronics presents a number of environmental and health concerns. EPA estimates that only 15 to 20 percent of used electronics (by weight) are collected for reuse and recycling, and that the remainder is primarily sent to U.S. landfills. While a survey conducted by the consumer electronics industry suggests that EPA’s data may underestimate the recycling rate, the industry survey confirms that the number of used electronics thrown away each year is in the tens of millions. As a result, valuable resources contained in electronics, including copper, gold, and aluminum, are lost for future use. Additionally, while modern landfills are designed to prevent leaking of toxic substances and contamination of groundwater, research shows that some types of electronics have the potential to leach toxic substances with known adverse health effects. Used electronics may also be exported for recycling or disposal. In August 2008, we reported that, while such exports can be handled responsibly in countries with effective regulatory regimes and by companies with advanced technologies, a substantial amount ends up in countries that lack the capacity to safely recycle and dispose of used electronics. We also have previously reported on the economic and other factors that inhibit recycling and reuse. For example, many recyclers charge fees because their costs exceed the revenue they receive from selling recycled commodities or refurbishing units. Household electronics, in particular, are typically older and more difficult to refurbish and resell, and, thus, may have less value than those from large institutions. In most states, it is easier and cheaper for consumers to dispose of household electronics at a local landfill. Moreover, as EPA and others have noted, the domestic infrastructure to recycle used electronics is limited, and the major markets for both recycled commodities and reusable equipment are overseas.
The United States does not have a comprehensive national approach for the reuse and recycling of used electronics, and previous efforts to establish a national approach have been unsuccessful. Under the National Electronics Product Stewardship Initiative, a key previous effort that was initially funded by EPA, stakeholders met between 2001 and 2004, in part to develop a financing system to facilitate reuse and recycling. Stakeholders included representatives of federal, state, and local governments; electronics manufacturers, retailers, and recyclers; and environmental organizations. Yet despite broad agreement in principle, stakeholders in the process did not reach agreement on a uniform, nationwide financing system. For example, they did not reach agreement on a uniform system that would address the unique issues related to televisions, which have longer life spans and cost more to recycle than computers. In the absence of a national approach, some states have since addressed the management of used electronics through legislation or other means, and other stakeholders are engaged in a variety of voluntary efforts. In the 9 years that have passed since stakeholders initiated the National Electronics Product Stewardship Initiative in an ultimately unsuccessful attempt to develop a national financing system to facilitate the reuse and recycling of used electronics, 23 states have enacted some form of electronics recycling legislation. For example, some of these state laws established an electronics collection and recycling program and a mechanism for funding the cost of recycling (see fig. 1). The state laws represent a range of options for financing the cost of recycling and also differ in other respects, such as the scope of electronic devices covered under the recycling programs, with televisions, laptop computers, and computer monitors frequently among the covered electronic devices. 
Similarly, while the state laws generally cover used electronics generated by households, some laws also cover used electronics generated by small businesses, charities, and other entities. Five of the states—California, Maine, Minnesota, Texas, and Washington—represent some of the key differences in financing mechanisms. California was early to enact legislation and is the only state to require that electronics retailers collect a recycling fee from consumers at the time of purchase of a new electronic product covered under the law. These fees are deposited into a fund managed by the state and used to pay for the collection and recycling of used electronics. In contrast, the other four states have enacted legislation making manufacturers selling products in their jurisdictions responsible for recycling or for some or all of the recycling costs. Such laws are based on the concept of “producer responsibility” but implement the concept in different ways. In Maine, state-approved consolidators of covered used electronics bill individual manufacturers, with the amount billed for particular electronics being based in part either on the manufacturer’s market share of products sold or on the share of used electronics collected under the state’s program. Under the Minnesota law, manufacturers either must meet recycling targets by arranging and paying for the collection and recycling of an amount in weight based on a percentage of their sales or must pay recycling fees. Texas requires that manufacturers establish convenient “take-back” programs for their own brands of equipment. Finally, the Washington law requires that manufacturers establish and fund collection services that meet certain criteria for convenience, as well as transportation and recycling services. Table 1 summarizes the key characteristics of the electronics recycling legislation in these five states.
As of June 2010, the remaining 27 states had not enacted legislation to establish electronics recycling programs. In some of these states, legislation concerning electronics recycling has been proposed, and some state legislatures have established commissions to study options for the management of used electronics. In addition, some of these states, as well as some of the states with recycling legislation, have banned certain used electronics, such as cathode-ray tubes (CRTs), from landfills. In states with no mechanism to finance the cost of recycling, some local governments that offer recycling bear the recycling costs and others charge fees to consumers. Also, some states have funded voluntary recycling efforts, such as collection events or related efforts organized by local governments. For example, Florida has provided grants to counties in the state to foster the development of an electronics recycling infrastructure. A variety of entities offer used electronics collection services, either for a fee or at no charge. Localities may organize collection events or collect used electronics at waste transfer stations. A number of electronics manufacturers and retailers support collection events and offer other services. For example, Best Buy offers free recycling of its own branded products and drop-off opportunities for other products at a charge that is offset by a store coupon of the same value; Dell and Goodwill Industries have established a partnership to provide free collection services at many Goodwill donation centers; and a number of electronics manufacturers collect used electronics through mail-back services offered to consumers. Some manufacturers and retailers also have made voluntary commitments to manage used electronics in an environmentally sound manner and to restrict exports of used electronics that they collect for recycling.
EPA has taken some notable steps to augment its enforcement of regulations on exports of CRTs for recycling, but the export of other used electronics remains largely unregulated. In addition, the effect of EPA’s partnership programs on the management of used electronics, although positive, is limited or uncertain. To encourage the recycling and reuse of used CRTs, EPA amended its hazardous waste regulations under the Resource Conservation and Recovery Act by establishing streamlined management requirements. If certain conditions are met, the regulations exclude CRTs from the definition of solid waste and thereby from the regulations that apply to the management of hazardous waste. The conditions include a requirement that exporters of used CRTs for recycling notify EPA of an intended export before the shipments are scheduled to leave the United States and obtain consent from the importing country. In contrast, exporters of used, intact CRTs for reuse (as opposed to recycling) may submit a one-time notification to EPA and are not required to obtain consent from the importing country. The export provisions of the CRT rule became effective in January 2007. We reported in August 2008 that some companies had appeared to have easily circumvented the CRT rule, and that EPA had done little to enforce it. In particular, we posed as foreign buyers of broken CRTs, and 43 U.S. companies expressed a willingness to export these items. Some of the companies, including ones that publicly touted their exemplary environmental practices, were willing to export CRTs in apparent violation of the CRT rule. Despite the apparently widespread potential for violations, EPA did not issue its first administrative penalty complaint against a company for potentially illegal shipments until the rule had been in effect for 1½ years, and that penalty came as a result of a problem we had identified. 
In response to our prior report, EPA officials acknowledged some instances of noncompliance with the CRT rule but stated that, given the rule’s relative newness, their focus was on educating the regulated community. Since our prior report’s issuance, however, EPA has initiated investigations and taken several enforcement actions against companies that have violated the notice-and-consent requirement for export of CRTs for recycling. For example, in December 2009, the agency issued an order seeking penalties of up to $37,500 per day against a company that failed to properly manage a shipment of waste CRTs. According to EPA, the company did not provide appropriate notice to the agency or to China, the receiving country, where customs authorities rejected the shipment. Similarly, in December 2009, EPA announced that two companies that failed to notify the agency or obtain written consent from China for a shipment of waste CRTs for recycling entered into agreements with EPA, with one company agreeing to pay a fine of over $21,000. Despite steps to strengthen enforcement of the CRT rule, issues related to CRT exports and to exports of other used electronics remain. First, as we reported in August 2008, exports of CRTs for reuse in developing countries have sometimes included broken units that are instead dumped. EPA’s CRT rule does not allow such exports and requires that exporters keep copies of normal business records, such as contracts, demonstrating that each shipment of exported CRTs will be reused. However, the rule does not require exporters to test used equipment to verify that it is functional. Moreover, according to EPA, the agency has focused its investigations under the CRT rule on companies that have failed to provide export notifications altogether.
In contrast, the agency has not yet conducted any follow-up on notifications of exports for reuse to protect against the dumping of nonworking CRTs in developing countries by ensuring that the CRTs companies are exporting are, in fact, suitable for reuse. Second, CRTs are the only electronic devices specifically regulated as hazardous waste under EPA’s Resource Conservation and Recovery Act regulations. Many other electronic devices, however, contain small amounts of toxic substances, and according to EPA, recent studies have shown that certain used electronics other than CRTs, such as some cell phones, sometimes exceed the act’s regulatory criteria for toxicity when evaluated using hazardous waste test protocols. Finally, because one of the purposes of the Resource Conservation and Recovery Act is to promote reuse and recovery, EPA’s rules under the act exclude used electronics and disassembled component parts that are exported for reuse from the definition of “solid waste” and, therefore, from hazardous waste export requirements, regardless of whether the used electronics exceed the toxicity characteristic regulatory criteria. EPA has worked with electronics manufacturers, retailers, recyclers, state governments, environmental groups, and other stakeholders to promote partnership programs that address the environmentally sound management of used electronics. In addition, EPA comanages a program to encourage federal agencies and facilities to purchase environmentally preferable electronics and manage used electronics in an environmentally sound manner. Key programs include the following: Responsible Recycling practices. EPA convened electronics manufacturers, recyclers, and other stakeholders and provided funding to develop the Responsible Recycling (R2) practices, with the intent that electronics recyclers could obtain certification that they are voluntarily adhering to environmental, worker health and safety, and security practices.
Certification to the R2 practices became available in late 2009. According to EPA officials, the R2 practices represent a significant accomplishment in that they provide a means for electronics recyclers to be recognized for voluntary commitments that, according to EPA, go beyond what the agency is able to legally require. The R2 practices identify “focus materials” in used electronics, such as CRTs or items containing mercury, that warrant greater care due to their toxicity or other potential adverse health or environmental effects when managed without the appropriate safeguards. The practices specify that recyclers (and each vendor in the recycling chain) export equipment and components containing focus materials only to countries that legally accept them. The practices also specify that recyclers document the legality of such exports. Upon request by exporters, EPA has agreed to help obtain documentation from foreign governments regarding whether focus materials can be legally imported into their country. Plug-In To eCycling. To promote opportunities for individuals to donate or recycle their used consumer electronics, EPA began to partner with electronics manufacturers, retailers, and mobile service providers in 2003. Under the Plug-In To eCycling program, partners commit to ensuring that the electronics refurbishers and recyclers they use follow guidelines developed by EPA for the protection of human health and the environment. Among other things, the current guidelines call for minimizing incineration and landfill disposal and for ensuring that exports comply with requirements in importing countries. According to EPA, Plug-In To eCycling partners have collected and recycled steadily increasing quantities of used electronics, and some partners have expanded the collection opportunities they offer to consumers (e.g., from occasional events to permanent locations). Electronic Product Environmental Assessment Tool. 
Developed under a grant from EPA and launched in 2006, the Electronic Product Environmental Assessment Tool (EPEAT) helps purchasers select and compare computers and monitors on the basis of their environmental attributes. EPEAT evaluates electronic products against a set of required and optional criteria in a number of categories, including end-of-life management. To qualify for registration under EPEAT, the sale of all covered products to institutions must include the option to purchase a take-back or recycling service that meets EPA’s Plug-In To eCycling recycling guidelines. Auditing of recycling services against the guidelines is an optional criterion. Currently, EPA is participating with other stakeholders in the development of additional standards covering televisions and imaging equipment, such as copiers and printers. Federal Electronics Challenge. To promote the responsible management of electronic products in the federal government, EPA comanages the Federal Electronics Challenge, a program to encourage federal agencies and facilities to purchase environmentally preferable electronic equipment, operate the equipment in an energy-efficient way, and manage used electronics in an environmentally sound manner. According to EPA, partners reported in 2009 that 96 percent of the computer desktops, laptops, and monitors they purchased or leased were EPEAT-registered, and that 83 percent of the electronics they took out of service were reused or recycled. One of the national goals of the Federal Electronics Challenge for 2010 is that 95 percent of the eligible electronic equipment purchased or leased by partnering agencies and facilities be registered under EPEAT. Another goal is that 100 percent of the non-reusable electronic equipment disposed of by partners be recycled using environmentally sound management practices. 
While EPA and other stakeholders have contributed to progress in the partnership programs, the impact of the programs on the management of used electronics is limited or uncertain. For example, the Plug-In To eCycling program does not (1) include a mechanism to verify that partners adhere to their commitment to manage used electronics in accordance with EPA’s guidelines for the protection of human health and the environment or (2) confirm the quantity of used electronics collected under the program. In addition, because the development of electronics purchasing and recycling standards is ongoing or only recently completed, it is too soon to determine how the standards will affect the management of used electronics collected from consumers. EPA officials told us that the agency lacks the authority to require electronics recyclers to adhere to the R2 practices, since most electronics are not hazardous waste under Resource Conservation and Recovery Act regulations. EPA participated in the development of the practices through a process open to a range of stakeholders concerned with the management of used electronics. Two environmental groups that participated in the process withdrew their support because the R2 practices failed to address their concerns (e.g., about the export of used electronics). As a result, one of the groups, the Basel Action Network, spearheaded the development of another standard (i.e., e-Stewards®) under which electronics recyclers may be certified on a voluntary basis. EPA is currently considering whether and how to reference such recycler certification standards in other programs, such as Plug-In To eCycling. Furthermore, EPEAT currently focuses on electronic products sold to institutions but not to individual consumers. In particular, the requirement that manufacturers of EPEAT-registered computers and monitors offer a take-back or recycling service to institutional purchasers does not currently apply to sales to individual consumers. 
According to an EPA official participating in development of the standards, EPA and other stakeholders plan to begin work in 2010 on expanding the standard for computer equipment into the consumer marketplace, and stakeholders are still discussing whether the new EPEAT standards for imaging equipment and televisions, which will cover electronics sold to individual consumers, will include a required or optional criterion for take back of such electronics. In October 2009, we reported that an increasing number of federal agencies and facilities have joined the Federal Electronics Challenge, but we also identified opportunities for higher levels of participation and noted that agencies and facilities that participate do not maximize the environmental benefits that can be achieved. We reported, for example, that agencies and facilities representing almost two-thirds of the federal workforce were not program partners, and that only two partners had reported to EPA that they managed electronic products in accordance with the goals for all three life-cycle phases—procurement, operation, and disposal. We concluded that the federal government, which purchases billions of dollars’ worth of information technology equipment and services annually, has the opportunity to leverage its substantial market power to enhance recycling infrastructures and stimulate markets for environmentally preferable electronic products by broadening and deepening agency and facility participation in the Federal Electronics Challenge. However, EPA has not systematically analyzed the agency’s partnership programs, such as the Federal Electronics Challenge, to determine whether the impact of each program could be augmented. To varying degrees, the entities regulated under the state electronics recycling laws—electronics manufacturers, retailers, and recyclers—consider the increasing number of laws to be a compliance burden.
In contrast, in the five states we visited, state and local solid waste management officials expressed varying levels of satisfaction with individual state recycling programs, which they attributed more to the design and implementation of the programs than to any burden caused by the state-by-state approach. (See app. II for a description of key elements of the electronics recycling programs in the five states.) Electronics manufacturers, retailers, and recyclers described various ways in which they are affected by the current state-by-state approach toward the management of used electronics, with manufacturers expressing the greatest concern about the lack of uniformity. The scope of manufacturers regulated under state electronics recycling laws, as well as how states define “manufacturer,” varies by state. The laws apply both to multinational corporations and to small companies whose products may not be sold in every state and, depending on the law, to manufacturers of both information technology equipment and televisions. In some states, such as Maine and Washington, the number of regulated manufacturers is over 100. Because most state electronics recycling laws are based on the producer responsibility model, these laws, by design, assign manufacturers significant responsibility for financing and, in some states, for arranging the collection and recycling of used electronics. As a result, the two electronics manufacturer associations we interviewed, as well as eight of the nine individual manufacturers, told us that the state-by-state approach represents a significant compliance burden. The individual manufacturer that did not consider the state-by-state approach to be a significant burden explained that the company is not currently manufacturing covered electronic devices (specifically televisions) and, therefore, does not have responsibilities under most of the state laws.
Depending on the specific provisions of state laws, examples of the duplicative requirements that individual manufacturers described as burdensome included paying annual registration fees to multiple state governments, submitting multiple reports to state environmental agencies, reviewing and paying invoices submitted by multiple recyclers, and conducting legal analyses of state laws to determine the responsibilities placed on manufacturers. A representative of a manufacturer of information technology equipment said that, due to the time needed to ensure compliance with differing state laws, the company cannot spend time on related activities, such as finding ways to reduce the cost of complying with the state laws or ensuring that electronics are recycled in an environmentally sound manner. Representatives of one manufacturer noted that even states with similar versions of producer responsibility legislation differ in terms of specific requirements, such as the scope of covered electronic devices, registration and reporting deadlines, and the types of information to be submitted. As a result, they said that they needed to conduct separate compliance efforts for each state, rather than implement a single compliance program. A few manufacturers also told us that their current compliance costs are in the millions of dollars and are increasing as more states enact electronics recycling legislation. For example, a Sony representative said that he expects the amount the company spends in 2010 to comply with the requirements in states with producer responsibility laws to increase almost sevenfold over the amount spent in 2008. 
While the producer responsibility model is based on the assumption that manufacturers pass along the cost of recycling to consumers in the form of higher prices, the electronics manufacturer associations, as well as individual manufacturers, described inefficiencies and higher costs created by the state-by-state approach that they said could be reduced through a uniform national approach. For example, the Consumer Electronics Association cited a 2006 report, which the association helped fund, on the costs that could be avoided under a hypothetical, single national approach. The report estimated that, with 20 different state programs, manufacturers would spend an additional $41 million each year, and that the total additional annual costs among all stakeholders—including manufacturers, retailers, recyclers, and state governments—would be about $125 million. Both the Consumer Electronics Association, most of whose members the association considers to be small electronics manufacturers, and the Information Technology Industry Council, which represents large manufacturers, told us that some provisions of state laws—such as registration fees that do not take into account the number of covered electronic devices sold in a state—can create a disproportionate burden on small manufacturers. For example, Maine’s law imposes a $3,000 annual registration fee on all manufacturers, regardless of size or sales volume. One small manufacturer told us that Maryland’s initial registration fee of $10,000 exceeded the company’s $200 profits from sales in the state. The manufacturer said that, if all 50 states imposed such fees, the company would not remain in business. Similarly, the need to analyze differing requirements in each state law requires staff resources that small manufacturers, unlike their larger counterparts, may lack.
Despite the costs of complying with state electronics recycling legislation, representatives of the two electronics manufacturer associations we interviewed, as well as most of the individual manufacturers, told us that state laws based on the producer responsibility model have not led to the design of electronic products that are less toxic and more recyclable, which some states cite as one of the purposes for making manufacturers responsible for the management of used electronics. Manufacturers cited the following reasons for the lack of an impact on product design: the inability of manufacturers to anticipate how recycling practices and technologies may develop over time and incorporate those developments into the design of products that may be discarded only after years of use; some producer responsibility laws, such as in Minnesota and Washington, making individual manufacturers responsible for recycling not their own products but a general category of devices, including those designed by other manufacturers; and the greater impact of other factors on product design, such as consumer demand and the use by institutional purchasers of EPEAT to select and compare electronic devices on the basis of their environmental attributes. Retailers generally affected by state electronics recycling laws include national chains as well as small electronics shops. Some retailers, such as Best Buy, sell their own brand of covered electronic devices and are also classified as manufacturers under certain states’ laws. As an example of the number of retailers covered under the laws, information from the state of California indicates that over 15,000 retailers have registered to collect the state’s recycling fee, and state officials estimated that large retailers collect 80 percent of the revenues. 
While the requirements imposed by state electronics recycling legislation on retailers typically are less extensive than the requirements pertaining to manufacturers, representatives of national and state retail associations we interviewed, as well as individual electronics retailers, described ways that the state-by-state approach creates a compliance burden. For example, according to the Consumer Electronics Retailers Coalition, certain state requirements, such as prohibitions on selling the products of electronics manufacturers that have not complied with a state’s law, are difficult for large retailers to implement since they do not use state-specific networks for distributing products to their stores. Rather, electronic products are developed, marketed, and sold on a national and even global basis. Similarly, representatives of the Consumer Electronics Retailers Coalition, as well as the majority of individual retailers and state retail associations in the five states we visited, told us that state “point-of-sale” requirements to collect a fee (in California) or distribute information on recycling when consumers purchase an electronic product represent a burden (e.g., many retailers operate their point-of-sale systems out of a centralized location yet are required to meet differing requirements in each state). Some retailers also expressed concern that states have difficulty in enforcing requirements on Internet retailers and, as a result, that businesses with a physical presence in the state are disadvantaged. This point is supported by the Maine Department of Environmental Protection, which has indicated that the department lacks sufficient staff to ensure that retailers that sell exclusively on the Internet comply with the sales ban on products from noncompliant manufacturers. Retailers also expressed concerns over specific provisions of individual state laws.
For example, representatives of the California Retailers Association said their members consider the state’s requirement to collect a recycling fee at the point of sale and remit the fee to the state to be particularly burdensome, even though the law allows retailers to retain 3 percent of the fee as reimbursement for their costs. One retailer explained that collecting the fee also generates resentment against the retailer among customers who are unaware of the state’s recycling law. Similarly, according to the Minnesota Retailers Association, retailers found it challenging to gather and report accurate sales data required to calculate manufacturer recycling targets under the state’s law. In response to concerns over collecting and reporting sales data, Minnesota amended its law to eliminate this requirement and to use national sales data instead. Retailers that sell their own brand of covered electronic devices and are classified as manufacturers under a particular state’s law must meet the requirements imposed on both types of entity. Similarly, Best Buy and other retailers that offer customers a take-back service for used electronics are considered authorized collectors under some state programs and, as a result, are subject to additional registration and reporting requirements. Best Buy officials told us they face unique challenges under the state-by-state approach because they participate in programs as a retailer; a manufacturer; and, in some cases, a collector. For example, the officials cited 47 annual reporting and registration deadlines to comply with requirements imposed on manufacturers, 19 annual reporting or review dates associated with retailer requirements, and 6 annual reporting or registration dates associated with collector requirements. Electronics recyclers range from large multinational corporations to small entities with a location in one state and encompass a range of business models.
For example, some recyclers focus on “asset disposition”—that is, providing data destruction and computer refurbishment services to businesses and large institutions—and other recyclers focus on recovering valuable commodities, such as precious metals. The use of “downstream” vendors to process various components separated from electronics is common, and many of the downstream entities, such as those that recycle glass from CRTs, are located overseas. Numerous nonprofit organizations refurbish used computers for use by schools, low-income families, and other nonprofit organizations both in the United States and overseas. The degree to which the recyclers we interviewed expressed concerns about the state-by-state approach varied. While state laws have established differing registration, reporting, and record-keeping requirements for recyclers and, where specified, different methods of payment for the cost of recycling or collection, some recyclers said they are not generally impacted by such differences (e.g., they operate in only one state with electronics recycling legislation or they can cope with differing state requirements for environmentally sound management by adhering to the most stringent requirements). One recycler even pointed out that the existence of various state laws can create business opportunities. In particular, rather than attempt to develop their own programs to comply with differing state requirements, manufacturers may decide to contract with recyclers that may have greater familiarity with the provisions of different laws. In contrast, other recyclers expressed concern over the burden of meeting the requirements of differing state laws. 
Due to the differences among state laws and the programs as implemented, these recyclers may have to carry out different tasks in each state to be reimbursed, such as counting and sorting covered electronic devices by brand and invoicing individual manufacturers; marketing and selling the amount of used electronics they have processed to manufacturers that must meet recycling targets; and, in California, submitting recycling payment claims to the state government. One recycler told us that the differences among state laws create a disincentive for establishing operations in other states, while another mentioned how small variations among state laws can significantly affect a recycler’s capacity to do business in a state. Another recycler added that the state-by-state approach hinders the processing of large volumes of used electronics from households and the ability to generate economies of scale that would reduce recycling costs. Almost all of the electronics recyclers we interviewed, including those in each of the five states we studied in detail, told us that they are concerned about the ability of irresponsible recyclers to easily enter and undercut the market by charging low prices without processing the material in an environmentally sound manner. While such undercutting might persist even under a national approach to managing used electronics, the recyclers identified a number of factors in the state-by-state approach that magnify the problem, including their perception of a lack of enforcement by state environmental agencies. In addition, according to recyclers in California and Washington, some recyclers export—rather than domestically recycle—electronic devices not covered under the state laws, which is less costly and thereby gives them a competitive advantage over recyclers that do not engage in exports, even where legal. 
Some recyclers and refurbishers of used electronics told us that state laws foster recycling at the expense of reuse, even though refurbishment and reuse are viewed by EPA as more environmentally friendly than recycling. Specifically, according to these stakeholders, some state programs focus on collecting and recycling used electronics but not refurbishing them, thereby creating a financial incentive to recycle used electronics that could otherwise be refurbished and reused. For example, in Minnesota, only the amount in weight of collected used electronics that is recycled counts toward manufacturers’ performance targets. According to one refurbisher in the state, this provision leads to the recycling of equipment that is in working condition and reusable. Similarly, California pays for the cost of collecting and recycling used electronics but not for refurbishment. In contrast, according to a Texas affiliate of Goodwill Industries that recycles and refurbishes used electronics, the state’s law promotes reuse of used electronics. For example, by requiring that manufacturers establish take-back programs but not setting recycling targets, the Texas law avoids creating an incentive to recycle used electronics that can be refurbished. In the five states that we selected for detailed review, state and local government officials expressed varying levels of satisfaction with their electronics recycling laws. In addition, while some state and local governments had participated in the National Electronics Product Stewardship Initiative in an attempt to develop a national financing system for electronics reuse and recycling, the state and local officials we interviewed generally said that the state-by-state approach had not hindered the successful implementation of electronics recycling programs in their jurisdictions.
Rather, they attributed their level of satisfaction to the design of the programs, such as the degree to which the programs provide a financing source for collecting and recycling used electronics and the effectiveness of efforts to educate consumers. None of the five states had statewide data on collection rates prior to implementation of the electronics recycling programs to quantify the impact of the laws, but state and local officials provided a variety of anecdotal information to illustrate the laws’ impact, such as collection rates in local communities and trends in the dumping of used electronics on roadsides and other areas. Moreover, the experiences described by state and local officials in the five states illustrate that both general financing models—producer responsibility and a recycling fee paid by consumers—are viable and have the potential to ensure convenient collection opportunities. Local solid waste management officials in the five states we visited expressed varying levels of satisfaction with state electronics recycling legislation in terms of reducing their burden of managing used electronics. On one hand, local officials in Washington told us that the state’s law requiring that manufacturers establish a convenient collection network for the recycling of used electronics has been successful in increasing collection opportunities and relieving local governments of recycling costs. Similarly, local officials in California said the state’s use of a recycling fee for reimbursing collection and recycling costs had relieved their governments of the burden of managing used electronics by making it profitable for the private sector to provide collection and recycling services. 
On the other hand, according to local solid waste management officials in Texas, the lack of specific criteria in the provision of the state’s law requiring that manufacturers collect their own brands of used computer equipment limited the law’s impact on increasing the convenience of collection opportunities. In addition, the officials said the state government had not done enough to educate residents about the law. As a result, they said that local governments were still bearing the burden of managing used computer equipment. State and local solid waste management officials we interviewed from three states without electronics recycling legislation also expressed varying levels of satisfaction with their voluntary efforts to promote recycling under the state-by-state approach to managing used electronics. For example, a county hazardous waste coordinator in Florida said the county used funding from the state to establish an electronics recycling program that is self-sustaining and free to households, but he also said that the state-by-state approach is cumbersome. Similarly, Florida state officials said that every state county has recycling opportunities, although collection could be more convenient. A representative of the Association of State and Territorial Solid Waste Management Officials said that, without a mechanism to finance the cost of recycling used electronics, local governments that provide recycling opportunities may be bearing the cost of providing such services, which can impose a financial burden on communities. In addition, while most of the state and local officials we interviewed from states without legislation said that the state-by-state approach does not represent a burden, Arizona state officials pointed out an increased burden of ensuring the environmentally sound management of used electronics collected in a neighboring state (California) and shipped to their state, since California has an electronic waste law, but Arizona does not. 
While state environmental officials we interviewed agreed that the burden of the state-by-state approach falls primarily on the regulated industries, they also acknowledged a number of aspects of the state-by-state approach that limit or complicate their own efforts, including the following: The need to ensure that state programs do not pay for the recycling of used electronics from out of state. In California, where the state reimburses recyclers $0.39 per pound for the cost of collecting and recycling covered electronic devices, state environmental officials said that they have regularly denied 2 to 5 percent of the claims submitted by recyclers due to problems with documentation, and that some portion of the denied claims likely represents fraudulent claims for the recycling of used electronics collected from other states. To prevent the recycling fee paid by consumers in the state from being used to finance the cost of recycling used electronics from other states, California requires that collectors of used electronics (other than local governments or their agents) maintain a log that includes the name and address of persons who discard covered electronic devices, and the state checks the logs to ensure that it pays only for the recycling of devices generated within the state. California state officials responsible for implementing the electronics recycling legislation said that the time spent on ensuring this requirement is met is a significant contributor to their workload. State and local government officials in other states we visited also acknowledged the potential for their programs to finance the recycling of used electronics collected from out of state, but these officials did not consider the problem to be widespread or difficult to address. For example, a Maine official said that, as a standard practice, waste collection facilities in the state check the residency of individuals, including when the facilities collect used electronics for recycling. 
Ability to ensure compliance with state requirements for environmentally sound management. State environmental officials in the five states we visited described varying levels of oversight to ensure the environmentally sound management of used electronics collected under their programs. For example, California conducts annual inspections of recyclers approved under the state program. Among other things, the state’s inspection checklist covers the packaging and labeling of electronic devices, the training of personnel on how to handle waste, the tracking of waste shipments, and the procedures and protective equipment needed to manage the hazards associated with the treatment of electronic devices. In contrast, citing limited resources, officials in Minnesota said they rely on spot checks of large recyclers, and officials in Texas said they have prioritized regular, scheduled enforcement of other environmental regulations over the requirements adopted by the state for the recycling of electronics. Even in California, state officials said that their ability to ensure the environmentally sound management of waste shipped out of state is limited because, while covered devices must be dismantled in California to be eligible for a claim within the state’s payment system, residuals from the in-state dismantling and treatment of covered devices may be shipped out of state. Intact but noncovered electronic devices are not subject to the California program and hence may also be shipped out of state. The problem is exacerbated because many of the “downstream” vendors used to process materials separated from electronics are located overseas, which further limits the ability of state officials to ensure that recyclers are conducting due diligence on downstream vendors and that the materials are being managed in an environmentally sound manner. (See app. 
II for additional information on the requirements for environmentally sound management in the five states we studied in detail.) In each of the five states we visited, state environmental nonprofit organizations either advocated for the enactment of state electronics recycling legislation or have been active in tracking the implementation of the laws. In addition, a number of groups advocate on issues related to the management of used electronics on a national or international basis. For example, the Electronics TakeBack Coalition, which includes a number of nonprofit organizations, advocates for producer responsibility as a policy for promoting responsible recycling in the electronics industry, and the Basel Action Network works in opposition to exports of toxic wastes to developing countries. Like state and local government officials in the five states we visited, state environmental groups we interviewed described the design of the state recycling programs, rather than the state-by-state approach, as the primary factor in the success of the programs. Representatives of the state environmental groups in four of the five states—California, Maine, Minnesota, and Washington—said that they considered their state program to have been successful in providing convenient collection opportunities and in increasing the collection rates of used electronics. For example, citing a 2007 survey of Maine municipalities, a representative of the Natural Resources Council of Maine said that the collection opportunities under the state program are more convenient than anticipated, although convenience could be improved for some state residents. Similarly, a representative of Californians Against Waste said that the state’s recycling fee had resulted in convenient collection opportunities and in steadily increasing collection rates, and that a recycling fee paid by consumers is no less effective than the producer responsibility model in promoting the collection of used electronics.
In contrast, echoing the results of a 2009 survey conducted by the organization, a Texas Campaign for the Environment representative said that the state’s law had not had a significant impact on the collection and recycling of used electronics, because both consumers and local solid waste management officials are unaware of the opportunities manufacturers are to provide under the law for the free collection and recycling of electronics discarded by households. In addition, the organization is critical of the fact that the Texas law does not cover televisions, and that the governor vetoed a bill that would have made television manufacturers responsible for recycling, including costs. Some environmental groups pointed out that, in and of itself, the ability of a state program to improve collection rates does not necessarily ensure that used electronics will be recycled in an environmentally sound manner. Key issues raised by environmental groups as complicating the effectiveness of state programs included a lack of adequate requirements for the environmentally sound management of used electronics or requirements that differ among states, limited state resources or oversight to ensure compliance with the requirements, and a lack of authority to address concerns about exports. For example, a representative of the Basel Action Network said that provisions in state laws regarding exports, such as those in California, could be challenged on constitutional grounds since the Constitution generally gives the federal government the authority to regulate commerce with foreign nations, thereby limiting states’ authorities to do so. Options to further promote the environmentally sound management of used electronics involve a number of basic policy considerations and encompass many variations. 
For the purposes of this report, we examined two endpoints on the spectrum of variations: (1) a continued reliance on state recycling programs supplemented by EPA’s partnership programs and (2) the establishment of federal standards for state electronics recycling programs. Further federal regulation of electronic waste exports is a potential component of either of these two approaches. Under a national approach for managing used electronics on the basis of a continuation of the current state-by-state approach, EPA’s partnership programs, such as Plug-In To eCycling, would supplement state efforts. Most used electronics would continue to be managed as solid waste under the Resource Conservation and Recovery Act, with a limited federal role. For example, beyond its establishment of minimum standards for solid waste landfills, EPA is authorized to provide technical assistance to state and local governments for the development of solid waste management plans and to develop suggested guidelines for solid waste management. EPA’s partnership programs can supplement state recycling efforts in a variety of ways. For example, Minnesota state environmental officials told us that they hope to incorporate the R2 practices into the state’s standards for the environmentally sound management of used electronics. However, as we have previously noted, the impact of EPA’s promotion of partnership programs on the management of used electronics is limited or uncertain. Moreover, EPA does not have a plan for coordinating its efforts with state electronics recycling programs or for articulating how EPA’s partnership programs, taken together, can best assist stakeholders to achieve the environmentally sound management of used electronics. For example, while partnership programs such as Plug-In To eCycling can complement state programs, EPA does not have a plan for leveraging such programs to promote recycling opportunities in states without electronics recycling legislation.
Among the key implications of a continuation of the state-by-state approach are a greater flexibility for states and a continuation of a patchwork of state recycling efforts, including some states with no electronics recycling requirements. Greater flexibility for states. This approach provides states with the greatest degree of flexibility to engage in recycling efforts that suit their particular needs and circumstances, whether through legislation or other mechanisms, such as grants for local communities. For example, according to local solid waste management officials in Texas, which has enacted electronics recycling legislation, the state has not banned the disposal of electronics in landfills, and the officials pointed to factors, such as the state’s landfill capacity, that would work against a landfill ban. In contrast, New Hampshire, which has limited landfill capacity, has banned the disposal of certain electronics in landfills but has not enacted a law that finances the recycling of used electronics. The state’s solid waste management official told us that the state’s approach had been successful in diverting a large amount of used electronics from disposal in landfills, and an official with the state’s municipal association told us that residents of the state accept that they must pay fees to cover the cost of waste disposal services, including electronics recycling. A state-by-state approach also allows for innovations among states in enacting electronics recycling legislation. For example, a representative of the Electronics TakeBack Coalition told us that state electronics recycling legislation can be effective in providing convenient collection opportunities and in increasing collection and recycling rates, but that more time is needed to be able to assess the impact of the state programs. 
The representative described the state programs as laboratories for testing variations in the models on which the programs are based, such as the use of recycling targets in the producer responsibility model, and for allowing the most effective variations to be identified. A continuation of the patchwork of state recycling efforts. While the state-by-state approach may provide states with greater regulatory flexibility, it does not address the concerns of manufacturers and other stakeholders who consider the state-by-state approach to be a significant compliance burden. The compliance burden may actually worsen as more states enact laws (e.g., the number of registration and reporting requirements imposed on manufacturers may increase). One manufacturer pointed out that, while some states have modeled their laws on those in other states, even in such cases, states may make changes to the model in ways that limit any efficiency from the similarities among multiple laws. In addition to creating a compliance burden, the state-by-state approach does not ensure a baseline in terms of promoting the environmentally sound reuse and recycling of used electronics, not only in states without electronics recycling legislation but also in states with legislation. For example, unlike some other state electronics recycling legislation, the Texas law does not require manufacturers to finance the recycling of televisions, which may particularly need a financing incentive, since the cost of managing the leaded glass from televisions with CRTs may exceed the value of materials recycled from used equipment. Furthermore, the requirements for the environmentally sound management of used electronics vary among states, and state environmental agencies engage in varying levels of oversight to ensure environmentally sound management.
For example, according to the state solid waste management official in New Hampshire, budget constraints prevent the state from being able to track what happens to used electronics after they are collected. Various stakeholder efforts are under way to help coordinate state programs and relieve the compliance burden, although some stakeholders have pointed to limitations of such efforts. In particular, in January 2010, a number of state environmental agencies and electronics manufacturers, retailers, and recyclers helped form the Electronics Recycling Coordination Clearinghouse, a forum for coordination and information exchange among the state and local agencies that are implementing electronics recycling laws and all impacted stakeholders. Examples of activities planned under the clearinghouse include collecting and maintaining data on collection volumes and creating a centralized location for receiving and processing manufacturer registrations and reports required under state laws. Other examples of stakeholder efforts to ease the compliance burden include the formation of the Electronic Manufacturers Recycling Management Company, a consortium of manufacturers that collaborate to develop recycling programs in states with electronics recycling legislation. In addition, individual states have made changes to their recycling programs’ legislation after identifying provisions in their laws that created unnecessary burdens for particular stakeholders. For example, Minnesota amended its law to remove the requirement that retailers annually report to each manufacturer the number of the manufacturer’s covered electronic devices sold to households in the state—a requirement that retailers found particularly burdensome. A number of stakeholders, however, including members of the Electronics Recycling Coordination Clearinghouse, have pointed to limitations of stakeholder efforts to coordinate state electronics recycling programs. 
According to representatives of the Consumer Electronics Association, concerns over federal antitrust provisions limit cooperation among manufacturers for the purpose of facilitating compliance with the state laws. For example, cooperation among manufacturers trying to minimize the cost of compliance would raise concerns among electronics recyclers about price-fixing. Similarly, the executive director of the National Center for Electronics Recycling, which manages the Electronics Recycling Coordination Clearinghouse, told us states are unlikely to make changes to harmonize basic elements of state laws that currently differ, such as the scope of covered electronic devices and the definitions of terms such as “manufacturer.” Under a national strategy based on the establishment of federal standards for state electronics recycling programs, federal legislation would be required. For the purpose of analysis, we assumed that the legislation would establish federal standards and provide for their implementation— for example, through a cooperative federalism approach whereby states could opt to assume responsibility for the standards or leave implementation to EPA, through incentives for states to develop complying programs, or through a combination of these options. Within this alternative, there are many issues that would need to be addressed. A primary issue of concern to many stakeholders is the degree to which the federal government would (1) establish minimum standards, allowing states to adopt stricter standards (thereby providing states with flexibility but also potentially increasing the compliance burden from the standpoint of regulated entities), or (2) establish fixed standards. 
Further issues include whether federal standards would focus on the elements of state electronics recycling laws that are potentially less controversial and have a likelihood of achieving efficiencies—such as data collection and manufacturer reporting and registration—or would focus on all of the elements, building on lessons learned from the various states. An overriding issue of concern to many stakeholders is the degree to which federal standards would be established as minimum standards, fixed standards, or some combination of the two. In this context, we have assumed that either minimum or fixed standards would, by definition, preempt less stringent state laws and lead to the establishment of programs in states that have not enacted electronics recycling legislation. Minimum standards would be intended to ensure that programs in every state met baseline requirements established by the federal government, while allowing flexibility to states that have enacted legislation meeting the minimum standards to continue with existing programs, some of which are well-established. In contrast, under fixed federal standards, states would not be able to establish standards either stricter or more lenient than the federal standards. Thus, fixed standards would offer relatively little flexibility, although states would still have regulatory authority in areas not covered by the federal standards. As we have previously reported, minimum standards are often designed to provide a baseline in areas such as environmental protection, vehicle safety, and working conditions. For example, a national approach based on minimum standards would be consistent with the authority given to EPA to regulate hazardous waste management under the Resource Conservation and Recovery Act, which allows for state requirements that are more stringent than those imposed by EPA. 
Such a strategy can be an option when the national objective requires that common minimum standards be in place in every state, but stricter state standards are workable. Conversely, fixed standards are an option when stricter state standards are not workable. For example, to provide national uniformity and thereby facilitate the increased collection and recycling of certain batteries, the Mercury-Containing and Rechargeable Battery Management Act does not allow states the option of establishing more stringent regulations regarding collection, storage, and transportation, although states can adopt and enforce standards for the recycling and disposal of such batteries that are more stringent than existing federal standards under the Resource Conservation and Recovery Act. Most manufacturers we interviewed told us they prefer fixed federal standards over minimum standards. For example, these manufacturers are concerned that many states would opt to exceed the minimum federal standards, leaving manufacturers responsible for complying with differing requirements, not only in the states that have electronics recycling legislation but also in the states currently without legislation. In contrast, most state government officials and environmental groups we interviewed told us that they would prefer minimum federal standards over fixed federal standards as a national approach for the management of used electronics. In addition, a representative of the National Conference of State Legislatures told us that the organization generally opposes federal preemption but accepts that in the area of environmental policy, the federal government often sets minimum standards. According to the representative, even if federal requirements were of a high standard, states may want the option to impose tougher standards if the need arises. 
Similarly, some legislative and executive branch officials in states with electronics recycling legislation expressed concern that any federal standards for electronics recycling would be weak. As a result, the officials said they want to preserve the ability of states to impose more stringent requirements. To help address manufacturer concerns about a continuation of the state-by-state approach under minimum standards, the federal government could encourage states not to exceed those standards. For example, establishing minimum standards that are relatively stringent might reduce the incentive for states to enact or maintain stricter requirements. Consistent with this view, some of the state electronics recycling laws, including those in four of the five states we studied in detail, contain provisions for discontinuing the state program if a federal law takes effect that meets specified conditions (e.g., establishing an equivalent national program). Based on our review of state electronics recycling legislation and discussions with stakeholders regarding a national strategy for the management of used electronics, we identified a range of issues that would need to be considered and could be addressed as part of the establishment of federal standards for state electronics recycling programs, including the following issues: The financing of recycling costs. A potential element in federal standards for state electronics recycling programs would be a mechanism for financing the cost of recycling. For example, representatives of the Consumer Electronics Association told us they support a national approach with a single financing mechanism. Similarly, the California and Washington laws stipulate that their programs be discontinued if a federal law takes effect that establishes a national program, but only if the federal law provides a financing mechanism for the collection and recycling of all electronic devices covered under their respective laws.
While there are differences among their views, most stakeholders we interviewed, including some manufacturers, said they would prefer that any federal standards be based on some form of the producer responsibility model rather than on a recycling fee paid by consumers because, for example, they consider the producer responsibility model more efficient to implement than a fee system, which requires resources to collect the fee from consumers and reimburse recyclers. Even California state government officials, who were generally pleased with what has been accomplished under the state’s recycling fee and payment model, expressed openness to the producer responsibility model. The level of support for producer responsibility represents a shift in the views of some manufacturers. In particular, representatives of the Information Technology Industry Council told us that television manufacturers previously supported a recycling fee paid by consumers because of the frequent turnover of television manufacturers and the problem of assigning recycling costs for legacy equipment whose original manufacturer is no longer in business, no longer makes televisions, or otherwise cannot be determined. According to the council, with only one state having enacted legislation based on a recycling fee, television and other manufacturers now support the producer responsibility model. The allocation of costs and responsibilities among stakeholders. Even under a producer responsibility model, stakeholders other than manufacturers would participate in the implementation of state electronics recycling legislation, and the costs of collecting and recycling used electronics can be assigned in different ways. 
For example, while they support the producer responsibility model, Information Technology Industry Council representatives have proposed that the model be based on “shared responsibility,” whereby various entities that profit from the sale of electronic devices—including electronics distributors, retailers, and other stakeholders—all contribute to the cost of collection and recycling. In a variation of the concept of shared responsibility, under Maine’s electronics recycling legislation participating local governments generally bear collection costs and manufacturers finance recycling costs. The way in which costs and responsibilities are allocated can also create inequities from the standpoint of certain stakeholders. For example, certain manufacturers may pay more or less than others depending on whether recycling costs are based on the weight of a manufacturer’s own brand of electronics collected for recycling (return share) or on the amount of a manufacturer’s new products sold (market share). Under a return share system, long-standing manufacturers bear a greater proportion of the costs in comparison with newer manufacturers with fewer used electronics in the waste stream. In contrast, a market share system can result in newer manufacturers with a large market share financing the recycling of products produced by their competitors. The division of federal and state responsibilities for implementation and enforcement. Federal standards can be implemented directly by a federal agency, by the states with some degree of federal oversight, or through state implementation in some states and direct federal implementation in others. For example, EPA develops hazardous waste regulations under the Resource Conservation and Recovery Act and has encouraged states to assume primary responsibility for implementation and enforcement through state adoption of the regulations, while EPA retains independent enforcement authority. 
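The return-share and market-share cost-allocation methods described above amount to simple proportional arithmetic. A minimal sketch follows; the manufacturer names and figures are invented for illustration and are not drawn from any actual state program:

```python
# Hypothetical illustration of the two allocation methods discussed in the
# report; brand names and amounts below are invented, not from any state law.

def return_share(total_cost, pounds_collected_by_brand):
    """Allocate costs by the weight of each manufacturer's own brand
    collected for recycling (burdens long-standing manufacturers)."""
    total_lbs = sum(pounds_collected_by_brand.values())
    return {mfr: total_cost * lbs / total_lbs
            for mfr, lbs in pounds_collected_by_brand.items()}

def market_share(total_cost, units_sold_by_brand):
    """Allocate costs by each manufacturer's share of new-product sales
    (burdens newer manufacturers with a large market share)."""
    total_units = sum(units_sold_by_brand.values())
    return {mfr: total_cost * units / total_units
            for mfr, units in units_sold_by_brand.items()}

# A long-standing brand has many legacy devices in the waste stream but
# few current sales; a newer brand shows the opposite pattern.
collected = {"LegacyCo": 80_000, "NewCo": 20_000}  # pounds collected
sold = {"LegacyCo": 10_000, "NewCo": 90_000}       # new units sold

print(return_share(1_000_000, collected))  # LegacyCo bears 80% of costs
print(market_share(1_000_000, sold))       # NewCo bears 90% of costs
```

The same total cost thus lands very differently on each manufacturer depending on which basis is chosen, which is the inequity the report describes.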
Regarding used electronics, the division of responsibilities among the federal and state governments would have a direct bearing on EPA’s resource requirements. EPA has previously cautioned that assigning responsibilities to the agency—such as for registration of electronics manufacturers, retailers, and recyclers; collection of registration fees; approval of manufacturer recycling programs; and authorization of parallel state programs for electronics recycling—would be costly and time-consuming to implement. Similarly, a representative of the National Conference of State Legislatures said the organization would oppose any federal requirements that do not provide a source of funding to states for implementing the requirements, and a representative of the National Governors Association pointed out that states not currently having electronics recycling legislation would express concern about the administrative costs of implementing an electronics recycling program. Determination of the scope of covered electronic devices. Stakeholders have cited a variety of criteria for determining the scope of electronic devices covered by state recycling laws. For example, some stakeholders have cited the growing volume of used electronics in comparison with limited landfill capacity or the presence of toxic substances in many electronics. In contrast, other stakeholders have argued that cell phones and other mobile devices, which may contain toxic substances, should not be included with other used electronics (e.g., mobile devices can be easily collected through mail-back programs). As yet another alternative, stakeholders have cited the loss of valuable resources, such as precious metals, when used electronics are disposed in landfills, as well as the environmental benefits of extending the life of used electronics through refurbishment, as a key consideration in electronics recycling legislation. 
An issue closely related to the scope of covered electronic devices is the scope of entities whose used electronics are covered under programs for financing the cost of recycling. The state electronics recycling laws typically include used electronics from households, but some states also include other entities, such as small businesses and nonprofit organizations that may otherwise need to pay a fee to recycle used electronics in an environmentally sound manner. California’s law, by contrast, is not targeted to particular entities and covers any user of a covered electronic device located within the state. In doing our work, we found that a potential component of either approach that we discuss for managing used electronics is a greater federal regulatory role over exports to (1) facilitate coordination with other countries to reduce the possibility of unsafe recycling or dumping and (2) address the limitations on the authority of states to regulate exports. Assuming a continuation of the factors that contribute to exports, such as a limited domestic infrastructure to recycle used electronics, an increase in collection rates resulting from electronics recycling laws, either at the state or federal level, is likely to lead to a corresponding increase in exports, absent any federal restrictions. While, as we have previously noted, exports can be handled responsibly in countries with effective regulatory regimes and by companies with advanced technologies, some of the increase in exports may end up in countries that lack safe recycling and disposal capacity. Exports of used electronics are subject to a range of state requirements and guidelines in the five states we visited. Nevertheless, many of the state officials we interviewed expressed support for federal action to limit harmful exports because, for example, states lack adequate authority and resources to address concerns about exports. 
Washington state officials noted that their governor vetoed a provision of the state’s electronic waste legislation that addressed exports of electronics collected under the program because of concerns about the state’s lack of authority to prohibit such exports. The governor instead called for federal legislation prohibiting the export of hazardous waste to countries that are not prepared to manage the waste. In addition, under “preferred standards” established by the state, recyclers can be contractually obligated to ensure that countries legally accept any imports of materials of concern. Washington state officials told us that establishing preferred standards helped the state partially address concerns about used electronics exports, notwithstanding potential limitations on the state’s authority, but that further federal regulation of exports would still be helpful. In our August 2008 report, we made two recommendations to EPA to strengthen the federal role in reducing harmful exports. First, we recommended that EPA consider ways to broaden its regulations under existing Resource Conservation and Recovery Act authority to address the export of used electronic devices that might not be classified as hazardous waste by current U.S. regulations but might threaten human health and the environment when unsafely disassembled overseas. For example, we suggested that EPA consider expanding the scope of the CRT rule to cover other exported used electronics and revising the regulatory definition of hazardous waste. Citing the time and legal complexities involved in broadening its regulations under the Resource Conservation and Recovery Act, EPA disagreed with our recommendation and instead expressed the agency’s support for addressing concerns about exports of used electronics through nonregulatory, voluntary approaches. However, EPA officials told us that the agency is taking another look at its existing authorities to regulate exports of other used electronics. 
Second, we recommended that the agency submit to Congress a legislative package for ratification of the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, a multilateral environmental agreement that aims to protect against the adverse effects resulting from transboundary movements of hazardous waste. Under the convention’s definition, a broader range of materials could be considered potentially hazardous, including some electronic devices. While the Senate provided its advice and consent to ratification in 1992, successive administrations have not submitted draft legislation to Congress giving EPA the necessary statutory authorities to implement the convention’s requirements in order to complete the ratification process. EPA officials explained that these needed additional authorities include, among others, the authority to control the scope of wastes covered by the Basel Convention, the authority to halt exports of hazardous waste if the agency believes they will not be handled in an environmentally sound manner, and the authority to take back shipments that cannot be handled in an environmentally sound manner in the importing country. EPA officials told us that the agency had developed a legislative proposal on more than one occasion under previous administrations but did not finalize any proposal with other federal agencies. According to these officials, finalizing the proposal requires coordination with a number of agencies, including the Department of State and the White House Council on Environmental Quality, which coordinates federal environmental efforts in the development of environmental policies and initiatives. In May 2010, the current EPA Administrator called for legislative changes to address exports and for taking steps toward ratification of the Basel Convention. 
EPA officials have also cited a number of benefits of ratifying the Basel Convention, such as the ability to fully participate in convention decisions on issues related to the environmentally sound management of used electronics. For example, according to EPA officials, upcoming convention decisions on guidelines for environmentally sound refurbishment and repair will impact parties’ export of used electronics for reuse, which is regarded by refurbishers as environmentally preferable to recycling but also raises concerns about the dumping of used electronics in developing countries. Basel Convention working groups on environmentally sound management are open to a range of participants that do not represent parties to the convention, including EPA, electronics manufacturers, electronics recyclers and refurbishers, and environmental groups. However, given that the United States is a signatory but not a party to the convention, the United States does not participate in the final decisions on issues such as environmentally sound management. EPA officials said they anticipate a number of such decisions in the next few years, especially regarding the transboundary movement of used and end-of-life electronics. According to EPA officials, a greater federal regulatory role over exports resulting from ratification of the Basel Convention would require an increase in EPA’s programmatic and enforcement resources, such as additional staff. The additional resources would be needed to enable the Administrator to determine whether proposed exports will be conducted in an environmentally sound manner and to implement the Basel Convention’s notice-and-consent requirement. Moreover, the European Union’s experience under the waste electrical and electronic equipment directive, which contains an obligation for waste equipment to be treated in ways that avoid environmental harm, demonstrates the need to couple the regulation of exports with enforcement efforts. 
A European Commission report estimated that 50 percent of waste equipment that is collected is probably not being treated in line with the directive’s objectives and requirements, and that a large volume of waste may be illegally shipped to developing countries, where it is dumped or recycled in ways that are dangerous to human health and the environment. Broad agreement exists among key stakeholders that reusing and recycling electronics in an environmentally sound manner has substantial advantages over disposing of them in landfills or exporting them to developing countries in a manner that threatens human health and the environment. There has been much debate over the best way to promote environmentally sound reuse and recycling, however, and any national approach may entail particular advantages and disadvantages for stakeholders. While empirical information about the experiences of states and other stakeholders in their efforts to manage used electronics can inform this debate, the question of a national approach revolves around policy issues, such as how to balance the need to ensure that recycling occurs nationwide as well as industry’s interests in a uniform, national approach with states’ prerogatives to tailor used electronics management toward their individual needs and preferences. In the end, these larger policy issues are matters for negotiation among the concerned parties and for decision making by Congress and the administration. At the same time, there are a number of beneficial actions that the federal government is already taking that, as currently devised, do not require the effort and implications of new legislation, but rather would complement any of the broader strategies that policymakers might ultimately endorse. In particular, EPA’s collaborative efforts—including Plug-In To eCycling, the R2 practices, EPEAT, and the Federal Electronics Challenge—have demonstrated considerable potential and, in some cases, quantifiable benefits. 
However, these programs’ achievements have been limited or uncertain, and EPA has not systematically analyzed the programs to determine whether their impact could be augmented. Moreover, EPA has not developed an integrated strategy that articulates how the programs, taken together, can best assist stakeholders to achieve the environmentally responsible management of used electronics. A key issue of national significance to the management of used electronics is how to address exports—an issue that, according to many stakeholders, would most appropriately be addressed at the federal level. EPA has taken useful steps by developing a legislative package for ratification of the Basel Convention, as we recommended in 2008. However, EPA has not yet worked with other agencies, including the State Department and the Council on Environmental Quality, to finalize a proposal for the administration to provide to Congress for review and consideration. While there are unresolved issues regarding the environmentally sound management of used electronics under the Basel Convention, providing Congress with a legislative package for ratification could provide a basis for further deliberation and, perhaps, resolution of such issues. We recommend that the Administrator of EPA undertake an examination of the agency’s partnership programs for the management of used electronics. The analysis should examine how the impacts of such programs can be augmented, and should culminate in an integrated strategy that articulates how the programs, taken together, can best assist stakeholders in achieving the environmentally responsible management of used electronics nationwide. 
In addition, we recommend that the Administrator of EPA work with other federal agencies, including the State Department and the Council on Environmental Quality, to finalize a legislative proposal that would be needed for ratification of the Basel Convention, with the aim of submitting a package for congressional consideration. We provided a draft of this report to EPA for review and comment. A letter containing EPA’s comments is reproduced in appendix III. EPA agreed with both of our recommendations and also provided additional clarifications and editorial suggestions, which we have incorporated into the report as appropriate. Regarding our recommendation for an examination of the agency’s partnership programs culminating in an integrated strategy for the management of used electronics, EPA stated that the agency plans to gather and analyze input from a variety of stakeholders and to incorporate the input into such a strategy. In addition, while pointing out that the agency’s partnership programs already reflect an integrated approach, in that they address the full life cycle of electronic products, from design through end-of-life management, EPA acknowledged that the programs can and should be augmented and stated that the agency is committed to doing so within the limits of declining resources. In particular, EPA outlined a number of potential efforts to improve the environmental attributes of electronics, increase collection and the appropriate management of used electronics, and better control exports. EPA also stated that the agency is considering the need for new legislative and regulatory authority. We acknowledge EPA’s progress in developing partnership programs to address the full life cycle of electronic products but continue to emphasize the need for a comprehensive, written strategy that addresses how the programs can best promote the environmentally sound management of used electronics. 
Such a document has the potential to help coordinate the efforts of the many stakeholders associated with the management of used electronics to further promote their environmentally sound reuse and recycling, and to more effectively communicate the strategy to Congress and other decision makers. Regarding our recommendation that EPA work with other federal agencies to finalize a legislative proposal needed to ratify the Basel Convention, EPA commented that the agency has already begun working with the State Department and other federal agencies to do so. EPA added that its previous work in developing such a legislative proposal should enable it to successfully complete this effort. We acknowledge this work but point out that Congress will only have the opportunity to deliberate on a tangible proposal if the effort to achieve consensus on an administration-approved position on the matter is accorded the priority needed. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Administrator of EPA, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
To examine the Environmental Protection Agency’s (EPA) efforts to facilitate the environmentally sound management of used electronics, we reviewed solid and hazardous waste laws and regulations—including the Resource Conservation and Recovery Act and EPA’s rule on the management of cathode-ray tubes (CRT)—and their applicability to used electronics. We specifically reviewed EPA documents describing the agency’s efforts to enforce the CRT rule and to address concerns raised in our August 2008 report on electronic waste exports, including information on the number of EPA investigations of possible violations of the CRT rule. We also examined publicly available information on specific enforcement actions against companies, companies approved to export CRTs for recycling, and companies that have submitted notifications of exports for reuse, and we obtained aggregate information from EPA on its enforcement efforts. To obtain EPA’s views on its efforts, we interviewed officials from the agency’s Office of Enforcement and Compliance Assurance and the Office of Solid Waste and Emergency Response. To examine EPA’s promotion of partnership programs, we interviewed EPA officials responsible for implementing or representing the agency’s position on Plug-In To eCycling, the Responsible Recycling (R2) practices, and the Electronic Product Environmental Assessment Tool (EPEAT). In addition, we interviewed stakeholders concerned about the management of used electronics—including environmental groups; state and local government officials; and electronics manufacturers, retailers, and recyclers—to obtain their views on EPA’s efforts. To examine the views of manufacturers, retailers, recyclers, state and local governments, and other stakeholders on the state-by-state approach to the management of used electronics, we conducted a broad range of interviews. 
For each category of stakeholders, we conducted interviews with key national-level organizations or associations with a broad perspective on the management of used electronics across the United States and reviewed any related policy positions or reports. To gain further insights, we interviewed individual stakeholders in each category of stakeholders, including state and local government officials and other stakeholders, in five states with electronics recycling legislation that we selected for detailed review—California, Maine, Minnesota, Texas, and Washington. To supplement these detailed reviews, we interviewed state and local government officials in three states without legislation—Arizona, Florida, and New Hampshire. For each interview, we generally discussed the collection and recycling rates for used electronics, the convenience of collection opportunities to consumers, efforts to ensure environmentally sound management, and the impact of the state-by-state approach on implementation of state electronics recycling legislation and on stakeholders’ compliance or enforcement efforts. While recognizing that stakeholders may benefit from state legislation, such as through an increase in business opportunities for electronics recyclers, we specifically asked about the burden (if any) created by the state-by-state approach. For the five states with electronics recycling legislation, we reviewed the laws and related regulations, as well as other documents on the implementation and outcomes of the law, and we visited the states to conduct in-person interviews. We encountered a number of limitations in the availability of reliable data on the impact of the state-by-state approach on various stakeholders. For example, the five states we selected did not have data on collection and recycling rates prior to the effective dates of their laws, which would be useful to quantify the impact of their programs. 
Similarly, some manufacturers and other stakeholders regulated under state laws had concerns about providing proprietary information or did not identify compliance costs in a way that enabled us to determine the portion of costs that stems from having to comply with differing state requirements. Due to such limitations, we relied predominantly on stakeholders’ statements regarding how they have been impacted under the state-by-state approach. Additional information on the stakeholders we interviewed includes the following: State and local government officials. For a national perspective, we interviewed representatives of the Association of State and Territorial Solid Waste Management Officials, the Eastern Regional Conference of the Council of State Governments, the National Conference of State Legislatures, and the National Governors Association. For the five states with electronics recycling legislation we selected for detailed review, we interviewed state legislators or legislative staff involved in enacting the laws, state environmental agency officials responsible for implementing the laws, and local solid waste management officials. We selected the five states to ensure coverage of the two basic models of state electronics recycling legislation, a recycling fee paid by consumers and producer responsibility, as well as the variations of the producer responsibility model. In addition, we selected states with recycling programs that had been in place long enough for stakeholders to provide an assessment of the impacts of the legislation. For the three states without electronics recycling legislation we selected for detailed review, we conducted telephone interviews with state and local solid waste management officials and (in Arizona and New Hampshire) legislators who have introduced legislation or been active in studying options for the management of used electronics. 
We selected the three states to include ones that, in part, had addressed the management of certain used electronics through other means, such as a ban on landfill disposal or grants for voluntary recycling efforts, and to ensure variety in terms of location and size. Electronics manufacturers. For a broad perspective, we interviewed representatives of two national associations of electronics manufacturers: the Consumer Electronics Association and the Information Technology Industry Council. We also interviewed representatives of a judgmental sample of nine individual manufacturers. We selected manufacturers to interview to include a range of sizes and business models, including manufacturers of information technology equipment and televisions as well as companies that no longer manufacture products covered under state laws but still bear responsibility for recycling costs in some states. In addition to these interviews, we reviewed manufacturers’ policy positions and other documents on the state-by-state approach to managing used electronics or on particular state and local electronics recycling legislation. Electronics retailers. We interviewed representatives of the Consumer Electronics Retailers Coalition, an association of consumer electronics retailers, and of a judgmental sample of four national consumer electronics retailers, including retailers that are also considered manufacturers or collectors under some state electronics recycling legislation. In each of the five states we selected for detailed review, we spoke with representatives from state retail associations, whose members include large national retailers, as well as smaller retailers operating in the five states. We also reviewed available documents pertaining to retailers’ efforts in managing used electronics and their policy positions on the state-by-state approach. Recyclers and refurbishers of used electronics. 
For a broad perspective from the electronics recycling industry, we interviewed a representative of the Institute of Scrap Recycling Industries, many of whose members are involved in the recycling of used electronics. In addition, for the perspective of refurbishers, we conducted an interview with TechSoup, a nonprofit organization that has established a partnership with Microsoft to increase the number of personal computers available to nonprofits, schools, and low-income families across the globe by reducing the cost of software to refurbishers. We also interviewed representatives of a judgmental sample of recyclers and refurbishers encompassing a variety of sizes and business models, including large corporations operating in multiple states as well as nonprofit organizations or smaller entities operating in a single state. In particular, in each of the five states with electronics recycling legislation we selected for detailed review, we interviewed at least one recycler operating under the state program and one refurbisher. Environmental and other nonprofit organizations. We interviewed representatives of environmental and other nonprofit organizations that have an interest in the issue of the management of used electronics, including the Basel Action Network, Consumers Union, Electronics TakeBack Coalition, Product Stewardship Institute, and Silicon Valley Toxics Coalition. In addition, in the five states with electronics recycling legislation we selected for detailed review, we interviewed representatives of state environmental organizations that advocated for the state legislation or have been active in tracking the implementation of the laws. For each of the environmental and nonprofit organizations interviewed, we reviewed available documents pertaining to their advocacy work and their views on the state-by-state approach or particular state electronics recycling legislation. 
To examine the implications of alternative national strategies to further promote the environmentally sound management of used electronics, we reviewed relevant existing laws relating to solid and hazardous waste management (the Resource Conservation and Recovery Act and the Mercury-Containing and Rechargeable Battery Management Act). In addition, we examined state laws establishing electronics recycling programs or addressing the management of used electronics through other means, such as a ban on landfill disposal, to identify components of the laws that might be addressed under a national approach. We also examined the European Union’s directive on waste electrical and electronic equipment and electronics recycling in Canada as examples of how used electronics are managed internationally. As part of our interviews with national-level organizations or associations of stakeholders, as well as with individual stakeholders, we discussed stakeholder efforts to coordinate state electronics recycling programs and stakeholders’ policy positions on a national strategy, including their views on the components of a national strategy, such as a mechanism for financing the cost of recycling. Regarding alternative strategies specifically relating to exports of used electronics, we examined ways that state electronics recycling programs we selected for detailed review had addressed the issue, and we interviewed stakeholders regarding current state and EPA efforts to limit potentially harmful exports. We also reviewed EPA documents and interviewed EPA officials regarding the statutory changes necessary for the United States to ratify the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, as well as the implications of ratification on the agency’s ability to exercise greater oversight over the export of used electronics for reuse or recycling. 
Finally, we reviewed EPA’s technical assistance comments on a congressional concept paper proposing a framework for establishing a national electronics recycling program. We conducted this performance audit from May 2009 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The five states with electronics recycling laws that we selected for detailed review—California, Maine, Minnesota, Texas, and Washington—illustrate a range of ways of addressing elements and issues common to the management of used electronics. For each of the states, we describe three key elements we identified as establishing the framework for their recycling programs: (1) the mechanism for financing the cost of collecting and recycling used electronics, (2) the mechanism for providing for the convenient collection of used electronics, and (3) requirements for the environmentally sound management of used electronics collected under the programs and the state’s enforcement of the requirements. In addition, because the state electronics recycling programs are relatively new, we describe developments and program changes designed to address issues encountered during the initial implementation of the programs.

California’s electronics recycling law established a funding mechanism to provide for the collection and recycling of certain video display devices that have a screen greater than 4 inches, measured diagonally, and that are identified by the state Department of Toxic Substances Control as a hazardous waste when discarded.
According to state officials, the state’s list of covered devices currently includes computer monitors, laptop computers, portable DVD players, and most televisions. California is the only state identified as having an electronics recycling law that established a system to finance collection and recycling costs through a recycling fee paid by consumers. Effective on January 1, 2005, retailers began collecting the fee at the time of purchase of certain video display devices. The fee currently ranges from $8 to $25, depending on screen size. Retailers remit the fees to the state, and they may retain 3 percent as reimbursement for costs associated with collection of the fee. The state, in turn, uses the fees to reimburse collectors and recyclers of covered electronic devices as well as for administering and educating the public about the program. Entities must be approved by the state to be eligible to receive collection and recycling payments. There were about 600 approved collectors and 60 approved recyclers as of October 2009. To determine the amount paid per pound, the state periodically updates information concerning the net costs of collection and recycling and adjusts the statewide payment rates. To assist the state in this effort, approved collectors and recyclers are required to submit annual reports on their net collection and recycling costs for the prior year. As of May 2010, the combined statewide rate for collection and recycling was $0.39 per pound. The administration of the program is shared by three state agencies. The State Board of Equalization is responsible for collecting the fee from and auditing retailers. The Department of Resources Recycling and Recovery (CalRecycle) has overall responsibility for administering collection and recycling payments. 
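The fee and payment figures above lend themselves to a short worked sketch. The 3 percent retailer allowance and the $0.39-per-pound combined rate come from the program as described; the function names and example amounts below are purely illustrative:

```python
# Illustrative sketch of California's fee-and-payment arithmetic.
# The 3% retailer allowance and the $0.39/lb combined collection-and-
# recycling rate (as of May 2010) are from the program description;
# function names and example amounts are hypothetical.

RETAILER_ALLOWANCE = 0.03     # share of collected fees retailers may retain
COMBINED_RATE_PER_LB = 0.39   # statewide collection + recycling rate, May 2010

def fees_remitted_to_state(fees_collected: float) -> float:
    """Fees a retailer remits after keeping its 3% allowance."""
    return fees_collected * (1 - RETAILER_ALLOWANCE)

def collection_and_recycling_payment(pounds: float) -> float:
    """State payment to an approved collector/recycler for a claim."""
    return pounds * COMBINED_RATE_PER_LB

# e.g., $10,000 in fees collected and a 5,000-pound recycling claim
print(round(fees_remitted_to_state(10_000), 2))           # 9700.0
print(round(collection_and_recycling_payment(5_000), 2))  # 1950.0
```

The per-pound rate is periodically recalibrated from the net cost reports that approved collectors and recyclers must file, which is why the constant above is dated.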
Specific duties of CalRecycle include establishing the collection and recycling payment schedule to cover the net costs of authorized collectors and recyclers; approving applications to become an approved collector or recycler; reviewing recycling payment claims for the appropriate collection, transfer, and processing documentation and making payments; and addressing any identified fraud in payment claims. Under the law, CalRecycle is also responsible for reviewing the fee paid by consumers at least once every 2 years and adjusting the fee to ensure sufficient revenues to fund the recycling program. The third agency, the Department of Toxic Substances Control, is responsible for determining whether a video display device, when discarded or disposed of, is presumed to be a hazardous waste under the state health and safety code and, therefore, is a covered electronic device under the electronics recycling legislation. In addition, the department regulates the management of used electronics and conducts annual inspections of recyclers to ensure compliance with applicable laws and regulations. One of the purposes of the California law was to establish a program that is “cost free and convenient” for consumers to return and recycle used electronics generated in the state. To this end, the law directs the state to establish a payment schedule that covers the net cost for authorized collectors to operate a free and convenient system for collection, consolidation, and transportation. State and local government officials, as well as other state stakeholders we interviewed, told us the law has resulted in convenient collection opportunities. For example, a representative of the state’s Regional Council of Rural Counties said that, while it does not require counties to provide collection opportunities, the law had resulted in convenient collection in rural counties. 
Similarly, according to Sacramento County solid waste management officials, the law has made it profitable for the private sector to collect and recycle used electronics and thereby has freed up county resources to pay for media campaigns to inform the public about the law and to offer curbside collection. Recyclers approved under the state’s payment system for the recycling of covered electronic devices must be inspected at least once annually by the Department of Toxic Substances Control and be found in conformance with the department’s regulations to maintain their approval. The department’s regulations restrict certain recycling activities—such as using water, chemicals, or external heat to disassemble electronic devices—and specify requirements in a variety of other areas, including training of personnel, record-keeping, and the labeling of containers. In addition, to be eligible for a claim within the payment system, covered devices must be dismantled in California and the residuals generally must be sent to appropriate recycling facilities. Hence, the program does not pay claims for any covered devices that are exported intact. The state’s electronics recycling legislation also requires that exporters notify the department and demonstrate that the covered electronic waste or covered electronic devices are being exported for the purposes of recycling or disposal; that the importation of the waste or device is not prohibited by an applicable law in the country of destination; and that the waste or device will be managed only at facilities whose operations meet certain standards for environmentally sound management. (These demonstrations are not required for exports of a component part of a covered electronic device that is exported to an authorized collector or recycler and that is reused or recycled into a new electronic component.) 
According to a department official responsible for implementing the regulations, the state’s ability to withhold payment for the recycling of covered electronic devices is an effective tool for promoting compliance with the regulations. However, the official also said that the state lacks the authority to regulate exports (e.g., exports of CRT glass containing lead for processing in Mexico, which, according to the official, does not have regulations equivalent to those in California). Key developments since the initiation of California’s program in 2005 include the following adjustments to the recycling fee paid by consumers and to the payment schedule for collection and recycling:

Effective January 2009, CalRecycle increased the recycling fee from an initial range of $6 to $10 to the current range of $8 to $25. As described in CalRecycle’s January 2008 update on the program, continued growth in the volume of recycling payment claims resulted in the pace of payments exceeding the flow of revenue generated by the fee. CalRecycle adjusted the fee to avoid exhausting the fund used to pay for the collection and recycling of used electronics.

In 2008, CalRecycle decreased the payment schedule for combined collection and recycling. The initial rate was $0.48 per pound, based in part on a provisional rate established by the law, and the current rate is $0.39 per pound. According to CalRecycle officials, the initial payment schedule was artificially high, which benefited the program by fostering a recycling infrastructure in the state. CalRecycle adjusted the payment schedule on the basis of an analysis of the net cost reports submitted by collectors and recyclers.

Maine’s electronics recycling program began in 2006 and finances the cost of recycling televisions, computers, computer monitors, digital picture frames, printers, and video game consoles from households.
Maine’s law is based on the concept of “shared responsibility,” whereby participating municipalities generally bear the costs associated with collection and manufacturers finance handling and recycling costs associated with managing certain used electronics generated by households. Participating municipalities arrange for these used electronics to be transported to state-approved consolidators, which count and weigh information technology products by brand and manufacturer and determine the total weight of televisions and video game consoles. Consolidators who are also recyclers may then further process the used electronics; otherwise, they send the material to recycling facilities. In either case, consolidators generally invoice individual manufacturers for their handling, transportation, and recycling costs. The state approves each consolidator’s fee schedule, currently set at a maximum of $0.48 per pound for combined recovery and recycling, for use when invoicing manufacturers. For information technology products, the amount invoiced is based on the weight of the manufacturer’s own brand of electronics collected under the program (return share) plus a proportional share of products for which the manufacturer cannot be identified or is no longer in business (orphan share). In contrast, for manufacturers of televisions and video game consoles with a national market share that exceeds a certain minimum threshold, the amount invoiced is calculated as the total weight collected multiplied by the proportion of the manufacturer’s national market share of sales for those products (recycling share). Initially, Maine’s law only used return share as a basis for determining the financial responsibility of all manufacturers. The state amended the law in 2009 to base the financial responsibility of television manufacturers (as well as video game consoles) on market share. 
The Maine Department of Environmental Protection had recommended this change in part to address the issue of the relatively long lifespan of televisions and the concern among long-standing television manufacturers that, under the return share system, new market entrants do not bear recycling costs and can therefore offer their products at a lower price and possibly even go out of business before their products enter the waste stream. The Department of Environmental Protection has overall responsibility for the electronics recycling program. The department’s responsibilities include approving consolidators as well as the fee schedule used by consolidators in charging manufacturers, determining the orphan share for manufacturers of information technology products, and determining the recycling share for manufacturers of televisions and video game consoles on the basis of national sales data. In addition, the department is responsible for enforcing the compliance of manufacturers whose products are sold in the state. Finally, the department notifies retailers of noncompliant manufacturers (retailers are prohibited from selling products of such manufacturers). One of the purposes of Maine’s law is to establish a recycling system that is convenient and minimizes the cost to consumers of electronic products and components. In addition, manufacturers are responsible for paying the reasonable operational costs of consolidators, including the costs associated with ensuring that consolidation facilities are geographically located to conveniently serve all areas of the state as determined by the Department of Environmental Protection. To establish convenient collection opportunities for households, Maine’s program relies on the state’s existing municipal waste collection infrastructure and provides an incentive to municipalities to participate by giving them access to essentially free recycling of certain covered electronics. 
The law allows participating municipalities to collect used electronics at a local or regional waste transfer station or recycling facility or through other means, such as curbside pickup. According to a 2007 survey supported by the department, most municipalities provide permanent collection sites. About half of the municipalities that responded to the survey reported that they charge end-of-life fees for accepting used electronics from households to offset the costs associated with collection. However, local solid waste management officials we interviewed also told us that the program implemented under the law enabled municipalities to reduce or eliminate fees. For example, the Portland solid waste manager said that the program enabled the city to stop charging residents a fee, which was approximately $20 per television or computer monitor prior to the law. Notably, Maine law now prohibits the disposal of CRTs in landfills and other solid waste disposal facilities. Maine’s law requires that recyclers provide to consolidators a sworn certification that they meet guidelines for environmentally sound management published by the Department of Environmental Protection. Among other things, the guidelines stipulate that recyclers comply with federal, state, and local laws and regulations relevant to the handling, processing, refurbishment, and recycling of used electronics; implement training and other measures to safeguard occupational and environmental health and safety; and comply with federal and international law and agreements regarding the export of used products or materials. Other guidelines specific to exports include a requirement that televisions and computer monitors destined for reuse include only whole products that have been tested and certified as being in working order or as requiring only minor repair, and where the recipient has verified a market for the sale or donation of the equipment. 
The Department of Environmental Protection official in charge of the program told us she has visited the facilities that recycle used electronics collected under Maine’s program, but that the department lacks the resources and auditing expertise to ensure adherence to the guidelines as well as the authority to audit out-of-state recyclers. Since Maine initiated its electronics recycling program, the state has made a number of changes to the law, and the Department of Environmental Protection has suggested additional changes. Such changes include the following:

Scope of covered electronic devices. In 2009, Maine added several products, including digital picture frames and printers, to the scope of covered devices. In its 2008 report on the recycling program, the Department of Environmental Protection had recommended adding digital picture frames and printers for a number of reasons, including the growing volume of such equipment in the waste stream. In its 2010 report, the department also recommended the program be expanded to include used electronics generated by small businesses, thereby increasing the volume of used electronics collected, providing for more efficient transportation from collection sites, and providing for a greater volume to recyclers as a means to drive down the per-pound cost of recycling.

Program administration. Beginning in July 2010, manufacturers of covered devices sold in the state are required to pay an annual registration fee of $3,000 to offset the state’s administrative costs associated with the program. In its January 2010 report, the Department of Environmental Protection recommended that the state legislature consider eliminating or reducing the fee for certain manufacturers, such as small television manufacturers. According to the report, an exemption from paying the fee would provide relief to manufacturers that no longer sell or have not sold significant quantities of covered devices in the state.

Recycling costs.
In its January 2010 report, the Department of Environmental Protection noted that, while direct comparisons between differing state programs are difficult, recycling costs are higher in Maine than in other states with electronics recycling laws. Representatives of both the Consumer Electronics Association and the Information Technology Industry Council also told us that recycling costs in Maine are higher because the state selects consolidators and approves the fee schedule used by each of the consolidators to invoice manufacturers, thereby limiting competition. To address such concerns, the department stated its intent to take a number of administrative actions. For example, the department plans to streamline the permitting process for facilities that process used electronics and thereby encourage the growth of recycling facilities in the state and reduce the handling and shipping costs for used electronics, much of which is currently processed out of state. The department also plans to examine ways to increase the competitiveness of the cost approval process for consolidators or price limits that can be imposed without compromising the level of service currently afforded to municipalities.

Minnesota initiated its program in 2007 to finance the recycling of certain used electronics from households. Manufacturers of video display devices (televisions, computer monitors, and laptop computers) with a screen size that is greater than 9 inches, measured diagonally, that are sold in the state are responsible for recycling, including costs, and can also meet their obligations by financing the recycling of printers, keyboards, DVD players, and certain other electronics. Minnesota’s law establishes recycling targets for manufacturers selling video display devices in the state. The targets are set at an amount of used electronics equal to 80 percent of the weight of video display devices sold to households during the year.
(The target was 60 percent for the first program year.) Manufacturers that exceed their targets earn recycling credits that can be used to meet their targets in subsequent years or sold to other manufacturers. Conversely, manufacturers that fail to meet their targets pay recycling fees on the basis of how close they are toward meeting their obligation. State officials told us the recycling program is based primarily on market economics and does not require significant government involvement. In particular, the state does not set the prices paid for recycling, and manufacturers have flexibility in selecting collectors and recyclers to work with. Recyclers seek to be reimbursed for their costs by marketing and selling recycling pounds to manufacturers. According to several stakeholders we interviewed about the state’s program, this market-based approach has contributed to lowering recycling costs in the state. The Minnesota Pollution Control Agency has primary responsibility for administering the program. The agency’s responsibilities include reviewing registrations submitted by manufacturers for completeness; maintaining registrations submitted by collectors and recyclers; and conducting educational outreach efforts regarding the program. The state department of revenue reviews manufacturers’ annual registration fees and reports and, among other things, collects data needed to support manufacturers’ fee determinations. The state uses registration fees to cover the cost of implementing the program, which may include awarding grants to entities that provide collection and recycling services. The Minnesota Pollution Control Agency has requested proposals to provide grants for collection and recycling outside of the Minneapolis-St. Paul metropolitan area and expects to award several grants in 2010. 
Minnesota’s law does not stipulate criteria for the establishment of a statewide collection infrastructure or mandate that any entity serve as a collector, but rather relies on the reimbursement from manufacturers to create an incentive for the establishment of collection opportunities. To foster the availability of collection opportunities outside of the Minneapolis-St. Paul metropolitan area, the law allows 1½ times the weight of covered electronic devices collected outside of the metropolitan area to count toward manufacturers’ recycling targets. Local solid waste management officials we interviewed described the impact of the state’s electronics recycling legislation on the convenience of collection opportunities as dependent upon whether a county already had an established recycling program for used electronics, with a greater impact in counties that did not already have recycling programs. Minnesota’s law prohibits the commercial use of prison labor to recycle video display devices and requires that recyclers abide by relevant federal, state, and local regulations and carry liability insurance for environmental releases, accidents, and other emergencies. The law does not establish additional requirements for environmentally sound management. In addition, Minnesota Pollution Control Agency officials said that they have limited resources to ensure that used electronics are managed responsibly, particularly when equipment is shipped out of state, and that enforcement efforts are largely based on self-policing by recyclers and spot checks of larger recyclers. Two recyclers in the state with whom we spoke said that a lack of oversight of recyclers by state authorities had contributed to undercutting by irresponsible recyclers. Minnesota Pollution Control Agency officials said they are seeking to promote certification programs, such as R2 or e-Stewards®, for electronics recyclers operating in the state. 
Minnesota amended its law in 2009 to make the following changes:

The state amended the law to remove the requirement that retailers annually report to each video display device manufacturer the number of the manufacturer’s brand of video display devices sold to households during the previous year. Manufacturers submitted this information to the state, which used it to determine manufacturers’ recycling targets. A representative of the Minnesota Retailers Association said that retailers found this requirement to be a burden. Similarly, according to the Consumer Electronics Retailers Coalition, the state’s reporting requirement imposed a high cost on retailers and increased the risk of the disclosure of proprietary sales data. Minnesota now uses either manufacturer-provided data or national sales data, prorated to the state’s population, to determine manufacturers’ obligations.

The state further amended the law to limit the use of recycling credits. Minnesota Pollution Control Agency officials told us this amendment was intended to address a “boom and bust” scenario, whereby manufacturers financed the recycling of large amounts of used electronics in the first program year and accumulated carry-over credits, which they used to meet their recycling targets during the second year. The use of credits left local governments and electronics recyclers responsible for the cost of collecting and recycling used electronics that exceeded manufacturers’ recycling targets. As a result, according to local solid waste management officials we interviewed, some counties reintroduced end-of-life fees and saw an increase in the illegal dumping of used electronics. To address such issues and ensure that a majority of targets are met by the recycling of newly collected material, the amended law limits the portion of a manufacturer’s target that can be met through carry-over credits to 25 percent. Prior to the amendment, the law did not limit the use of recycling credits.
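The target-and-credit mechanics described above come down to a few lines of arithmetic. A minimal sketch, using the 80 percent target, the 1½× weighting for out-of-metro collection, and the 25 percent carry-over cap from the program description; function names and example amounts are hypothetical:

```python
# Sketch of Minnesota's target-and-credit arithmetic as described above.
# The 80% target, the 1.5x weighting for pounds collected outside the
# Minneapolis-St. Paul metro area, and the post-2009 25% cap on
# carry-over credits are from the program description; the names and
# example amounts are hypothetical.

def recycling_target(vdd_lbs_sold: float) -> float:
    """Annual target: 80% of the weight of video display devices
    sold to households during the year."""
    return 0.80 * vdd_lbs_sold

def creditable_lbs(metro_lbs: float, non_metro_lbs: float) -> float:
    """Pounds collected outside the metro area count 1.5x."""
    return metro_lbs + 1.5 * non_metro_lbs

def shortfall_after_credits(target: float, new_lbs: float,
                            carryover: float):
    """Apply carry-over credits, capped at 25% of the target; returns
    (remaining shortfall subject to recycling fees, credits used)."""
    used = min(carryover, 0.25 * target)
    return max(0.0, target - new_lbs - used), used
```

For example, a manufacturer with an 80,000-pound target, 50,000 new pounds recycled, and 30,000 pounds of banked credits could apply only 20,000 pounds of credits, leaving a 10,000-pound shortfall on which recycling fees would be owed.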
Since the implementation of Minnesota’s program, several other states, including Illinois and Wisconsin, have incorporated the use of recycling targets into electronics recycling legislation. Several stakeholders told us they prefer targets as they are designed in the Illinois program. For example, a representative of one electronics manufacturer said he expects that manufacturers will have difficulty in meeting their targets in Minnesota in upcoming years after recyclers have worked through the backlog of used electronics stored in consumers’ homes prior to implementation of the state’s law. In contrast, under the Illinois program, manufacturers’ targets are based in part on the total amount recycled or reused during the prior year, such that the targets may be adjusted downward if the amounts collected decrease. Similarly, several refurbishers of used electronics pointed out that Minnesota’s law does not allow the refurbishment of covered electronic devices to count toward manufacturers’ recycling targets and thereby, according to some stakeholders, may create an incentive to recycle equipment that has been collected but is in working condition or can be refurbished. In contrast, under Illinois’ law, the weight of covered electronic devices processed for reuse is doubled when determining whether a manufacturer has met its recycling and reuse target, and the weight is tripled if the refurbished equipment is donated to a public school or nonprofit entity.

Texas’ computer equipment recycling program began in 2008 and requires manufacturers to provide opportunities for free collection of desktop and laptop computers, monitors not containing a tuner, and accompanying mice and keyboards from consumers in the state. Consumers are defined as individuals who use computer equipment purchased primarily for personal or home-business use.
Texas’ computer equipment recycling law is based on the concept of “individual producer responsibility,” whereby manufacturers of computer equipment are responsible for implementing a recovery plan for collecting their own brand of used equipment from consumers. The state’s program requires that each manufacturer submit its plan to the state and annually report the weight of computer equipment collected, recycled, and reused. The law does not authorize manufacturer registration fees, and manufacturers are free to select the recyclers with whom they work and negotiate recycling rates to be paid. The Texas Commission on Environmental Quality has the primary responsibility for enforcing the law. The commission’s responsibilities include providing information on the Internet about manufacturers’ recovery plans; educating consumers regarding the collection, recycling, and reuse of computer equipment; helping to ensure that electronics retailers do not sell the equipment of manufacturers without recovery plans; and annually compiling information submitted by manufacturers and issuing a report to the state legislature. According to commission officials, manufacturers not paying registration fees has not caused a financial burden because the commission already had the expertise and outreach capabilities needed to implement the law. The Texas law requires that the collection of computer equipment be reasonably convenient and available to consumers in the state. In addition, manufacturers’ recovery plans must enable consumers to recycle computer equipment without paying a separate fee at the time of recycling. The law allows manufacturers to fulfill these requirements by offering a system for returning computer equipment by mail, establishing a physical collection site, or organizing a collection event or by offering some combination of these or other options. 
According to Texas Commission on Environmental Quality officials, most manufacturers have opted to offer a mail-back program, and one manufacturer noted that the mail-back programs may be more convenient for rural residents of the state than a physical collection point. Some manufacturers have provided additional collection options. For example, in addition to providing a mail-back option, Dell has partnered with affiliates of Goodwill Industries in the state to establish a physical collection infrastructure. The local solid waste management officials we interviewed regarding the state’s computer equipment recycling law were critical of the impact of the law on providing collection opportunities and relieving local governments of the burden of managing used electronics. These officials attributed the law’s lack of impact to a number of factors, including the inconvenience to consumers of manufacturers’ mail-back programs; insufficient education of consumers about recycling opportunities by manufacturers, the Texas Commission on Environmental Quality, or local governments; and manufacturers having responsibility only for the cost of recycling computer equipment collected directly from consumers, not for that collected by local governments (e.g., when consumers may be unaware of the opportunities for free recycling). As a result, while they are not required to collect used computer equipment, local governments bear the costs for the equipment they collect. For example, the solid waste coordinator for one regional council of governments said that the council continues to provide grants to local governments for the management of used electronics. The Texas electronics recycling law requires that computer equipment collected under the law be recycled or reused in a manner that complies with federal, state, and local law. 
In addition, the law directed the Texas Commission on Environmental Quality to adopt standards for the management of used electronics developed by the Institute of Scrap Recycling Industries, which represents electronics recyclers, or to adopt such standards from a comparable organization. Among other things, the standards adopted by the commission require that recyclers prioritize refurbishment over recycling and recycling over disposal, ensure that computer equipment is stored and processed in a manner that minimizes the potential release of any hazardous substance into the environment, and have a written plan for responding to and reporting pollutant releases. Manufacturers are required to certify that recyclers have followed the standards in recycling the manufacturers’ computer equipment. Texas Commission on Environmental Quality officials said that, under the commission’s risk-based approach to enforcement of environmental regulations, they had not prioritized regular, scheduled enforcement of the requirements for the environmentally sound management of used computer equipment collected under the state’s program. They said that they would follow up on any allegations of noncompliance with the requirements, but that they had not received any such complaints. Several recyclers in the state confirmed that there had been minimal oversight of recyclers by the commission and said that manufacturers play a more active role than the commission in ensuring that the recyclers with whom they contract adhere to requirements for environmentally sound management. In 2009, the Texas state legislature passed a bill that would have required that television manufacturers collect and recycle an amount of televisions on the basis of manufacturers’ market share of equipment sold in the state.
However, the bill was vetoed by the governor, who stated that it was significantly different from the law covering computer equipment—for example, in that the bill would impose fees on television manufacturers and recyclers. Local solid waste management officials we interviewed, as well as a state environmental group that focuses on used electronics, were critical of the governor’s veto. For example, according to the environmental group, the bill would have relieved local governments of the costs associated with managing used televisions, and without a law establishing a recycling program, televisions will continue to be disposed of in landfills, which is not prohibited in Texas. Washington’s electronics recycling law was passed in 2006, and the program began full operation in 2009. The program covers the costs associated with collecting, transporting, and processing desktop and laptop computers, computer monitors, and televisions generated by households, charities, school districts, small businesses with fewer than 50 employees, and small governments (cities with a population of fewer than 50,000, counties with a population of fewer than 125,000, and special purpose districts). Under Washington’s law, manufacturers are required to finance the collection, transportation, and recycling of certain used electronics. The law allows manufacturers to meet this requirement by implementing an independent, state-approved collection and recycling plan or by participating in the default “standard plan.” In addition, the law requires that individual manufacturers register with the Department of Ecology, the state agency responsible for administering the law, and pay a fee to cover the department’s administrative costs. The fees are based on a sliding scale linked to a manufacturer’s annual sales of covered electronic products in the state.
The specific responsibilities of the department include reviewing the standard plan as well as any independent plans submitted by manufacturers for the department’s approval; establishing an annual process for local governments and local communities to report their satisfaction with the services provided by the plans; registering manufacturers, collectors, transporters, and processors for the program; and enforcing the law (e.g., by issuing warnings and penalties against manufacturers selling covered products in the state if they are not participating in an approved plan). The standard plan is implemented by the Washington Materials Management and Financing Authority, a public body created by the state’s law. All manufacturers are required to be members of the authority and the standard plan, or they can opt out of the standard plan by gaining the state’s approval for their own independent plan. Currently, all manufacturers affected by the state’s law meet their requirements through participation in the standard plan. The Washington Materials Management and Financing Authority assesses individual manufacturers for collection and recycling costs, as well as the authority’s administrative costs, on the basis of a combination of market share and return share, with the return share being based on an annual sampling of used electronics collected under the state’s program. The authority uses the assessments paid by manufacturers to reimburse individual collectors, transporters, and recyclers at rates negotiated with the authority. According to the director of the authority, the combined rate for the collection, transportation, and recycling of used electronics, as well as administrative costs, was $0.24 per pound in 2009. 
A number of stakeholders noted that the authority has the ability to negotiate relatively low prices, in comparison with some other state electronics recycling programs, due to the authority’s purchasing power over electronics recycling services in the state. Washington’s electronics recycling law includes a number of specific requirements for the establishment of a convenient collection network throughout the state, in both urban and rural areas. In particular, the law requires that each plan provide collection service in every county and every city or town with a population greater than 10,000. Collection sites may include electronics recyclers and repair shops, recyclers of other commodities, reuse organizations, charities, retailers, government recycling sites, or other locations. Plans may limit the number of used electronics accepted per customer per day or per delivery at a collection site or service but are also required to provide free processing of large quantities of used electronics generated by small businesses, small governments, charities, and school districts. Local solid waste management officials told us the law has had a positive impact on promoting the collection of used electronics in the state. One of these officials also said that the law’s implementation has eliminated the cost burden on local government for managing used electronics. In contrast, representatives of several manufacturers, as well as the Consumer Electronics Association, told us that the law’s requirements for convenience are too prescriptive and have served as an impediment for manufacturers to obtain approval for their independent plans. Along these lines, in 2009, the Department of Ecology rejected two independent plans submitted by manufacturers because the department concluded that the plans did not meet the law’s convenience criteria. 
Department officials told us they expect the plans to be resubmitted and approved once the manufacturers submitting the plans demonstrate that they can meet the convenience criteria. The Department of Ecology established both minimum standards and voluntary “preferred” standards for the environmentally sound management of used electronics. Among other things, the minimum standards require that recyclers implement an environmental, health, and safety management system; remove any parts that contain materials of concern, such as devices containing mercury, prior to mechanical or thermal processing and handle them in a manner consistent with the regulatory requirements that apply to the items; and not use prison labor for the recycling of used electronics. The department encourages recyclers to conform to the preferred standards and identifies recyclers that do so on its Web site. In addition, the Washington Materials Management and Financing Authority made the preferred standards a requirement for all recyclers with whom the authority contracts under the standard plan. Among other things, the preferred standards stipulate that recyclers use only downstream vendors that adhere to both the minimum and voluntary standards with respect to materials of concern; ensure that recipient countries legally accept exports of materials of concern; and, as with the minimum standards, undergo an annual audit of the recycler’s conformance with the standards. Department of Ecology officials said that the authority’s requirement that recyclers achieve preferred status had enabled the authority to achieve more than what the state could legally require, particularly regarding exports. Washington amended its law in 2009 to authorize collectors in receipt of fully functioning computers to sell or donate them as whole products for reuse.
The amendment requires that collectors not include computers gleaned for reuse when seeking compensation under a standard or independent plan. In addition, when taking parts from computers submitted for compensation (i.e., for recycling) to repair other computers for reuse, collectors must make a part-for-part exchange with the nonfunctioning computers submitted for compensation. According to Department of Ecology officials, the provisions pertaining to reuse in both the department’s original regulations and the amendment are intended to prevent collectors from stripping valuable components from used electronics for export to markets with poor environmental standards, and sending only the scrap with no value to the recyclers used by a standard or independent plan. Similarly, a Washington refurbisher told us that the requirement for a part-for-part exchange when repairing equipment is intended to address the concern that collectors might export valuable components pulled out of equipment and receive a higher rate of compensation than by submitting the equipment to a recycler. According to the refurbisher, the amendment has improved the impact of Washington’s law on the ability to refurbish and reuse equipment but has also resulted in unnecessary work to reinstall components into equipment sent for recycling. In addition to the contact named above, Steve Elstein, Assistant Director; Elizabeth Beardsley; Mark Braza; Joseph Cook; Edward Leslie; Nelson Olhero; Alison O’Neill; and Tim Persons, Chief Scientist, made key contributions to this report.

Low recycling rates for used televisions, computers, and other electronics result in the loss of valuable resources, and electronic waste exports risk harming human health and the environment in countries that lack safe recycling and disposal capacity.
The Environmental Protection Agency (EPA) regulates the management of used electronics that qualify as hazardous waste and promotes voluntary efforts among electronics manufacturers, recyclers, and other stakeholders. However, in the absence of a comprehensive national approach, a growing number of states have enacted electronics recycling laws, raising concerns about a patchwork of state requirements. In this context, GAO examined (1) EPA's efforts to facilitate environmentally sound used electronics management, (2) the views of various stakeholders on the state-by-state approach, and (3) considerations to further promote environmentally sound management. GAO reviewed EPA documents, interviewed EPA officials, and interviewed stakeholders in five states with electronics recycling legislation. EPA's efforts to facilitate the environmentally sound management of used electronics consist largely of (1) enforcing its rule for the recycling and exporting of cathode-ray tubes (CRT), which contain significant quantities of lead, and (2) an array of partnership programs that encourage voluntary efforts among manufacturers and other stakeholders. EPA has improved enforcement of export provisions of its CRT rule, but issues related to exports remain. In particular, EPA does not specifically regulate the export of many other electronic devices, such as cell phones, which typically are not within the regulatory definition of hazardous waste despite containing some toxic substances. In addition, the impact of EPA's partnership programs is limited or uncertain, and EPA has not systematically analyzed the programs to determine how their impact could be augmented. The views of stakeholders on the state-by-state approach to managing used electronics have been shaped by the increasing number of states with electronics recycling legislation. 
To varying degrees, the entities typically regulated under the state laws--electronics manufacturers, retailers, and recyclers--consider the increasing number of state laws to be a compliance burden. In contrast, in the five states GAO visited, state and local solid waste management officials expressed overall support for states taking a lead role in the absence of a national approach. The officials attributed their varying levels of satisfaction more to the design and implementation of individual state recycling programs, rather than to the state-by-state approach. Options to further promote the environmentally sound management of used electronics involve a number of policy considerations and encompass many variations, which generally range from a continued reliance on state recycling programs to the establishment of federal standards via legislation. The first approach provides the greatest degree of flexibility to states, but does not address stakeholder concerns that the state-by-state approach is a compliance burden or will leave some states without electronics recycling programs. Moreover, EPA does not have a plan for coordinating its efforts with state recycling programs or articulating how EPA's partnership programs can best assist stakeholders to achieve the environmentally sound management of used electronics. Under the second approach, a primary policy issue is the degree to which federal standards would allow for stricter state standards, thereby providing states with flexibility but also potentially worsening the compliance burden from the standpoint of regulated entities. As a component of any approach, a greater federal regulatory role over exports could address limitations on the authority of states to regulate exports. 
GAO previously recommended that EPA submit to Congress a legislative proposal for ratification of the Basel Convention, a multilateral environmental agreement that aims to protect against the adverse effects resulting from transboundary movements of hazardous waste. EPA officials told GAO that the agency had developed a legislative proposal under previous administrations but had not finalized a proposal with other federal agencies. GAO recommends that the Administrator, EPA, (1) examine how EPA's partnership programs could be improved to contribute more effectively to used electronics management and (2) work with other federal agencies to finalize a legislative proposal on ratification of the Basel Convention for congressional consideration. EPA agreed with the recommendations. |
In 2004, DOJ estimated that American Indians experience rates of violent crime that are far higher than those of most other racial and ethnic groups in the United States. For example, DOJ estimated that across the United States, the annual average violent crime rate among American Indians was twice as high as that of African Americans, 2½ times as high as that of whites, and 4½ times as high as that of Asians. Also, domestic and sexual violence against American Indian women is among the most critical public safety challenges in Indian country, where, in some tribal communities, according to a study commissioned by DOJ, American Indian women face murder rates that are more than 10 times the national average. Oftentimes, alcohol and drug use play a significant role in violent crimes in Indian country. According to DOJ, American Indian victims reported alcohol use by 62 percent of offenders compared to 42 percent for all races. Tribal or BIA law enforcement officers are often among the first responders to crimes on Indian reservations; however, law enforcement resources are scarce. BIA estimates that there are fewer than 3,000 tribal and BIA law enforcement officers to patrol more than 56 million acres of Indian country. According to a DOJ study, the ratio of law enforcement officers to residents in Indian country is far less than in non-tribal areas. In the study, researchers estimated that there are fewer than 2 officers per 1,000 residents in Indian country compared to a range of 3.9 to 6.6 officers per 1,000 residents in non-tribal areas such as Detroit, Michigan, and Washington, D.C. The challenge of limited law enforcement resources is exacerbated by the geographic isolation or vast size of many reservations. In some instances, officers may need to travel hundreds of miles to reach a crime scene.
For example, the Pine Ridge Indian Reservation in South Dakota has about 88 sworn tribal officers to serve 47,000 residents across 3,466 square miles, which equates to a ratio of 1 officer per 39 square miles of land, according to BIA. In total, there are 565 federally recognized tribes; each has unique public safety challenges based on different cultures, economic conditions, and geographic location, among other factors. These factors make it challenging to implement a uniform solution to address the public safety challenges confronting Indian country. Nonetheless, tribal justice systems are considered to be the most appropriate institutions for maintaining law and order in Indian country. Generally, tribal courts have adopted federal and state court models; however, tribal courts also strive to maintain traditional systems of adjudication such as peacemaking or sentencing circles. Law enforcement, courts, and detention/correction programs are key components of the tribal justice system that is intended to protect tribal communities; however, each part of the system faces varied challenges in Indian country. Both shortcomings and successes in one area may exacerbate problems in another. For example, a law enforcement initiative designed to increase police presence on a reservation could result in increased arrests, thereby overwhelming a tribal court’s caseload or an overcrowded detention facility. The exercise of criminal jurisdiction in Indian country depends on several factors, including the nature of the crime, the status of the alleged offender and victim—that is, whether they are Indian or not—and whether jurisdiction has been conferred on a particular entity by, for example, federal treaty or statute.
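As a quick check of the officer-coverage figures cited above, the per-resident and per-square-mile ratios work out as follows. This is a minimal illustrative sketch using only the BIA numbers quoted in the text (88 officers, 47,000 residents, 3,466 square miles):

```python
# Pine Ridge figures as cited from BIA in the text above.
officers = 88
residents = 47_000
square_miles = 3_466

officers_per_1000 = officers / residents * 1_000   # officers per 1,000 residents
sq_miles_per_officer = square_miles / officers     # coverage area per officer

print(f"{officers_per_1000:.2f} officers per 1,000 residents")   # about 1.87, i.e., fewer than 2
print(f"1 officer per {sq_miles_per_officer:.0f} square miles")  # about 39
```

Both results match the ratios reported in the text: fewer than 2 officers per 1,000 residents, and roughly 1 officer per 39 square miles.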
As a general principle, the federal government recognizes Indian tribes as “distinct, independent political communities” that possess powers of self-government to regulate their “internal and social relations,” which includes enacting substantive law over internal matters and enforcing that law in their own forums. The federal government, however, has plenary and exclusive authority to regulate or modify the powers of self-government that tribes otherwise possess, and has exercised this authority to establish an intricate web of jurisdiction over crime in Indian country. The General Crimes Act, the Major Crimes Act, and Public Law 280, which are broadly summarized in table 1, are the three federal laws central to the exercise of criminal jurisdiction in Indian country. These laws as well as provisions of the Indian Civil Rights Act related to tribal prosecutions are discussed more fully in appendix II. The exercise of criminal jurisdiction by state governments in Indian country is generally limited to two instances, both predicated on the offense occurring within the borders of the state—where both the alleged offender and victim are non-Indian, or where a federal statute confers, or authorizes, a state to assume criminal jurisdiction over Indians in Indian country. Otherwise, only the federal and tribal governments have jurisdiction. Where both parties to the crime are Indian, the tribe generally has exclusive jurisdiction for misdemeanor-level offenses, but its jurisdiction runs concurrent with the federal government for felony-level offenses. Where the alleged offender is Indian but the victim is non-Indian, tribal and federal jurisdiction is generally concurrent. Finally, federal jurisdiction is exclusive where the alleged offender is non-Indian and the victim is Indian. Table 2 summarizes aspects of federal, state, and tribal jurisdiction over crimes committed in Indian country. 
DOI is one of two key federal agencies that have a responsibility to provide public safety in Indian country. Within DOI, BIA is assigned responsibility to support tribes in their efforts to ensure public safety and administer justice within their reservations as well as to provide related services directly or through contracts, grants, or compacts to 565 federally recognized tribes with a service population of about 1.6 million Indians across the United States. To that end, BIA’s Office of Justice Services manages law enforcement, detention, and tribal court programs. Specifically, within BIA’s Office of Justice Services, the Division of Law Enforcement supports 191 tribal law enforcement agencies and the Division of Corrections supports 91 tribal detention programs. About 90 BIA special agents are responsible for investigating crimes that involve violations of federal and tribal law that are committed in Indian country including crimes such as murder, manslaughter, child sexual abuse, burglary, and production, sale, or distribution of illegal drugs, among other criminal offenses. Following completion of an investigation, BIA special agents will refer the investigation to the USAO for prosecution. BIA reported that it distributed approximately $260 million of its fiscal year 2010 appropriation among tribal law enforcement and detention programs. Additionally, BIA reported that it funded maintenance and repair projects at four tribal detention centers totaling $6.5 million from amounts appropriated under the American Recovery and Reinvestment Act of 2009 (Recovery Act). Within BIA’s Office of Justice Services, the Division of Tribal Justice Support for Courts works with tribes to establish and maintain tribal judicial systems. This includes conducting assessments of tribal courts and providing training and technical assistance on a range of topics including establishing or updating law and order codes and implementing strategies to collect and track caseload data. 
BIA reported that it distributed $24.5 million to support tribal court initiatives in fiscal year 2010. Figure 1 depicts the key DOI entities and their respective responsibilities related to supporting tribal justice systems. DOJ also plays a significant role in helping tribes maintain law and order in Indian country and DOJ officials have stated that the department has a duty to help tribes confront the dire public safety challenges in tribal communities. Within DOJ, responsibility for supporting tribal justice systems falls to multiple components, including the FBI, which investigates crimes; the U.S. Attorneys’ Offices, which prosecute crimes in Indian country; and the Office of Justice Programs, which provides grant funding, training, and technical assistance to federally recognized tribes to enhance the capacity of tribal courts, among other tribal justice programs. Figure 2 depicts the key DOJ entities and their respective responsibilities related to supporting tribal justice systems. The FBI works with tribal and BIA police and BIA criminal investigators to investigate crime in Indian country. Currently, the FBI dedicates more than 100 FBI special agents from approximately 16 field offices to investigate cases on over 200 reservations, nationwide. According to the FBI, its role varies from reservation to reservation, but generally the agency investigates crimes such as murder, child sexual abuse, violent assaults, and drug trafficking, among other criminal offenses. FBI officials explained that approximately 75 percent of the crimes it investigates in Indian country include death investigations, physical and sexual abuse of a child, and violent felony assaults such as domestic violence and rape. 
Similar to BIA criminal investigators, FBI special agents refer criminal investigations to the USAO for prosecution; however, FBI officials explained that FBI agents may elect not to refer investigations that, pursuant to supervisory review, lack sufficient evidence of a federal crime or sufficient evidence for prosecution. Under the direction of the Attorney General, the USAO may prosecute crimes in Indian country where federal jurisdiction exists. Of the 94 judicial districts located throughout the United States and its territories, 44 districts contain Indian country. According to DOJ, approximately 25 percent of all violent crime cases opened each year by district USAOs nationwide occur in Indian country. In 2010, DOJ named public safety in Indian country as a top priority for the department. To that end, in January 2010, each USAO with Indian country jurisdiction was directed to develop operational plans that outline the efforts the office will take to address public safety challenges facing tribes within its district—particularly violence against women. The Bureau of Justice Assistance (BJA) within OJP is one of several DOJ components that provide grant funding, training, and technical assistance designed to enhance and support tribal government’s efforts to reduce crime and improve the function of criminal justice in Indian country. For example, BJA awards grant funding to tribes for the planning, construction, and renovation of correctional facilities. In fiscal year 2010, BJA awarded 25 grants to tribes totaling about $9 million to support tribal correctional facilities. Further, in fiscal year 2010, BJA awarded $220 million in grant funding provided through the Recovery Act for 20 construction and renovation projects at correctional facilities on tribal lands. 
Additionally, BJA administers the Tribal Courts Assistance Program—a grant program—which is intended to help federally recognized tribes develop and enhance the operation of tribal justice systems, which may include activities such as training tribal justice staff; planning new, or enhancing existing, programs such as peacemaking circles and wellness courts; and supporting alternative dispute resolution methods. In fiscal year 2010, BJA awarded 48 grants totaling $17 million to tribes to establish new or enhance existing tribal court functions. In its role as a policy and legal advisor regarding Indian country matters within DOJ, the Office of Tribal Justice facilitates coordination among DOJ components working on Indian issues. Additionally, the office functions as the primary point of contact for tribal governments. All 12 tribes we visited reported challenges that have made it difficult for them to adjudicate crime in Indian country, including (1) limitations on criminal jurisdiction and sentencing authority, (2) delays in receiving timely notification about the status of investigations and prosecutions from federal entities, (3) lack of adequate detention space for offenders convicted in tribal court, (4) perceived encroachment upon judicial independence by other branches of the tribal government, and (5) limited resources for day-to-day court operations. Various ongoing and planned federal efforts exist to help tribes effectively adjudicate crimes within their jurisdiction. For example, the Tribal Law and Order Act (TLOA), which was enacted in July 2010, attempts to clarify roles and responsibilities, increase coordination and communication, and empower tribes with the authorities necessary to reduce the prevalence of crime in Indian country. Tribal courts only have jurisdiction to prosecute crimes committed by Indian offenders in Indian country, and their ability to effectively promote public safety and justice is curtailed by their limited sentencing authority and jurisdiction.
As a result, even where tribal jurisdiction exists, tribes will often rely on the federal government to investigate and prosecute more serious offenses, such as homicide and felony-level assault, because a successful federal prosecution could result in a lengthier sentence and better ensure justice for victims of crime in Indian country. First, federal law limits the general sentencing authority of tribal courts to a maximum term of imprisonment not to exceed 1 year per offense. Officials from 6 of the 12 tribes we visited told us that the 1-year limit on prison sentences did not serve as an effective deterrent against criminal activity and may have contributed to the high levels of crime and repeat offenders in Indian country. Second, tribes do not have any jurisdiction to prosecute non-Indian criminal offenders in Indian country, including those who commit crimes of domestic violence, assault, and murder. Therefore, tribes must rely on the USAO to prosecute non-Indian offenders. For example, in instances where a non-Indian abuses an Indian spouse, the tribe does not have the jurisdiction to prosecute the offender, and unless the USAO prosecutes the case, the non-Indian offender will not be prosecuted for the domestic violence offense. The rate at which non-Indians commit crime on the reservations we visited is unclear, as the tribes were not able to provide related crime data. Officials from 6 of the tribes we visited noted that non-Indians may be more likely to commit crimes in Indian country because they are aware that tribes lack criminal jurisdiction over non-Indians and that their criminal activity may not draw the attention of federal prosecutors.
For example, an official from a South Dakota tribe that we visited told us that the tribe has experienced problems with MS-13 and Mexican Mafia gangs who commit illegal activities such as distribution or sale of illegal drugs on the reservation because, as the official explained, they presume that federal prosecutors may be more inclined to focus their resources on higher-volume drug cases. Further, in 2006, the U.S. Attorney for the Wyoming district testified about a specific instance where a Mexican drug trafficker devised a business plan to sell methamphetamine at several Indian reservations in Nebraska, Wyoming, and South Dakota that first began with developing relationships with American Indian women on these reservations who would then help to recruit customers. According to a special agent involved in the case, the drug trafficker established drug trafficking operations to exploit jurisdictional loopholes believing that he could operate with impunity. According to a tribal justice official from a New Mexico pueblo, small-scale drug trafficking operations in Indian country can have an equally devastating effect on tribes as the effects of large-scale operations in large cities; therefore, if the federal government does not respond to small-scale operations in Indian country, the success of such operations may contribute to the sense of lawlessness in Indian country. When we asked tribes that we visited about how they decide to prosecute serious crimes over which they do have jurisdiction, 9 of the 12 tribes we visited noted that they may exercise concurrent jurisdiction and prosecute those crimes in tribal court. Some officials reported they would rather preserve their tribe’s limited resources, recognizing that sentences considered more commensurate with the crime may only result from federal prosecution. 
Nonetheless, 5 of the 12 tribes we visited in Arizona, New Mexico, North Dakota, and South Dakota perceive that the district USAOs decline to prosecute the majority of Indian country matters that are referred to them. Officials from the tribes we visited expressed concerns about the rate at which USAOs decline to prosecute Indian country crimes and noted that a high number of declinations sends a signal to crime victims and criminals that there is no justice or accountability. In December 2010, we reported that approximately 10,000 Indian country criminal matters were referred to USAOs from fiscal year 2005 through 2009. During that period, USAOs declined to prosecute 50 percent of the approximately 9,000 matters that they resolved, while they had not yet decided whether to prosecute or decline the remaining 1,000 matters. For criminal matters referred to USAOs, “weak or insufficient admissible evidence” followed by “no federal offense evident” were among the most frequently cited reasons associated with declinations based on available data in DOJ’s case management system, Legal Information Office Network System. Eight of the 12 tribes we visited stated that they rely on the federal government to investigate and prosecute serious crimes; however, officials from the tribes we visited reported that their tribe had experienced difficulties in obtaining information from federal entities about the status of criminal investigations. For example: Officials from 5 of the 12 tribes we visited told us that oftentimes they did not know whether criminal investigators—most commonly, BIA or FBI—had referred the criminal investigation to the USAO for prosecution. Officials from the tribes we visited expressed concern about the lack of timely notification from local USAOs about decisions to prosecute a criminal investigation.
Tribal justice officials from 4 of the 12 tribes we visited noted that they have to initiate contact with their district USAOs to get information about criminal matters being considered for prosecution and that only upon request will the USAO provide verbal or written notification of the matters it declines to prosecute; however, little detail is provided about the reasons for the declination. We examined a declination letter that was sent to one of the tribes we visited and found that the letter stated that the matter was being referred back to the tribe for prosecution in tribal court, but no additional information was provided about the reason for the declination decision. The Chief Prosecutor from one of the pueblos we visited noted that it can be difficult for the USAO to share details about a criminal matter for fear that doing so may violate confidentiality agreements or impair prosecutors' ability to successfully prosecute should the investigation be reopened at a later date. However, according to tribal officials, it is helpful to understand the reason for declining to prosecute a criminal matter so that tribal prosecutors can better determine whether to expend their resources to prosecute the matter in tribal court. Officials from 6 of the 12 tribes we visited told us that when criminal matters are declined, federal entities generally do not share evidence and other pertinent information that would allow the tribe to build its case for prosecution in tribal court. This can be especially challenging for prosecuting offenses such as sexual assault, where DNA evidence collected cannot be replicated should the tribe conduct its own investigation following notification of a declination, according to officials. When the federal government decides not to pursue a prosecution, a tribe may decide to prosecute such a case provided that any tribal statute of limitations has not expired. 
Officials from 6 of the 12 tribes that we visited noted that it is not uncommon for the tribe to receive notification of USAO declination letters after the tribe's statute of limitations, which ranges from 1 to 3 years, has expired. In addition to affecting the tribe's ability to administer justice in a timely manner—that is, before the statute of limitations expires—officials also noted that the absence of investigation or declination information makes it difficult for tribal justice officials to successfully prosecute a criminal matter in tribal court and assure crime victims that every effort is being made to prosecute the offender. Officials from 6 of the 12 tribes we visited reported that they do not have adequate detention space to house offenders convicted in tribal courts and may face overcrowding at tribal detention facilities. Similarly, BIA and DOJ have acknowledged that detention space in Indian country is inadequate. One of the New Mexico pueblos we visited noted that its detention facility has a maximum capacity of 43 inmates; however, as of October 2010, more than 90 inmates were imprisoned at the facility. In some instances, tribal courts are forced to make difficult decisions such as (1) foregoing sentencing a convicted offender to prison, (2) releasing inmates to make room for another offender who is considered to be a greater danger to the community, and (3) contracting with state or tribal detention facilities to house convicted offenders, which can be costly. According to an official from one of the New Mexico pueblos we visited, at times, when the pueblo has reached its detention capacity—up to three inmates—the pueblo has had to forego sentencing convicted juvenile or adult offenders to prison because using a nearby tribal facility to house its inmates would pose an economic hardship for the pueblo. 
Also, of the 12 tribes we visited, 5 noted that using detention facilities at another location is not always a viable option for housing offenders. Housing offenders in another entity's detention facility can be costly for the tribe, which has to pay to transport inmates between the tribal court of jurisdiction and the detention facility for arraignments, trial, and other appearances. Generally, the tribes we visited have incorporated practices that help to foster and maintain judicial independence—that is, the ability of the tribal courts to function without any undue political or ideological influence from the tribal government. Various factors such as a tribe's approach to removing judges and intervening on behalf of tribal members during an ongoing criminal matter could affect internal and external perceptions of a tribal court's independence. The manner in which some tribes remove judges serves as an example of tribes' efforts to foster and maintain judicial independence. For example, at 11 of the 12 tribes we visited, a tribal judge can only be removed from office for cause following a majority vote by the Tribal Council. In another instance, the Chief Judge at one of the tribes we visited explained that tribal members will often approach the Tribal Council to intervene when members are not satisfied with the tribal court's decision. The Tribal Council subsequently issued several reminders to tribal members that unsatisfied parties to a criminal matter can appeal the trial court's decisions in the tribe's appellate court. Decisions of this tribe's appellate court, however, are final and not subject to review by the Tribal Council, thereby upholding and preserving the decisions and independence of the tribal court. The constitutions of 4 of the 12 tribes we visited state that, upon appointment, judges' salaries cannot be reduced while they serve in office, thereby helping to protect the independence of the judiciary. 
Additionally, officials from the tribes we visited reported that certain activities may undermine a tribal court's independence. For example, officials from 5 of the 12 tribes we visited noted that the tribal court is viewed by tribal members as a tribal program rather than as a separate and autonomous branch of government. For instance, according to officials at one of the tribes we visited, the constitution was amended in 2008 to articulate the independence of the tribal court from the legislative and executive branches of the tribal government. However, according to the officials from this tribe, Tribal Council members continue to approach criminal court judges to inquire about the status of ongoing cases, and Tribal Council members have intervened on behalf of tribal members to discuss reversing the court's decisions on certain criminal matters. Such actions potentially add to the perception that the court is not autonomous and is subject to the rule of the executive or legislative branch, which, in turn, can threaten the integrity of the tribal judiciary and create the perception of unfairness. Figure 3 shows a sign at a tribal court designed to serve as a measure to prevent people from engaging in ex parte communications. Additionally, the manner in which tribal governments distribute federal funding to tribal courts may limit courts' control of their budgets. According to a BIA official and judges from one of the tribes we visited, the placement of the tribal court within the tribe's overall budget structure—that is, not separate from other tribal programs that BIA funds—could contribute to the perception that the tribal court has little to no autonomy and separation from other tribal programs. Officials at the 12 tribes we visited told us they face various resource limitations, resulting in reliance on federal funding, staffing shortages, and limited capacity to conduct jury trials. 
Tribes We Visited Reported They Rely on Federal Funding to Operate Tribal Courts Regardless of Their Size or Economic Condition

We found that all 12 of the tribes we visited rely fully or partially on federal funding to operate their court systems, regardless of the size of the population the tribal court serves, its geographic location, or economic conditions. For example, one of the tribes we visited relies on federal funding for aspects of its court system even though federal funding generally accounts for less than 10 percent of the court system's total budget, according to a senior tribal court official. This official explained that federal funding is barely sufficient to pay salaries for positions such as court clerks. Generally, of the 12 tribes we visited, the tribal government provided partial funding to 10 of the tribal courts; the remaining 2 were funded solely by the federal government. For further information about the funding levels for each of the 12 tribes we visited, see appendix III. Further, officials at 11 of the 12 tribes we visited noted that their tribal courts' budgets are inadequate to properly carry out the duties of the court; therefore, the tribes often have to make tradeoffs, which may include not hiring key staff such as probation officers or not providing key services such as alcohol treatment programs. According to BIA, historically, federal funding for tribal courts has been less than what tribes deemed necessary to meet the needs of their judicial systems. While the tribal courts we visited collect a range of fees and fines, which can be an additional source of operating revenue, 6 of the 12 tribes noted that the fees and fines the court collects are to be returned to the tribal government's general fund rather than retained for use by the tribal court. 
Where possible, to help fill the courts' budget shortfalls, officials at 3 of the 12 tribes we visited told us that they have sought funding from other sources, such as state grants, or partnered with other tribal programs to provide treatment services for parties appearing before their courts.

According to Tribes We Visited, Lack of Funding Affects Tribal Courts' Ability to Maintain Adequate Staffing Levels and Provide Training to Court Personnel

Officials at 7 of the 12 tribes we visited told us that their tribal courts are understaffed and that funding is often insufficient to employ personnel in key positions such as public defenders, prosecutors, and probation officers, among other positions. Additionally, officials at three of the New Mexico pueblos we visited told us that law enforcement officers also served as prosecutors despite not being trained in the practice of law and lacking sufficient training to serve as prosecutors. The Chief Judges at two of the New Mexico pueblos told us that the pueblos do not have any other alternatives due to the lack of funding. For further information about the staffing levels at each of the 12 tribes we visited, see appendix III. Tribal justice officials also stated that their tribal courts face various challenges in recruiting and retaining qualified judicial personnel, including (1) the inability to pay competitive salaries, (2) housing shortages on the reservation, and (3) the rural and remote geographic location of the reservation, among other things. For example, a tribal justice official from one of the South Dakota tribes we visited noted that the tribe is often forced to go outside its member population to hire judges and attorneys because tribal members often lack education beyond the eighth grade; however, the tribe often faces difficulties in paying competitive salaries to hire legally trained non-Indians, who often command salaries that are higher than the tribe can afford. 
Additionally, tribal justice officials noted that while some tribal members do pursue higher education, they do not always return to work in tribal communities, thereby creating a shortage of available talent to draw from within the tribe's community. Further, officials from two of the tribes we visited noted that they may not be able to attract qualified applicants because of the reservation's rural location. Even if tribes overcome recruitment challenges, tribal justice officials noted that they may also face difficulties in retaining personnel—particularly non-Indians—because these candidates' marketability often increases after gaining experience in Indian country and they are able to pursue opportunities that meet their compensation and quality-of-life needs, such as higher salaries and improved housing. Four of the 12 tribes we visited noted that the courts often use DOJ grant funds to pay salaries for various positions without the benefit of a sustainable funding source once the grant funds expire. For example, one of the South Dakota tribes we visited used grant funds to hire a compliance officer, probation officer, and process server to focus exclusively on domestic violence cases, which were occurring at a high rate on the reservation. Officials explained that they saw a decrease in reported cases of domestic violence during this time; however, once the grant funds expired, they were no longer able to maintain these positions and perceived an increase in domestic violence cases. Additionally, lack of funding hinders tribes' ability to provide personnel with training opportunities to obtain new skills or enhance existing ones. For example, at one of the North Dakota tribes we visited, court personnel explained that court clerks needed training to enhance their knowledge of scheduling court proceedings, developing case and records management systems, and familiarizing themselves with criminal procedures, among other things. 
Additionally, because of the increase in the number of cases involving illegal drugs, one of the judges we met with also expressed a need for training to effectively manage criminal proceedings that involve the use of methamphetamines. In particular, 8 of the 12 tribes we visited noted that they face difficulties in acquiring funds to register personnel for training as well as to pay for related expenses such as mileage reimbursement or other transportation costs, lodging, and per diem. The Chief Judge from one of the tribes we visited noted that the tribe has been able to acquire scholarships from various training providers to help absorb full or partial costs for certain training. Further, training providers such as the National Judicial College have begun to provide web-based training, which, according to officials, is more cost-effective.

Tribes We Visited Reported Having Limited Capacity to Conduct Jury Trials

Upon request, any defendant in tribal court accused of an offense punishable by imprisonment is entitled to a trial by jury of not less than six persons. However, officials from 7 of the 12 tribes we visited reported that their tribal courts have limited capacity to conduct jury trials due to limited courtroom space, funding, and transportation. For example, the courtroom for one of the New Mexico pueblos that we visited does not have adequate space to seat a six-person jury and, according to officials, there is not another facility that can be used to set up a jury box. Additionally, tribal officials at 2 of the 12 tribes we visited stated that their courts lack funding to pay tribal members a per diem for jury duty. Further, potential jurors' lack of access to personal or public transportation can hinder the courts' ability to seat a jury. 
For example, officials from two of the Arizona tribes we visited explained that there is no public transportation on the reservations, and consequently it is difficult for tribal members without access to personal transportation to travel to court. Various federal efforts exist that could help to address some of the challenges that tribes face in effectively adjudicating crime in Indian country. For example, TLOA (1) authorizes tribal courts to impose a term of imprisonment in excess of 1 year on certain convicted defendants; (2) authorizes and encourages USAOs to appoint Special Assistant U.S. Attorneys (SAUSA), including tribal prosecutors, to assist in prosecuting federal offenses committed in Indian country; (3) requires that federal entities coordinate with appropriate tribal law enforcement and justice officials on the status of criminal investigations terminated without referral or declined for prosecution; and (4) requires BOP to establish a pilot program to house, in federal prison, Indian offenders convicted of a violent crime in tribal court and sentenced to 2 or more years of imprisonment. Additionally, to help address issues regarding judicial independence, BIA has ongoing and planned training to help increase tribes' awareness about the significance of judicial independence. Many of these initiatives directly resulted from the enactment of TLOA in July 2010, and at this time, these initiatives are in the early stages of implementation. As a result, it is too early to tell the extent to which these initiatives are helping to address the challenges that tribes face in effectively adjudicating crime in Indian country. 
Various federal efforts are underway that provide additional resources to assist tribes in the investigation and prosecution of crime in Indian country, including (1) additional federal prosecutors, (2) authorizing tribal courts to impose longer prison sentences on certain convicted defendants, (3) mandating changes to the program that authorizes BIA to enter into agreements to aid in law enforcement in Indian country, and (4) affording tribal prosecutors opportunities to become Special Assistant U.S. Attorneys to assist in prosecuting federal offenses committed in Indian country. First, to help address the high levels of violent crime in Indian country, in May 2010, DOJ announced the addition of 30 Assistant U.S. Attorneys (AUSA) to serve as tribal liaisons in 21 USAO district offices that contain Indian country, including the four states that we visited as part of our work—Arizona, New Mexico, North Dakota, and South Dakota. According to DOJ, these additional resources will help the department work with its tribal law enforcement partners to improve public safety in Indian country. DOJ also allocated 3 additional AUSAs to help support its Community Prosecution Pilot Project, which it launched at two of the tribes we visited—the portion of Navajo Nation within New Mexico and the Oglala Sioux Tribe in South Dakota. Under this pilot project, the AUSAs will be assigned to work at their designated reservation on a regular basis and will work in collaboration with the tribe to develop strategies that are tailored to meet the public safety challenges facing the tribe. Second, TLOA authorizes tribal courts to imprison convicted offenders for up to 3 years if the defendant has been previously convicted of the same or a comparable crime in any jurisdiction (including tribal) within the United States or is being prosecuted for an offense comparable to an offense that would be punishable by more than 1 year if prosecuted in state or federal court. 
To impose an enhanced sentence, the defendant must be afforded the right to effective assistance of counsel and, if indigent, the assistance of a licensed attorney at the tribe's expense; a licensed judge with sufficient legal training must preside over the proceeding; prior to charging the defendant, the tribal government's criminal laws and rules of evidence and criminal procedure must be made publicly available; and the tribal court must maintain a record of the criminal proceedings. Generally, tribal justice officials from 9 of the 12 tribes we visited stated that they welcome the new sentencing authority, but officials from 2 of the tribes noted that they would likely use the new authority on a case-by-case basis because they lacked the infrastructure to fully meet the requisite conditions. For example, the Chief Judge from one of the New Mexico pueblos we visited noted that rather than hiring a full-time public defender, the pueblo is considering hiring an attorney on contract to be used on a case-by-case basis when the enhanced sentencing authority may be exercised. Third, TLOA mandates changes to the Special Law Enforcement Commission (SLEC) program, which authorizes BIA to enter into agreements for the use of personnel or facilities of federal, tribal, state, or other government agencies to aid in the enforcement of federal or, with the tribe's consent, tribal law in Indian country. Specifically, within 180 days of enactment, the Secretary of the Interior shall develop a plan to enhance the certification and provision of special law enforcement commissions to tribal law enforcement officials, among others, that includes regional training sessions held at least biannually in Indian country to educate and certify candidates for the SLEC. The Secretary of the Interior, in consultation with tribes and tribal law enforcement agencies, must also develop minimum requirements to be included in SLEC agreements. 
Under the SLEC program, administered by BIA, tribal police may be deputized as federal law enforcement officers, which affords them the authorities and protections available to federal law enforcement officers. According to BIA, given the potential difficulties arresting officers face in determining whether a victim or offender is an Indian or whether the alleged crime occurred in Indian country (for purposes of determining jurisdiction at the time of arrest), a tribal officer deputized to enforce federal law is not charged with determining the appropriate jurisdiction for filing charges; rather, this is to be determined by the prosecutor or court to which the arresting officer delivers the offender. Lastly, among other provisions, TLOA explicitly authorizes and encourages the appointment of qualified attorneys, including tribal prosecutors, as Special Assistant U.S. Attorneys (SAUSA) to assist in the prosecution of federal offenses and the administration of justice in Indian country. If appointed as a SAUSA, a tribal prosecutor may pursue in federal court an Indian country criminal matter with federal jurisdiction that, if successful, could result in the convicted defendant receiving a sentence greater than if the matter had been prosecuted in tribal court. According to the Associate Attorney General, many tribal prosecutors have valuable experience and expertise that DOJ can draw on to prosecute crime and enforce federal criminal law in Indian country. Further, tribal prosecutors at 4 of the 12 tribes we visited are in varying stages of obtaining SAUSA credentials. 
The Chief Prosecutor at a New Mexico pueblo who is in the process of obtaining a SAUSA credential cited various benefits arising from a SAUSA appointment, including increased (1) prosecution of criminal cases that involve domestic violence and child sexual abuse; (2) prosecution of misdemeanor-level offenses committed by non-Indians against Indians that occur in Indian country; (3) ability to directly present criminal investigations to the district USAO rather than solely relying on BIA criminal investigators to do so; and (4) cooperation from tribal crime victims and witnesses, who may be more forthcoming with someone closely affiliated with the pueblo than with federal investigators or prosecutors, thereby helping to facilitate a more successful investigation and prosecution of a federal crime. TLOA provides that federal investigators and prosecutors must coordinate with tribes to communicate the status of investigations and prosecutions relating to alleged criminal offenses in Indian country. More specifically, if a federal entity terminates an investigation, or if a USAO declines to prosecute or terminates a prosecution of an alleged violation of federal criminal law in Indian country, it must coordinate with the appropriate tribal officials regarding the status of the investigation and the use of evidence relevant to the case in a tribal court with authority over the crime alleged. Individually and collectively, these requirements could better enable tribes to prosecute criminal matters in tribal court within their statutes of limitations. Although TLOA does not prescribe how coordination is to occur between federal entities—such as FBI and BIA criminal investigators—and tribes, DOJ directed relevant USAOs to work with tribes to establish protocols for coordinating with them. For example, the USAO for the District of Arizona, in consultation with Arizona tribes, has established protocols to guide its coordination with tribes. 
Specifically, within 30 days of a referral of a criminal investigation for prosecution, the Arizona district USAO plans to notify the relevant tribe in writing if the office is declining to prosecute the matter. Officials from one of the New Mexico pueblos we visited explained that they would like to have an entrance conference with the USAO for the District of New Mexico on each criminal investigation referred to the USAO for which the tribe has concurrent jurisdiction and an exit conference to discuss the USAO's reasons for declining to prosecute the crime. Tribal officials explained that the exit conference could serve to educate the tribe about what it can do to better prepare an investigation for referral to the USAO. According to DOJ, each USAO and FBI field office will make efforts to reach agreements with tribes in their jurisdiction about communicating the status of investigations and prosecutions based on the unique needs of the tribe. Pursuant to TLOA, on November 26, 2010, the Bureau of Prisons (BOP) launched a 4-year pilot program to house, at the federal government's expense, up to 100 Indian offenders convicted of violent crimes in tribal courts and sentenced to terms of imprisonment of 2 or more years. DOJ considers the pilot program to be an important step in addressing violent offenders and underresourced correctional facilities in Indian country. BOP's goal is to reduce future criminal activity of Indian offenders by providing them with access to a range of programs, such as vocational training and substance abuse treatment programs, that are designed to help offenders successfully reenter their communities following release from prison. It is unlikely that 5 of the 12 tribes we visited will immediately begin participating in the pilot because they are not yet positioned to fully meet the conditions that are required to imprison Indian offenders convicted in tribal court for 2 or more years. 
Additionally, tribal officials expressed concern about placing convicted Indian offenders in federal prison because tribal members would likely oppose sending tribal members to locations that are not in close proximity to the reservation, making it difficult for family members to visit and for the convicted offender to maintain a connection with the tribal community—a key aspect of tribes' culture and values. While tribes expressed concern about the placement of tribal members in federal prison, officials from 2 of the tribes we visited stated that access to federal programs such as substance abuse and mental health treatment programs and job training would be a major benefit that offenders would likely not have while imprisoned in tribal detention facilities. More broadly, TLOA requires that BIA, in coordination with DOJ and in consultation with tribal leaders and law enforcement and correctional officers, submit a long-term plan to address incarceration in Indian country to Congress by July 29, 2011. The long-term plan should also describe proposed activities for constructing, operating, and maintaining juvenile and adult detention facilities in Indian country; constructing federal detention facilities in Indian country; contracting with state and local detention centers, upon the tribe's approval; and developing alternatives to incarceration in cooperation with tribal court systems. BIA and DOJ officials noted that they have begun to conduct consultations with tribal entities to address incarceration in Indian country. BIA has taken steps to help increase awareness about the importance and significance of judicial independence in tribal communities. 
For example, officials from one of the tribes we visited told us that, at the request of the tribal court, the BIA Superintendent is to conduct a workshop for tribal leaders and community members to, among other things, provide instruction on how interference with the tribal court’s decisions can threaten the judiciary’s ability to provide equitable adjudication of crimes. Further, BIA’s Division of Tribal Justice Support for Courts has conducted similar workshops in the past and expects to do so again in fiscal year 2011. According to BIA and DOJ officials, the two agencies have begun to establish interagency coordinating bodies intended to facilitate the agencies’ efforts to coordinate on tribal court and detention initiatives. Officials noted that because Indian country issues are a top priority across the federal government, federal departments and agencies are focused on ensuring that, where appropriate, they work together to address the needs of Indian tribes. For example, when DOI and DOJ developed tribal consultation plans for their respective agencies in 2010, the two agencies cited interagency coordination as a key element to meeting the tribes’ needs. According to DOJ, interagency coordination is essential to holding stakeholders accountable and achieving success. Similarly, DOI acknowledged the importance of collaborating and coordinating with its federal partners regarding issues that affect tribes. BIA and DOJ officials told us that communication between the two agencies has increased and their staff now know whom to call about various tribal justice issues, which they commented is a significant improvement over prior years when there was little to no communication. For example, DOJ has begun to consult BIA about its future plans to fund the construction of tribal correctional facilities, which has helped to resolve past inefficiencies. 
BIA officials told us that they need to know which tribes DOJ plans to award grants to for constructing correctional facilities at least 2 years in advance so that they can adjust their budget and operational plans accordingly in order to fulfill their obligation to staff, operate, and maintain detention facilities. According to BIA, there have been instances where it was unaware of DOJ's plans to award grant funds to tribes to construct tribal detention facilities, which could result in new facilities remaining vacant until BIA is able to secure funding to operate them. DOJ has implemented a process whereby, when tribes apply for DOJ grants to construct correctional facilities, DOJ consults BIA about each applicant's needs, as BIA typically has firsthand knowledge about tribes' needs for a correctional facility and whether the tribe has the infrastructure to support one, among other things. BIA then prioritizes the list of applicants based on its knowledge of the detention needs of the tribes. DOJ officials noted that the decision about which tribes to award grants to rests solely with them; however, they do weigh BIA's input about the tribes' needs for and capacity to utilize a correctional facility when making grant award decisions. To help BIA anticipate future operations and maintenance costs for new tribal correctional facilities, each year DOJ's Bureau of Justice Assistance (BJA) provides BIA with a list of planned correctional facilities that includes each site's location, size, and completion date. BIA officials noted that this level of coordination with DOJ is an improvement over past years, as it helps to facilitate planning and ensure they are prepared to assume responsibility to staff, operate, and maintain tribal detention facilities. 
BIA and BJA also serve on a governmentwide coordinating body, the Planning Alternatives and Correctional Institutions for Indian Country Advisory Committee, which brings together federal stakeholders who play a role in planning detention and correctional programs and facilities in Indian country. The advisory committee is responsible for developing strategic approaches to plan the training and technical assistance that BJA provides to tribes that receive grant funding to construct or renovate juvenile and adult correctional facilities. Specifically, among other things, the agencies work together to plan the training and technical assistance to be delivered to tribes on issues such as controlling and preventing jail overcrowding, controlling the costs of developing and operating detention facilities, developing alternatives to incarceration, and implementing substance abuse and mental health treatment programs at correctional facilities. According to DOJ officials, the advisory committee helps to provide a coordinated federal response that leverages the full scope of agency resources needed to deliver services that meet the tribes' needs. BIA and DOJ officials have committed to working together to help meet the two agencies' shared goal of addressing the criminal justice crisis in Indian country. To that end, in 2009, DOI, through BIA, and DOJ established both department-level and program-level coordinating bodies to increase communication and information exchange between the two agencies. At the department level, the Deputy Attorney General and the Deputy Secretary of the Interior jointly chair a working group that meets quarterly to facilitate governmentwide policymaking on tribal justice issues and coordinate agency activities on a range of tribal justice issues that are designed to help BIA and DOJ achieve their individual and shared goal of improving public safety in Indian country. 
For example, the working group is to oversee BIA and DOJ's efforts to assess tribal correctional and tribal court systems' needs and to develop strategies such as prisoner reentry programs in Indian country. In addition, the working group will oversee the implementation of various provisions included in TLOA, such as assessing the effectiveness of the enhanced sentencing authority that tribal courts may exercise. At the program level, in 2009, BIA and DOJ established task forces to address key issues, including tribal judicial systems and tribal detention. The task forces, which report to the department-level working group, are chaired by senior officials from BIA and DOJ and serve as a forum for BIA and DOJ to, where appropriate, jointly address a range of public safety and justice issues in Indian country. For example, as part of the detention task force, BIA and DOJ officials are now working together, in consultation with tribes, to identify alternatives to incarceration in Indian country. According to BIA and DOJ officials, the task forces' activities are to, among other things, support the activities of the department-level working group. For example, the work conducted by the task forces is intended to help facilitate the two agencies' efforts to develop a long-term plan for submission to Congress in July 2011 that includes proposals on how to address juvenile and adult detention facilities. Although BIA and DOJ have taken action to coordinate their activities, according to officials the agencies' coordination efforts are in the early stages of development, and it is too early to gauge how effective these efforts will be based on six of the eight practices that we have identified for ensuring that collaborating agencies conduct their work in a coordinated manner.
We found that the two agencies have defined a common outcome—improving public safety and justice in Indian country—which is one of the eight practices that we have identified for enhancing and maintaining effective collaboration among federal agencies. In our previous work we have reported that it is a good practice for agencies to have a clearly defined outcome, as doing so can help align specific goals across agencies and help overcome differences in agency missions, cultures, and established ways of doing business. Officials told us that as they work toward defining approaches to achieve their common goal, there could be a need to take a more strategic approach that incorporates the key collaboration practices that we have identified to help achieve sustainable interagency coordination. To that end, BIA officials told us that in January 2011, they expect to deploy a liaison to DOJ's Office of Tribal Justice to help foster ongoing, sustainable collaboration between the two agencies. The BIA liaison is to work with staff from various DOJ components as the two agencies develop and execute coordinated plans to implement various provisions in TLOA regarding tribal detention and tribal courts, among other tribal justice initiatives. To meet their respective responsibilities to support tribal courts, BIA and DOJ provide funding, training, and technical assistance to tribal courts; however, the two agencies do not leverage each other's resources—one of the eight collaboration practices that we have identified—by sharing certain relevant information that could benefit each agency's efforts to enhance the capacity of tribal courts to effectively administer justice in Indian country.
In October 2009, DOJ told the leadership of the Senate Indian Affairs Committee that it was taking action to provide better coordination with DOI to ensure that the two agencies' tribal courts initiatives are coordinated to develop and support tribal courts and help them build the capacity needed to exercise the enhanced sentencing authority proposed for tribes under TLOA. However, when we met with OJP and BIA program officials in October 2010 and November 2010, respectively, they noted that the information sharing and coordination mechanisms that are in place to support tribal detention initiatives have not extended to tribal courts initiatives. For example:

- Since 2005, BIA has commissioned reviews of about 90 tribal court systems that include the collection of data such as court funding and operating budget, training needs for court clerks and judges, and technical assistance needs such as developing and maintaining a complete collection of a tribal criminal code. DOJ officials told us that they were vaguely aware of these court reviews but stated they had never seen the reviews or the accompanying corrective action plans. BIA officials told us that DOJ had never requested the court reviews or corrective action plans and that they had never shared this information with DOJ.

- BIA officials stated that they were aware that DOJ awards competitive grants to tribal courts; however, DOJ does not share information with BIA about which tribal courts have applied for DOJ grants to establish new or enhance existing tribal court systems. BIA officials noted that DOJ could benefit from BIA's insights and firsthand knowledge about the needs of tribal courts, including those tribal courts that BIA has identified as having the greatest need for additional funding.
Further, BIA officials noted that they were unaware of the training and technical assistance that DOJ provides to tribal courts and that there could be unnecessary duplication between the training and technical assistance the two agencies provide, as well as inefficient use of scarce resources. For example, according to BIA, there was an instance where DOJ and BIA provided funding to a tribe to purchase the hardware and software for a case management system, but neither agency consulted the other about the purchase. Ultimately, the tribe did not have any funds to purchase software training and, as a result, never used the system. Sharing information about training and technical assistance could help ensure that BIA and DOJ avoid such situations. DOJ officials stated that they frequently hear concerns from tribes that tribal courts lack the funds needed to operate effectively; however, DOJ does not have direct access to information about the funding that BIA provides to tribal courts. According to DOJ officials, gaining access to BIA's annual funding data could be useful in DOJ's efforts to implement a more strategic approach to meeting the needs of tribal courts. Specifically, officials told us that data on the annual funding to tribal courts could help DOJ first establish a baseline, then conduct a needs assessment to identify overall needs, and then use that information to identify what additional funding, if any, is needed to close the gap between the baseline and the overall resource need. We have previously reported that collaborating agencies are most effective when they look for opportunities to leverage each other's resources, thereby obtaining benefits that may not otherwise be available if the agencies work separately.
Further, Standards for Internal Control in the Federal Government call for agencies to enhance their effectiveness by obtaining information from external stakeholders that may have a significant impact on the agency achieving its goals. Developing mechanisms for identifying and sharing information and resources related to tribal courts could yield potential benefits in terms of leveraging efforts already underway and minimizing the potential for unnecessary duplication in federal agencies' efforts to support tribal courts. Moreover, by sharing information and resources, BIA and DOJ could achieve additional benefits that result from the different levels of expertise and capacity that each agency brings. BIA and DOJ officials acknowledged that the two agencies could benefit from working together to share information and leverage resources to address the needs of tribal courts and stated that they would begin taking steps to do so. Because responsibility for enhancing the capacity of tribal courts is shared between two key federal agencies—DOI and DOJ—effective collaboration is important to operating efficiently and effectively and to producing a greater public benefit than if the agencies acted alone. Although the two agencies each have information regarding tribal courts that could benefit the other, they have not fully shared that information with each other. As a result, they have missed opportunities to share information that could be used to better inform decisions about funding and about the development of training and technical assistance that meets the tribes' needs. Developing mechanisms for better sharing information about tribal courts could help the agencies ensure they are targeting limited federal funds to effectively and efficiently meet the needs of federally recognized tribes.
To maximize the efficiency and effectiveness of each agency's efforts to support tribal courts by increasing interagency coordination and improving information sharing, we recommend that the Attorney General and the Secretary of the Interior direct DOJ's Office of Justice Programs and BIA's Office of Justice Services, respectively, to work together to develop mechanisms, using GAO collaboration practices as a guide, to identify and share information and resources related to tribal courts. We provided a draft of this report to DOI and DOJ for review and comment. The DOI audit liaison stated in an e-mail response received on January 25, 2011, that DOI agreed with the report's findings and concurred with our recommendation; however, DOI did not provide written comments to include in our report. DOJ provided written comments, which are reproduced in appendix IV. DOJ concurred with our recommendation and noted that OJP's Bureau of Justice Assistance has begun discussions with BIA's Office of Justice Services about plans to, among other things, coordinate training activities and share funding information regarding tribal courts. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Attorney General of the United States, the Secretary of the Interior, and appropriate congressional committees. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.
We were asked to review the challenges facing selected tribal justice systems as well as federal agencies' efforts to coordinate their activities to support tribal justice systems. Specifically, we prepared this report to answer the following questions:

1. What challenges do tribes face in adjudicating Indian country crimes, and what federal efforts exist to help address those challenges?
2. To what extent have the Department of the Interior (DOI) and Department of Justice (DOJ) components collaborated with each other to support tribal justice systems?

To identify the challenges facing tribes in adjudicating criminal matters in Indian country and what federal efforts exist to help address those challenges, we met with tribal justice officials such as judges, prosecutors, law enforcement officers, and court administrators from a nonprobability sample of 12 federally recognized tribes in Arizona, New Mexico, North Dakota, and South Dakota. We selected the tribes based on several considerations. First, we identified the U.S. Attorney district offices that received the largest volume of Indian country criminal matters from fiscal years 2004 through 2008, the five most recent years of available data at the time we conducted our selection. We interviewed DOJ officials about the data-entry process, performed electronic testing for obvious errors in accuracy and completeness of the data, and reviewed database documentation to determine that the data were sufficiently reliable for the purpose of our review. Next, we considered a variety of factors, including (1) reservation land size, (2) population, (3) types of tribal court structures, (4) number and type of courts, and (5) number of full-time judicial personnel such as judges and prosecutors. The selected tribes have a range of land and population size, court size, and tribal court structures such as traditional and modern court systems.
We also obtained documentation on the tribal courts' operations, caseload, and funding. Because we are providing the caseload and funding data for informational purposes only, we did not assess the reliability of the data we obtained from the tribes. Additionally, we obtained the tribes' perspectives on the federal process to communicate declination decisions. In light of the public safety and justice issues underlying the requests for this work and the focus in the Tribal Law and Order Act of 2010 (TLOA) on criminal matters, we focused on criminal rather than civil law matters during the course of this review. While the results of these interviews cannot be generalized to reflect the views of all federally recognized tribes across the United States, they provided useful insights into the perspectives of various tribes about the challenges they face in adjudicating criminal matters. Additionally, we identified federal efforts to help support tribal efforts to adjudicate criminal matters in Indian country based on new or amended statutory provisions enacted through TLOA. We also interviewed cognizant officials from the Bureau of Indian Affairs and various DOJ components such as the Federal Bureau of Investigation, the Executive Office of U.S. Attorneys, and select U.S. Attorneys Offices to obtain information about their efforts to implement TLOA provisions to help address the challenges facing tribes in administering justice in Indian country. To determine the extent to which DOI and DOJ collaborate with each other to support public safety and justice in tribal communities, we first compared the agencies' efforts against criteria in Standards for Internal Control in the Federal Government, which holds that agencies are to share information with external stakeholders that can affect the organization's ability to achieve its goals.
Next, we identified practices that our previous work indicated can enhance and sustain collaboration among federal agencies and assessed whether DOI and DOJ's interagency coordination efforts reflected consideration of those practices. For purposes of this report, we define collaboration as any joint activity by two or more organizations that is intended to produce more public value than could be produced when the organizations act alone. We use the term “collaboration” broadly to include interagency activities that others have defined as cooperation, coordination, integration, or networking. The eight practices we identified to enhance and sustain collaboration are as follows: (1) define and articulate a common goal; (2) establish mutually reinforcing or joint strategies to achieve that goal; (3) identify and address needs by leveraging resources; (4) agree on roles and responsibilities; (5) establish compatible policies, procedures, and other means to operate across agency boundaries; (6) develop mechanisms to monitor, evaluate, and report on results; (7) reinforce agency accountability for collaborative efforts through agency plans and reports; and (8) reinforce individual accountability for collaborative efforts through performance management systems. In this report, we focused on two of the eight practices—defining and articulating a common goal and identifying and addressing needs by leveraging resources—that we previously identified for enhancing and maintaining effective collaboration among federal agencies. We were not able to address the remaining six practices because we found that DOI and DOJ were in the early stages of implementing the two practices that serve as the foundation for the remaining practices.
For example, because collaboration activities are in the early stages of development and the agencies have not yet established joint strategies to achieve the goal of enhancing the capacity of tribal courts, we did not expect the agencies to have developed mechanisms to monitor and report on the results of their collaboration, reinforce accountability by preparing reports, or establish performance management systems. We selected examples that, in our best judgment, clearly illustrated and strongly supported the need for improvement in specific areas where the key practices could be implemented. We met with officials from DOI and various DOJ components such as the Office of Tribal Justice and Office of Justice Programs to discuss the mechanisms they have put in place to enhance and sustain collaboration between the two agencies. We conducted this performance audit from September 2009 through February 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The exercise of criminal jurisdiction in Indian country depends on several factors, including the nature of the crime, the status of the alleged offender and victim (that is, whether they are Indian or not), and whether jurisdiction has been conferred on a particular entity by, for example, federal treaty or statute. As a general principle, the federal government recognizes Indian tribes as “distinct, independent political communities” with inherent powers of self-government to regulate their “internal and social relations,” which includes enacting substantive law over internal matters and enforcing that law in their own forums.
The federal government, however, has plenary and exclusive authority to regulate or modify the powers of self-government the tribes otherwise possess, and has exercised this authority to establish an intricate web of jurisdiction over crime in Indian country. Enacted in 1817, the General Crimes Act (also referred to as the Federal Enclaves Act or Indian Country Crimes Act), as amended, established federal criminal jurisdiction in Indian country over cases where either the alleged offender or the victim is Indian. It did not, for example, establish federal jurisdiction over cases where both parties are Indian and, in effect, left jurisdiction over cases where both parties are non-Indian to the state. Enacted in 1885, the Major Crimes Act extended federal criminal jurisdiction in Indian country to Indians who committed so-called “major crimes,” regardless of the victim’s status. As amended, the Major Crimes Act provides the federal government with criminal jurisdiction over Indians charged with felony-level offenses enumerated in the statute. The tribes retained exclusive jurisdiction over other criminal offenses (generally, misdemeanor-level) where both parties are Indian. State governments, however, may not exercise criminal jurisdiction over Indians or their property in Indian country absent a “clear and unequivocal grant of that authority” by federal treaty or statute. Enacted in 1953, Public Law 280 represents one example of such a “clear and unequivocal” grant of state criminal jurisdiction. As amended, Public Law 280 confers exclusive criminal jurisdiction over offenses committed in Indian country to the governments of six states (Alaska, California, Minnesota, Nebraska, Oregon, and Wisconsin), except as specified by statute, thereby waiving federal jurisdiction under the General and Major Crimes acts in these states and subjecting Indians to prosecution in state court.
Subsequent amendments to Public Law 280 and other laws further define state criminal jurisdiction in Indian country. To summarize the foregoing discussion, the exercise of criminal jurisdiction by state governments in Indian country is generally limited to two instances, both predicated on the offense occurring within the borders of the state—where both the alleged offender and victim are non-Indian, or where a federal treaty or statute confers, or authorizes a state to assume, criminal jurisdiction over Indians in Indian country. Otherwise, jurisdiction is distributed between federal and tribal governments. Where both parties to the crime are Indian, the tribe generally has exclusive jurisdiction for misdemeanor-level offenses, but its jurisdiction runs concurrent with the federal government for felony-level offenses. Where the alleged offender is Indian but the victim is non-Indian, tribal and federal jurisdiction is generally concurrent. Finally, federal jurisdiction is exclusive where the alleged offender is non-Indian and the victim is Indian. When a tribal government exercises its jurisdiction to prosecute an Indian offender, it must do so in accordance with the Indian Civil Rights Act (ICRA). Enacted in 1968, ICRA limited the extent to which tribes may exercise their powers of self-government by imposing conditions on tribal governments similar to those found in the Bill of Rights to the U.S. Constitution. For example, the act extended the protections of free speech, free exercise of religion, and due process and equal protection under tribal laws. With respect to alleged criminal conduct, tribes are prohibited from trying a person twice for the same offense (double jeopardy), compelling an accused to testify against himself or herself in a criminal case, and imposing excessive fines or inflicting cruel and unusual punishment. 
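The general allocation summarized above amounts to a small decision table. As a purely illustrative sketch (not legal guidance), it can be encoded as follows; the function name and return labels are our own, the sketch assumes a non-Public Law 280 state, and it omits treaty-specific grants and other statutory nuances:

```python
def criminal_jurisdiction(offender_is_indian: bool,
                          victim_is_indian: bool,
                          felony_level: bool) -> set:
    """Illustrative sketch of the general allocation of criminal
    jurisdiction in Indian country (non-Public Law 280 states).
    Returns the set of sovereigns that may prosecute."""
    if not offender_is_indian and not victim_is_indian:
        return {"state"}              # both parties non-Indian: state jurisdiction
    if not offender_is_indian:
        return {"federal"}            # non-Indian offender, Indian victim: exclusive federal
    if victim_is_indian and not felony_level:
        return {"tribal"}             # both Indian, misdemeanor-level: exclusive tribal
    return {"tribal", "federal"}      # otherwise generally concurrent

# Both parties Indian, felony-level offense (Major Crimes Act):
print(sorted(criminal_jurisdiction(True, True, True)))  # ['federal', 'tribal']
```

Where the result is concurrent, either or both sovereigns may elect to prosecute.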
Tribes must also afford a defendant the rights to a speedy and public trial, to be informed of the nature and cause of the accusation, to be confronted by witnesses of the prosecution, to have compulsory process for witnesses in his favor, and to be represented by counsel at his own expense, among other things. ICRA also governs the sentencing authority tribes exercise over convicted Indian offenders. First, any person accused of an offense punishable by imprisonment has the right, upon request, to a trial by jury of not less than six persons. Second, the act limits the maximum sentence a tribe may impose. Prior to amendments made by the Tribal Law and Order Act (TLOA) in July 2010, ICRA limited the maximum sentence for any one offense to a term of 1 year imprisonment, a $5,000 fine, or both, regardless of the severity of the alleged offense. The July 2010 amendments, however, authorize tribal courts to impose sentences in excess of 1 year imprisonment or $5,000 fine if the tribe affords the defendant certain additional protections specified in the statute. Specifically, a tribal court may subject a defendant to a maximum term of imprisonment of 3 years (or a fine not to exceed $15,000, or both) for any one offense if the defendant had been previously convicted of the same or a comparable offense by any jurisdiction in the United States, or the defendant was prosecuted for an offense comparable to one punishable by more than 1 year of imprisonment if prosecuted by the United States or any of the states. 
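The sentencing limits just described reduce to a short set of conditions. The following is a minimal sketch, in our own terms rather than statutory text, of the per-offense caps before and after a defendant qualifies for enhanced sentencing; the `protections_afforded` flag stands in for the additional protections the statute requires the tribe to provide:

```python
def max_sentence(prior_comparable_conviction: bool,
                 federal_felony_comparable_offense: bool,
                 protections_afforded: bool) -> dict:
    """Illustrative sketch of ICRA's per-offense sentencing caps as
    amended by TLOA (July 2010). Field names are our own."""
    qualifies = prior_comparable_conviction or federal_felony_comparable_offense
    if qualifies and protections_afforded:
        # Enhanced authority: up to 3 years' imprisonment, a $15,000 fine, or both.
        return {"max_years": 3, "max_fine": 15_000}
    # Default ICRA cap for any one offense: 1 year, $5,000, or both.
    return {"max_years": 1, "max_fine": 5_000}

print(max_sentence(True, False, True))   # {'max_years': 3, 'max_fine': 15000}
print(max_sentence(True, False, False))  # {'max_years': 1, 'max_fine': 5000}
```

Note that even a qualifying defendant remains subject to the 1-year/$5,000 cap if the tribe does not afford the required additional protections.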
To exercise this enhanced sentencing authority, the tribe must afford a criminal defendant the following additional protections: effective assistance of counsel; if the defendant is indigent, the assistance of a licensed defense attorney appointed at the tribe's expense; a presiding judge with sufficient legal training and a license to practice law; public availability, prior to charging the defendant, of the criminal laws and rules of evidence and criminal procedure of the tribal government; and a record (audio or otherwise) of the criminal proceeding. Finally, although ICRA protects alleged offenders from double jeopardy in tribal courts, neither the federal government nor the tribal government is precluded from pursuing a prosecution if the other sovereign elects to prosecute the case. For example, a criminal defendant prosecuted in tribal court may still face prosecution, and a potentially more severe sentence if convicted, in federal court. This appendix summarizes information regarding the court systems of the 12 tribes we visited in Arizona, New Mexico, North Dakota, and South Dakota. Specifically, in Arizona, we visited the Gila River Indian Community, Navajo Nation, and Tohono O'odham Nation. The New Mexico tribes we covered include the Pueblos of Isleta, Laguna, Pojoaque, and Taos. In North Dakota, we included the Standing Rock Sioux and Three Affiliated Tribes. Lastly, the South Dakota tribes we visited include the Cheyenne River Sioux, Oglala Sioux, and Rosebud Sioux tribes. The 12 tribes that we visited ranged in enrollment from 417 members to nearly 300,000 tribal members. Tribal enrollment data showed that for 9 of the 12 tribes we visited, more than 50 percent of the enrolled members live on the reservation.
Enclosed in this appendix are individual summaries for each tribe that include a description of (1) land area and population data, (2) establishment of the court system, (3) availability of tribal code and court rules and procedures, (4) structure of the court system, (5) selection and removal of judges as well as requisite qualifications, (6) judicial personnel and court staff, (7) caseload levels, and (8) funding information. The Cheyenne River Indian Reservation of the Cheyenne River Sioux Tribe covers 4,410 square miles in north-central South Dakota, as shown in figure 4, and is between Delaware and Connecticut in size. Of the estimated 16,622 enrolled members of the tribe, an estimated 8,000 live on the reservation. The Cheyenne River Sioux Tribe's constitution, which was adopted in 1935, assigned the duty of establishing a court to the Tribal Council. The court system was established in the late 1930s. Tribal officials stated that the tribe's judiciary is a separate branch of government. Further, a 1992 amendment to the constitution stated that decisions of tribal courts shall not be subject to review by the Tribal Council. Officials noted that the Judiciary and Codification Committee of the Tribal Council and the Chief Judge, among others, oversee the operations of the tribal court. The Cheyenne River Sioux Tribe's Law and Order Code, established in 1978, has been amended a number of times and is available in electronic format, according to officials. The Chief Judge reported that the Law and Order Code is modeled after South Dakota laws. The Tribal Council's Judiciary and Codification Committee is responsible for updating the criminal code; members of the tribal court and the tribe's legal department assist the Committee in this effort.
According to officials, the tribe follows federal rules of evidence and has adopted rules of criminal and civil procedure as well as a Code of Judicial Conduct that are modeled after federal and state courts. The Cheyenne River Sioux Tribe's court system is composed of a tribal court, a juvenile court, a mediation court, and an appellate court. Tribal officials consider the court system to be modern, though the mediation court incorporates some traditional practices that promote tribal traditions and values to resolve disputes. In 1992, according to tribal officials, the tribe's constitution was amended to include a provision that states that decisions of the tribal court may be appealed to the tribe's appellate court, but shall not be subject to review by the Tribal Council. Tribal judges are elected by voting members of the tribe and must (1) be a member of the Cheyenne River Sioux Tribe, (2) have resided on the reservation for 1 year preceding the election, and (3) be over 25 years of age. We were not able to obtain complete information about the required qualifications for judges and the tribe's process to select and remove judges. Information about judicial personnel and court staff is not reported here, as we were not able to obtain complete information from the tribe. Data about the court's caseload for fiscal years 2008 through 2010 are not included, as we were not able to obtain complete information from the tribe. BIA reported that for fiscal years 2008 and 2009, it did not distribute any funding to the Cheyenne River Sioux Tribe specifically for tribal court programs. In fiscal year 2010, BIA distributed $190,503 to the tribe, but we were not able to obtain information from the tribe on how much of that funding was allocated to tribal court programs. Further, DOJ did not award any grant funding to the Cheyenne River Sioux Tribe as part of its Tribal Court Assistance Program (TCAP) for fiscal years 2008 through 2010.
The Gila River Indian Reservation covers 584 square miles in Arizona and is between the District of Columbia and Rhode Island in size. Of the estimated 20,590 enrolled members of the tribe, approximately 82 percent, or 16,783, live on the reservation. The Gila River Indian Community's constitution, adopted in 1960, authorized but did not establish a court system or articulate its jurisdiction or powers, leaving this to the Tribal Council. Although the council exercised its authority to establish a court system, there is no formal document marking when this occurred. The tribe has efforts underway to adopt a revised constitution, which seeks to establish a separate judicial branch that is autonomous and independent of other branches of the tribal government. The draft constitution calls for a court system composed of a tribal court known as the Community Court, a Supreme Court, and other lower courts, including forums for traditional dispute resolution, as deemed necessary by the legislature. The Gila River Indian Community has civil, criminal, traffic, and children's codes. Officials noted that the current criminal code may not be applicable to address new uses of technology to commit crime. The children's code was most recently revised in 2010 and now addresses gang-related offenses, according to officials. Some procedural guidance is provided by legislation, but the tribal court does not have formal rules of criminal procedure, since the court has not been granted authority to promulgate such rules. However, officials explained that the tribal court has developed an administrative order and understanding between parties for some rules. The court has not established rules of evidence, although it will occasionally incorporate state or federal rules of evidence as permitted by the criminal code. Officials describe the court as modern because it is modeled after the state of Arizona's judicial system.
The court system is composed of a tribal court, children's court, and appellate court. The children's court was officially established by statute in 1983. Gila River has two courthouses: a main court located in Sacaton, Arizona, and another located in Laveen, Arizona. The Chief Judge and five Associate Judges are elected by tribal members to the general jurisdiction court for 3-year terms. Additionally, two judges are appointed to the children's court by the Tribal Council for 4-year terms. The general jurisdiction court consists of six elected judicial positions, with all judges up for election at the same time. Judges must be a member of the tribe and be at least 25 years old, among other requirements. Certain residency requirements must also be met. The Tribal Council can remove a judge from office for any reason it deems cause for removal. One of the eight judges in the tribal court is law-trained; however, there is no requirement that judges be law-trained or licensed by a state or tribal bar association. Public defenders and prosecutors are required to be law-trained and licensed by a state bar association. The tribe has six public defenders and nine prosecutors. Criminal cases account for the majority of the tribal court's caseload. For fiscal years 2008 through 2010, the tribal government funded at least 90 percent of the Gila River Indian Community Court, and the court did not receive any funding from BIA. According to tribal court officials, the court was awarded $13,000 in fiscal years 2008 and 2009 through the Juvenile Accountability Block Grant (JABG)—a grant program administered by the Office of Juvenile Justice and Delinquency Prevention within DOJ. In fiscal year 2009, the tribal court was awarded $49,977 in grant funding under DOJ's Justice and Mental Health Collaboration Program. Further, in fiscal year 2010, the Gila River court system was awarded $499,586 in grant funding as part of DOJ's Coordinated Tribal Assistance Solicitation.
The Pueblo of Isleta covers 331 square miles in New Mexico and is between the District of Columbia and Rhode Island in size. Of the estimated 3,496 enrolled members of the pueblo, 58 percent, or 2,013, live on the pueblo’s lands. The most recent revision to the constitution of the Pueblo of Isleta was adopted in 1991; however, according to tribal officials, Isleta has efforts underway to amend its constitution. In an effort to help address concerns about the court’s perceived lack of autonomy, according to Isleta officials, the Tribal Council established the Judicial Law and Order Committee to conduct a review of the constitution that includes examining the authorities of each branch of tribal government. The Pueblo of Isleta’s Law and Order Code was first adopted in 1965 and revised in 2008. The Tribal Council established a committee to recommend amendments regarding the code to the Council. The Pueblo of Isleta’s court system is composed of a tribal and appellate court. The tribal court is presided over by one or more judges and has jurisdiction over all criminal and civil matters articulated in the Law and Order Code. The majority of the court’s cases are adjudicated by applying federal or state law; however, the court seeks first to apply traditional law in cases where it may be applicable. The Tribal Council serves as the appellate court, and appeals are granted as a matter of right. However, the council may delegate its appellate authority to an appeal committee, appellate judge, or other appellate body established by the council. The constitution holds that all appeals decisions are final. Judges are appointed by the tribal governor with the concurrence of a two-thirds majority of the council. According to the constitution, the Tribal Council is to prescribe the qualifications and terms of office for judges. The constitution states that judges’ salaries may not be modified during the judges’ term in office.
The council is currently drafting an ordinance establishing qualifications and salaries for judges. Those convicted of felonies are not eligible to serve as a judge. Judges can be removed from office after a hearing and a two-thirds vote of the full council. Because of funding limitations, according to officials, criminal investigators also serve as tribal prosecutors. Data about the court’s caseload for 2008 through 2010 are not reported here as we were not able to obtain this information from the tribe. BIA told us that it distributed $76,923, $128,279, and $99,071 in fiscal years 2008, 2009, and 2010, respectively. We were not able to obtain information from the tribe on how much of the funding was provided to the tribal court. Our review of DOJ grants awarded under the Tribal Court Assistance Program showed that the Pueblo of Isleta did not receive any grant funding for tribal courts initiatives for fiscal years 2008 through 2010. The Pueblo of Laguna reservation covers 779 square miles in New Mexico and is between the District of Columbia and Rhode Island in size. Of the estimated 8,413 enrolled members in the pueblo, 4,315 live on or near the pueblo’s lands; Laguna’s total population, including nonpueblo members, is estimated at 5,352. The Pueblo of Laguna’s constitution, adopted in 1908, empowered the pueblo’s Governor and certain members of the Tribal Council to function as the pueblo’s court. A subsequent version of the constitution, adopted in 1949, maintained this judicial structure. In 1958, the pueblo amended its constitution and thereby vested the Pueblo’s judicial power in the Pueblo’s tribal court, and in 1984, another constitutional amendment vested the pueblo’s judicial power in the pueblo’s tribal court and in an appellate court. Currently, the pueblo’s Governor and certain members of the Tribal Council serve as the pueblo’s appellate court, according to tribal officials. 
The pueblo has a written criminal code that was enacted in 1999, according to officials. The Tribal Secretary is responsible for keeping ordinances enacted by the Tribal Council. Revisions to the criminal code were pending adoption by the Tribal Council as of October 2010. The pueblo is in the process of adopting rules of judicial conduct and criminal procedure. The Pueblo of Laguna’s court system combines aspects of modern and traditional courts. The court relies on the written codes and laws of the pueblo but may also defer to the pueblo’s traditions when possible. The pueblo’s court system includes a tribal court that adjudicates both civil and criminal matters, a juvenile court, and an appellate court that reviews cases from the lower courts. The appellate court is composed of the Governor and certain members of the Pueblo Council, though this composition of the appellate court is not provided for by constitution or code; rather, it is to be established by ordinances passed by the Pueblo Council. Judges must be law-trained, have a state bar license, and have at least 1 year of judicial experience or related law practice, among other things. Judges are appointed by the Tribal Council for a term that does not exceed 3 years, and may be removed from office if convicted of a felony or if found to have grossly neglected the duties of the office. The Pueblo of Laguna’s court system employs one full-time contract judge and three part-time contract judges. In addition, the tribe employs two prosecutors and a public defender, among other staff. Traffic offenses, which are not reported in table 7 below, account for a large portion of the court’s activity and are considered criminal offenses. For example, there were 2,685 traffic cases opened in 2009. The Pueblo of Laguna court system’s main funding sources are the tribal government and funding from the BIA.
Additionally, in fiscal year 2010 the Pueblo of Laguna was awarded $350,000 for tribal courts initiatives under DOJ’s Coordinated Tribal Assistance Solicitation grant program. The Navajo Nation’s land area totals 24,097 square miles and is mostly situated in Arizona, though its boundaries extend into parts of New Mexico and Utah. The reservation is between Maryland and West Virginia in size. Of the estimated 292,023 enrolled members of the Navajo Nation, approximately 234,124, or about 80 percent, live on the reservation. The Navajo Nation does not have a written constitution. However, the duties of the court system are documented in the Navajo Nation Codes. The tribal court was established in 1959. The Navajo Nation criminal code was created in 1959 and has been amended as necessary. The Legislative Council, within the legislative branch, is responsible for updating the code. The court system has rules of judicial conduct and criminal procedure, as well as rules of evidence. Officials described the Navajo Nation court system as a modern system that continues to embody Navajo customs and traditions. The Chief Justice is the administrator of the judicial branch, which consists of 10 District Courts, the Supreme Court of the Navajo Nation, and other courts that may be created by the Navajo Nation Council. The Navajo Nation Supreme Court comprises one Chief Justice and two Associate Justices. The President of the Navajo Nation appoints Judges and Justices, who are appointed for a 2-year probationary period. The appointees are selected from a panel recommended by the Judicial Committee of the Navajo Nation Council. After 2 years, the Judicial Committee can recommend a permanent appointment. If the Judge or Justice is recommended, the President submits the name to the Navajo Nation Council for confirmation. There are no term lengths; however, judges can be removed for cause.
All judicial appointments must meet certain qualifications, including a higher education degree (preferably a law degree), work experience in law-related fields, and a working knowledge of Navajo, state, and federal laws. Judges must be members of the Navajo Nation Bar Association. Only members in good standing with the Navajo Nation Bar Association, including public defenders and prosecutors, can provide legal representation in the court system. The data provided in table 9 below comprise caseload information from the 10 District Courts, Family Courts, Probation, Peacemaking, and Supreme Court. As shown in the table below, criminal offenses account for much of the court’s activity. The Navajo Nation judicial branch is funded primarily by the tribal government. It is important to note that the funding supports the operations of the 10 district courts, among other courts within the judicial branch of the Navajo Nation. The Pine Ridge Indian Reservation of the Oglala Sioux Tribe covers 3,466 square miles in southwest South Dakota, and is between Delaware and Connecticut in size. Of the estimated 47,000 enrolled members of the tribe, an estimated 29,000 Indian people live on the reservation. The Oglala Sioux Tribe’s court system was established by the tribe’s constitution in 1936. A 2008 amendment to the tribe’s constitution vests the tribe’s judicial power in one Supreme Court and in other inferior tribal courts established by the Tribal Council. As amended, the constitution provides that the tribe’s judiciary is independent from the legislative and executive branches of government. The Judiciary Committee of the Tribal Council oversees the administrative function of the court. In September 2002, the Oglala Sioux Tribal Council passed an ordinance to adopt its Criminal Offenses Code. In addition, the Oglala Sioux Tribe has adopted criminal procedures and court rules, which include a judicial code of ethics.
According to court officials, the tribal court generally applies federal rules of evidence. Further, the Tribal Council, through its Judiciary Committee, is responsible for maintaining and updating the Criminal Offenses Code. The Oglala Sioux Tribe’s court system combines aspects of modern and traditional approaches to administer justice, and is composed of the Supreme Court, a tribal court, and a juvenile court. The Supreme Court has appellate jurisdiction, and is composed of a Chief Justice, two Associate Justices, and one Alternate Justice. Given the vast size of the reservation, the tribe operates two courthouses, which are located in Pine Ridge, South Dakota, and Kyle, South Dakota. The Oglala Sioux Tribe’s court system comprises a Chief Judge, associate judges, and Supreme Court justices. The Chief Judge, who oversees the inferior courts, must be law-trained and bar-licensed in any state or federal jurisdiction, and is elected by members of the tribe for a 4-year term. Justices of the Supreme Court must be law-trained and bar-licensed in any state or federal jurisdiction. They are appointed by the Tribal Council for 6-year terms. Any judge may be removed by a two-thirds vote of the Tribal Council for unethical judicial conduct, persistent failure to perform judicial duties, or gross misconduct that is clearly prejudicial to the administration of justice, among other things. The Oglala Sioux Tribe’s court system employed a Chief Judge, three associate judges, and two Supreme Court justices. The Oglala Sioux Attorney General’s Office employed four tribal prosecutors—one of whom is law-trained and bar-licensed. Officials estimated that in 2009, there were approximately 1,245 civil cases and 7,470 criminal cases. Additional data about the court’s caseload for fiscal years 2008 through 2010 are not reported as we were not able to obtain this information from the tribe.
Based on data provided by the tribe, the Oglala Sioux court system did not receive any funding from the tribal government for fiscal years 2008 through 2010. Rather, the main source of funding was from BIA. The Pueblo of Pojoaque covers 21 square miles in New Mexico, and is smaller in size than the District of Columbia. Of the estimated 417 enrolled members of the pueblo, an estimated 325 enrolled members live on the pueblo’s lands. The Pueblo of Pojoaque has not adopted a constitution, and, according to a court official, the tribal government operates in a traditional manner. From 1932 to 1978, the Pueblo of Pojoaque’s Tribal Court operated according to tradition. For example, the pueblo’s Governor or the Tribal Council served as the tribal court. In 1978, the tribal code formally established a court system. There are no distinct branches of government within the Pueblo of Pojoaque and a court official stated that the Tribal Council does not intervene in individual cases before the court. When the tribal court has concerns about the direction of the Tribal Council regarding court matters, such concerns are discussed openly at Tribal Council meetings and resolutions are passed and incorporated in the Tribal Law and Order Code, as needed. According to a court official, the Pueblo of Pojoaque’s Tribal Law and Order Code was adopted in 1978. One of the court officials explained that the court’s judges are responsible for suggesting code revisions to the Tribal Council, and that the Tribal Council amends the code by resolutions. Further, complete copies of the Tribal Law and Order Code are made available through the court. The Tribal Law and Order Code includes a criminal code as well as basic rules of procedure and evidence as many of the parties appearing before the court typically advocate on their own behalf rather than being represented by an attorney. 
The court system has adopted rules of judicial conduct, and, pursuant to the law and order code, judges are permitted to defer to either state or federal rules of procedure or evidence, and, according to the Chief Judge, this option is often exercised when both parties appearing before the court have legal representation. The Pueblo of Pojoaque’s court system combines aspects of modern and traditional courts, and includes a tribal court, a juvenile court, and traditional methods of dispute resolution. The Tribal Council serves as the pueblo’s appellate court. The Pueblo of Pojoaque’s court system includes two types of judges—a Chief Judge and judges pro tempore—and the qualifications for these positions are identical. Judges are appointed by the Tribal Council and serve at the pleasure of the Pueblo Council and the Tribal Governor. Though there are no set educational requirements for judges, prospective judges who do not have a law degree must complete a specific training course in judicial proceedings within 6 months after being appointed as a judge. Age requirements and a background interview also apply. Given the small population of the pueblo, the Tribal Council prohibits judges, who are enrolled members of the pueblo, from hearing cases of other enrolled members, according to a court official. The Pueblo of Pojoaque court system employed one full-time Chief Judge, one part-time judge pro tempore; two contract judges pro tempore, as needed; one part-time court clerk; and one full-time court and traffic court clerk. Tribal police, who are not law-trained, serve as prosecutors. The caseload data reported below in table 11 does not reflect the number of civil and criminal matters that are resolved through traditional means and mediation. Traffic violations, which are not included in the table below, account for much of the court’s activity. For example, in 2009, there were 7,316 traffic citations docketed, of which 825 resulted in a court hearing. 
The Pueblo of Pojoaque court system’s main funding sources are the tribal government and BIA funding. Generally, for fiscal years 2009 and 2010, the BIA funding accounted for about 30 percent of the court’s total funding. The Rosebud Indian Reservation of the Rosebud Sioux Tribe covers 1,971 square miles in south-central South Dakota, as shown in figure 11 below, and is between Rhode Island and Delaware in size. Of the estimated 29,710 enrolled members of the tribe, approximately 85 percent, or 25,254, live on the reservation. The Rosebud Sioux Tribe’s court was established in 1975, according to officials, replacing the Court of Indian Offenses administered by BIA. A 2007 amendment to the tribe’s constitution, which was originally adopted in 1935, established the tribal court as separate and distinct from the legislative and executive branches of the tribal government and established the Rosebud Sioux Tribe Supreme Court as the tribe’s appellate court. The Tribal Council’s Judiciary Committee helps to oversee the administration of the court. The Rosebud Sioux Tribe’s Law and Order Code was adopted in 1986 and is available by request from the Tribal Secretary’s office, although tribal court officials indicated that the status of the code has been an ongoing concern. The Law and Order Code contains a criminal code and rules of criminal procedure. Additionally, officials noted that the code adopts by reference federal rules of evidence and requires tribal judges to conform their conduct to the Code of Judicial Conduct as adopted by the American Bar Association. The Rosebud Sioux Tribe’s court system is composed of a tribal court, a juvenile court, a limited mediation court, and an appellate court. While the court applies traditional methods of dispute resolution, officials described the court system as mostly modern in that it is modeled on federal and state court systems and applies federal rules of evidence and judicial conduct.
It is traditional in that the Law and Order Code, which the courts apply, contains references to tribal customs. Further, in some cases, tribal courts include interested community members in the court proceedings. For example, in some family disputes, members of the community such as family members or concerned citizens may participate in the court process even though they are not parties appearing before the court. Decisions of the tribal court and juvenile court are subject to appellate review by the Rosebud Sioux’s Supreme Court. The Supreme Court is composed of six justices, three of whom sit as a panel to hear a case. The Rosebud Sioux Tribe’s court system includes a Chief Judge, associate judges, and Supreme Court justices. The Chief Judge must be law-trained, bar-licensed, and admitted to practice before the U.S. District Court for South Dakota. The Chief Judge is appointed by the Tribal Council for a 4-year term. Associate judges are appointed by the Tribal Council for 2-year terms, and must have a high-school education or equivalent. Further, at least one associate judge must be bilingual in English and Lakota—the tribe’s traditional language. Of the three justices in an appellate panel, two must be law-trained, bar-licensed, and admitted to practice in the U.S. District Courts of South Dakota. One may be a lay judge who must have a high-school education or equivalent. Supreme Court justices are appointed by the Tribal Council for 5-year terms. Removal of any judge or justice must be for cause after a public hearing by the Tribal Council and by a two-thirds vote of Tribal Council members present at the hearing. As of October 2010, the Rosebud Sioux Tribe’s court system employed a Chief Judge, two associate judges—one law-trained but not bar-licensed, and the other a lay judge—and four Supreme Court justices.
There is one law-trained, bar-licensed tribal prosecutor, an assistant prosecutor who works mainly in juvenile court, a public defender, and an assistant public defender who works mainly in juvenile court. Additionally, in fiscal year 2010, the tribe received a DOJ grant to fund three additional attorney positions, though tribal officials stated that these positions may be difficult to fill because of recruitment and retention challenges. Tribal officials stated that the number of prosecutors and public defenders is inadequate for the tribe’s caseload and affects the tribe’s ability to effectively administer justice. Criminal offenses account for much of the court’s caseload. Traffic violations are considered criminal offenses; however, they are not included in the data in the table below. Based on data provided by officials for fiscal years 2008 through 2010, the Rosebud Sioux Tribe court system is primarily funded by BIA, although the court received funding from other sources. The Standing Rock Reservation covers 3,654 square miles in south-central North Dakota and north-central South Dakota, and is between Connecticut and Delaware in size. Of the estimated 14,914 enrolled members of the tribe, 8,656 live on the reservation. The Standing Rock Sioux Tribe Constitution, adopted in 1959, empowers the Tribal Council to establish courts on the reservation and define those courts’ duties and powers. Exercising this constitutional authority, the Standing Rock Sioux Tribal Council established the tribal court system. Further, the constitution vests the tribe’s judicial authority in a Supreme Court and in a Tribal Court and specifies the process by which judges for these courts would be selected and removed, as described below. Subsequent amendments to the tribe’s constitution did not alter these provisions. The Standing Rock Sioux Tribe’s Code of Justice addresses criminal offenses, criminal procedure, and civil procedure, among other things.
In addition, the Tribe’s Rules of Court include provisions regarding civil procedure, criminal procedure, and rules of evidence, among other things. However, court officials reported challenges in keeping the code current and stated that they do not have access to the entire code. The court system is composed of a tribal court, a children’s court, and a Supreme Court that has appellate jurisdiction over the tribe’s other courts. The Supreme Court is composed of a chief justice and two associate justices. The Code of Justice articulates the composition of the court as well as the qualifications, selection, and removal of judges. Specifically, the Supreme Court is to include a Chief Justice and Associate Justices. Additionally, the tribal court is to include a Chief Judge, Associate Chief Judge, and Associate Judges. The Chief Justice, Chief Judge, and Associate Chief Judge must be law-trained and bar-licensed. Associate justices and judges must have at least a high-school diploma or its equivalent. All justices and judges are appointed by the Tribal Council and face a retention election at the tribe’s next election. Justices and judges retained then serve 4-year terms and may be removed from office for cause by a two-thirds vote of the Tribal Council. The Standing Rock Sioux Tribe’s court system employed three appellate judges, four tribal court judges, six court clerks, two prosecutors, and one public defender, among other staff. Of the four tribal court judges, three are bar-licensed and one is law-trained but not bar-licensed. Of the three appellate judges, two are bar-licensed and one is a lay judge. Criminal offenses account for much of the court’s caseload. Traffic violations are considered criminal offenses; however, they are not included in the data in the table below.
For fiscal years 2008 through 2010, the Standing Rock Sioux Tribal Court did not receive any funding from the tribal government and federal funding is the primary source of funding for the court, based on data provided by officials. The BIA funding has remained unchanged during this time. Additionally, officials told us that they received grant funding from the South Dakota Department of Corrections totaling $15,000 and $25,000 in fiscal years 2009 and 2010, respectively. The Pueblo of Taos covers 156 square miles north of Santa Fe, New Mexico, and is between the District of Columbia and Rhode Island in size. Of the estimated 2,500 enrolled members of the pueblo, approximately 1,800 members live on the pueblo’s lands. The Pueblo of Taos does not have a written constitution and has not established a separate judicial branch within its tribal government. Rather, according to officials, the pueblo has an unwritten social order that dates back to the pueblo’s origins and continues to be practiced and adhered to. Officials noted that they are exploring the possibility of establishing three distinct branches within the tribal government that would include a judicial branch. The Pueblo is governed by a Tribal Governor and a War Chief, both of whom are appointed by the Tribal Council for a 1-year term and operate the pueblo’s traditional courts. In 1986, the Tribal Council adopted the pueblo’s law and order code. Tribal officials explained that the tribal court is responsible for updating the criminal code and the Tribal Council approves amendments or revisions. The Pueblo has not fully revised the code since its adoption but has efforts underway to update and revise the criminal code. The tribal court does not have rules of judicial conduct or rules of evidence. However, the tribal court applies federal rules of evidence and New Mexico state rules regarding judicial conduct. 
Officials noted that rules of judicial conduct and rules of evidence are to be developed as part of the law and order code update. The code is available in hard copy only, and is generally made available to parties appearing before the court. Officials expect that the law and order code will be available in electronic format once revisions are completed. The Pueblo of Taos has two traditional courts and one tribal court. The Lieutenant Governor of the tribe serves as a Traditional Court Judge to hear both civil matters, such as contract violations, and family disputes. The War Chief also serves as a Traditional Court Judge and generally hears civil cases that involve disputes over land, natural resources, and fish and wildlife. The tribal court was established in the late-1980s to provide tribal members an alternative dispute resolution forum and to address the changes in the types of crimes being committed on the pueblo’s lands. Further, according to officials, the tribal court is intended to supplement rather than replace the traditional courts. Officials explained that tribal members may choose to have their case heard before the traditional or tribal court; however, once the case is filed with either court, the parties cannot then request a transfer to the other court. The Pueblo of Taos does not have an appellate court. However, appeals can be made to the Traditional Court Judge, usually the Lieutenant Governor, to challenge tribal court decisions. In the future, the Pueblo of Taos may use the Southwest Intertribal Court of Appeals. The Chief Judge is retained under contract, and the contract can be issued for up to 12 months. The Pueblo of Taos has not yet established requirements regarding selection, removal, and qualifications of judges, but expects to do so in the future. The pueblo employs one tribal court judge for the modern court, who is not bar-licensed. 
Additionally, the pueblo does not have public defenders or prosecutors; rather, the police, who are not law-trained, serve as prosecutors in addition to their patrol duties. Criminal cases account for much of the court’s activity for fiscal years 2008 through 2010. Based on data provided by officials for fiscal years 2008 through 2010, with the exception of fiscal year 2009, BIA funding accounted for much of the court system’s entire budget. The Fort Berthold Reservation of the Three Affiliated Tribes covers 1,578 square miles in northwest North Dakota, and is between Rhode Island and Delaware in size. Of the 11,993 enrolled members of the tribe, about half live on the reservation. According to officials, the Three Affiliated Tribes’ court system was established by the Tribal Business Council in the 1930s. Further, officials estimated that in the 1990s, an amendment to the constitution established the court’s authority. The Tribal Business Council has a Judicial Committee, composed of tribal council members, that regularly reviews court operations such as funding, staffing, and evaluation, among other things. The Three Affiliated Tribes have a tribal code that, according to a court official, was developed in 1935. The tribal code contains a criminal code, although officials stated that the court does not have rules of criminal procedure. The code also has a section that addresses federal rules of evidence. According to court officials, it is not always clear what the current law is because the tribal code is not kept up-to-date. The Three Affiliated Tribes’ court system combines aspects of modern and traditional courts. The court is modern in that it applies the tribal code; the court is traditional in that tribal members and court staff are personally acquainted, tribal members who appear before the court readily accept tribal laws that regulate conduct on the reservation, and Indian language is sometimes used in court.
The court system includes a tribal court and a juvenile court. Appeals from either of these courts are addressed by an intertribal appeals court, the Northern Plains Intertribal Court. The Three Affiliated Tribes’ court system includes a Chief Judge and associate judges, also called magistrate judges. Court officials reported that all judges must be law-trained, bar-licensed members of the tribes. However, at its discretion, the Tribal Council may overrule the requirement that judges must be members of the tribe. The Chief Judge is elected by tribal members for a 4-year term. Associate Judges are appointed by the Tribal Council for 1-year terms. All judges may be removed by the Tribal Council for cause. As of November 2010, the Three Affiliated Tribes’ court system employed a law-trained Chief Judge, two law-trained associate judges, a prosecutor, and a public defender, among other staff. Prosecutors are not required to be law-trained or bar-licensed, according to officials. Criminal offenses account for the majority of the court’s caseload. Traffic violations are considered civil matters; however, they are not included in the data in the table below. Based on data provided by the tribe, the Three Affiliated Tribes court system’s main funding sources are the tribal government and BIA. The Tohono O’odham Nation covers 4,456 square miles within Arizona, although it encompasses land on both sides of the U.S.-Mexico border. Tohono O’odham Nation is between Delaware and Connecticut in size. Of the 29,974 members of Tohono O’odham Nation, approximately 13,035, or 43 percent, live on the reservation. The Tohono O’odham Nation adopted its most recent constitution in 1986, which replaced an earlier constitution from 1937. The constitution established a judicial branch and articulates the powers and duties of the court. The judicial branch is an independent branch within the tribal government, according to officials.
The Tohono O’odham Nation’s criminal code was adopted in 1985 and subsequently has been updated by the legislative branch with input from the Tohono O’odham Prosecutor’s Office and Attorney General’s Office. The most updated code is available on the tribe’s website. The judicial branch has adopted Arizona rules of criminal procedure, with modification, and has also adopted Arizona rules of evidence. The Tohono O’odham Nation’s court system is composed of a tribal court, an appeals court, children’s court, family court, traffic court, and criminal court. The chief judge is the constitutionally mandated administrative head of the judicial branch and oversees the operations and decisions of the court. Appellate cases are heard by a three-judge panel, designated by the chief judge. In order to hear the appeal, the appellate judges must not have presided over the original case. Appeals panel decisions are final. The legislative branch of Tohono O’odham Nation is responsible for the selection of tribal court judges. The judges of Tohono O’odham Nation select a chief judge from among themselves, who serves as the chief administrative officer for the judiciary and serves in that capacity for 2 years. Potential judges pro tempore are referred by the chief judge to the Judiciary Committee of the Tribal Council. All judges are appointed by the legislative branch. The six full-time judges mandated by the constitution are appointed for staggered 6-year terms. However, judges may be reappointed to the bench upon application. Judges pro tempore are typically appointed to a term of no more than 6 years. Judicial qualifications, which changed in 2008, include preferences for members of federally recognized Indian tribes, with first preference given to qualified, enrolled members of the Tohono O’odham Nation. Further, persons with felony or recent misdemeanor convictions are not eligible.
Finally, the candidate must be either a bar-admitted attorney with Indian-law experience or possess a bachelor’s degree and have work experience and training in judicial or law-related fields. Judges may be removed by vote of the Legislative Council, upon the petition of a tribal member, for felony convictions or malfeasance in office, among other things. The Tohono O’odham Nation has 6 full-time judges, 6 prosecutors, 6 full-time public defenders, and approximately 100 support staff, among other staff. Criminal cases accounted for more than 85 percent of the court’s docket, as shown in table 20 below. The Tohono O’odham Nation’s court was funded, for the most part, by the tribal government during fiscal years 2008 through 2010, though the tribe also received BIA funding. Additionally, a court official explained that in fiscal year 2006, DOJ awarded an Indian Alcohol and Substance Abuse grant totaling $500,000 that permitted the tribe to implement the grant over a 3-year period through fiscal year 2009. In addition to the contact named above, William Crocker and Glenn Davis, Assistant Directors, and Candice Wright, Analyst-in-Charge, managed this review. Ami Ballenger and Christoph Hoashi-Erhardt made significant contributions to the work. Christine Davis and Thomas Lombardi provided significant legal support and analysis. David Alexander provided significant assistance with design and methodology. Katherine Davis provided assistance in report preparation. Melissa Bogar and Rebecca Rygg made contributions to the work during the final phase of the review. | Based on the latest available data, the Department of Justice (DOJ) reports that from 1992 to 2001 American Indians experienced violent crimes at more than twice the national rate. The Department of the Interior (DOI) and DOJ provide support to federally recognized tribes to address tribal justice issues.
Upon request, GAO analyzed (1) the challenges facing tribes in adjudicating Indian country crimes and what federal efforts exist to help address these challenges and (2) the extent to which DOI and DOJ have collaborated with each other to support tribal justice systems. To do so, GAO interviewed tribal justice officials at 12 tribes in four states and reviewed laws, including the Tribal Law and Order Act of 2010, to identify federal efforts to assist tribes. GAO selected these tribes based on court structure, among other factors. Although the results cannot be generalized, they provided useful perspectives about the challenges various tribes face in adjudicating crime in Indian country. GAO also compared DOI and DOJ's efforts against practices that can help enhance and sustain collaboration among federal agencies and standards for internal control in the federal government. The 12 tribes GAO visited reported several challenges in adjudicating crimes in Indian country, but multiple federal efforts exist to help address some of these challenges. For example, tribes only have jurisdiction to prosecute crimes committed by Indian offenders in Indian country. Also, until the Tribal Law and Order Act of 2010 (the Act) was passed in July 2010, tribes could only sentence those found guilty to up to 1 year in jail per offense. Lacking further jurisdiction and sentencing authority, tribes rely on the U.S. Attorneys' Offices (USAO) to prosecute crime in Indian country. Generally, the tribes GAO visited reported challenges in obtaining information on prosecutions from USAOs in a timely manner. For example, tribes reported they experienced delays in obtaining information when a USAO declines to prosecute a case; these delays may affect tribes' ability to pursue prosecution in tribal court before their statute of limitations expires. USAOs are working with tribes to improve timely notification about declinations. 
DOI and the tribes GAO visited also reported overcrowding at tribal detention facilities. In some instances, tribes may have to contract with other detention facilities, which can be costly. Multiple federal efforts exist to help address these challenges. For example, the Act authorizes tribes to sentence convicted offenders for up to 3 years imprisonment under certain circumstances, and encourages DOJ to appoint tribal prosecutors to assist in prosecuting Indian country criminal matters in federal court. Federal efforts also include developing a pilot program to house, in federal prison, up to 100 Indian offenders convicted in tribal courts, given the shortage of tribal detention space. DOI, through its Bureau of Indian Affairs (BIA), and DOJ components have taken action to coordinate their efforts to support tribal court and tribal detention programs; however, the two agencies could enhance their coordination on tribal courts by strengthening their information sharing efforts. BIA and DOJ have begun to establish task forces designed to facilitate coordination on tribal court and tribal detention initiatives, but more focus has been given to coordination on tribal detention programs. For example, at the program level, BIA and DOJ have established procedures to share information when DOJ plans to construct tribal detention facilities. This helps ensure that BIA is prepared to assume responsibility to staff and operate tribal detention facilities that DOJ constructs and in turn minimizes potential waste. In contrast, BIA and DOJ have not implemented similar information sharing and coordination mechanisms for their shared activities to enhance the capacity of tribal courts to administer justice. For example, BIA has not shared information with DOJ about its assessments of tribal courts. Further, both agencies provide training and technical assistance to tribal courts; however, they are unaware as to whether there could be unnecessary duplication. 
Developing mechanisms to identify and share information related to tribal courts could yield potential benefits in terms of minimizing unnecessary duplication and leveraging the expertise and capacities that each agency brings. GAO recommends that the Secretary of the Interior and the Attorney General direct the relevant DOI and DOJ programs to develop mechanisms to identify and share information related to tribal courts. DOI and DOJ concurred with our recommendation. |
The Coast Guard, a Department of Transportation agency, is involved in seven main mission or program areas: (1) enforcing maritime laws and treaties, (2) search and rescue, (3) aids to navigation, (4) marine environmental protection, (5) marine safety and security (including homeland security), (6) defense readiness, and (7) ice operations. Most of the Coast Guard’s services are provided through a number of small boat stations, air stations, marine safety offices, and other facilities and assets located in coastal areas, at sea, and near other waterways like the Great Lakes. Its equipment in operation today includes 228 cutters, approximately 1,200 small patrol and rescue boats, and 200 aircraft. As an organization that is also part of the armed services, the Coast Guard has both military and civilian positions. At the end of fiscal year 2001, the agency had over 39,000 total full-time positions—about 33,700 military and about 5,700 civilians. The Coast Guard also has about 8,000 reservists who support the national military strategy and provide additional operational support and surge capacity during emergencies, such as natural disasters. Also, about 34,000 volunteer auxiliary personnel assist in a wide range of activities ranging from search and rescue to boating safety education. Overall, after adjusting for the effects of inflation, the Coast Guard’s total budget grew by 32 percent between fiscal years 1993 and 2002. During nearly half this period, however, in real terms the budget was basically flat. As figure 1 shows, in constant 2001 dollars, the Coast Guard’s budget remained essentially static from fiscal year 1993 to 1998. Significant increases have occurred since fiscal year 1998. The Coast Guard’s initial budget request for fiscal year 2002, submitted early in 2001, represents a pre-September 11th picture of how the Coast Guard intended to operate.
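The constant-dollar comparison above can be illustrated with a short sketch. The nominal budget and deflator values below are purely hypothetical placeholders chosen to mirror the roughly 32 percent real growth the text reports; only the deflation method itself is standard practice, not anything taken from the report.

```python
# Sketch of converting nominal budgets to constant base-year dollars.
# All numeric inputs are ILLUSTRATIVE assumptions, not GAO's figures.

def to_constant_dollars(nominal, deflator, base_deflator):
    """Express a nominal amount in base-year (here, 2001) dollars."""
    return nominal * base_deflator / deflator

budgets = {1993: 3_600, 2002: 5_700}                # millions, hypothetical
deflators = {1993: 0.85, 2001: 1.00, 2002: 1.02}    # hypothetical price index

real_1993 = to_constant_dollars(budgets[1993], deflators[1993], deflators[2001])
real_2002 = to_constant_dollars(budgets[2002], deflators[2002], deflators[2001])
growth = (real_2002 - real_1993) / real_1993
print(f"Real growth: {growth:.0%}")  # → Real growth: 32%
```

With these placeholder inputs the computed real growth comes out near the 32 percent figure cited; swapping in the actual appropriations and a published GDP deflator would reproduce GAO's calculation.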
As figure 2 shows, law enforcement was by far the largest mission category, with budgeted expenses estimated at $1.47 billion, or about 43 percent of total operating expenses. Marine safety and security, at $456 million, was about 13 percent of the total. Following the events of September 11th, the Congress provided the Coast Guard with a supplemental appropriation of $209 million. After it received this additional amount, the Coast Guard revised the budget allocation for its various missions. As figure 3 shows, the revision produced a doubling of projected expenses for marine safety and security and smaller increases for aids to navigation and search and rescue. By contrast, projected expenses for law enforcement, ice operations, and marine environmental protection were reduced. For the Coast Guard, the events of September 11th produced a dramatic shift in resources used for certain missions. The Coast Guard responded quickly to the attacks with a number of significant steps to ensure that the nation’s ports remained open and operating. The Coast Guard relocated vessels, aircraft, and personnel from traditional missions—especially law enforcement—to enhance security activities. Subsequently, the Coast Guard has returned some of these resources to their more traditional non-security missions, but in some areas, it faces challenges in restoring the level of activity to what it had been. After September 11th, the Coast Guard responded by positioning vessels, aircraft, and personnel not only to provide security, but also to increase visibility in key maritime locations. Key actions taken included the following:

Recalling all cutters that were conducting offshore law enforcement patrols for drug, immigration, and fisheries enforcement and repositioning them at entrances to such ports as Boston, Los Angeles, Miami, New York, and San Francisco.
The Coast Guard also used smaller assets, such as patrol boats, motor lifeboats, and aircraft, to supplement increased port security activities. The smaller boats were used mainly for conducting security patrols within port facilities and, in fact, became the port’s “cop on the beat,” according to Coast Guard officials.

Establishing a new National Vessel Movement Center to track the movement of all foreign-flagged vessels entering U.S. ports of call. The center is now the clearinghouse for vessel information, such as type of cargo and crew manifest. All commercial vessels over 300 gross tons are required to report this information to the center 96 hours in advance of their arrival. This information is then provided to the Coast Guard’s local marine safety offices, which use a risk-based decision model to decide if a specific vessel is considered high interest, thus requiring an escort or additional security and safety inspections or oversight.

Implementing a series of limited risk assessments that identified high-risk infrastructure and facilities within specific areas of operation. These assessments, which were done by Coast Guard marine safety office personnel at individual ports, were the basis for deploying small boats for security patrols inside harbors and focused on identified high-threat facilities.

Adopting a temporary regulation prohibiting any private vessel from approaching within 100 yards of Navy ships without permission. The Coast Guard is proposing that such a restriction become permanent.

Activating and deploying the Coast Guard’s port security units to help support local port security patrols in high-threat areas. To maintain surge capacity and to deploy these units overseas, the Coast Guard also formed five interim marine security and safety teams, using full-time Coast Guard personnel trained in tactical law enforcement and based in Yorktown, Virginia. The Coast Guard is considering adding more of these teams in the future.
Recalling about 2,700 reservists to active duty. Today, more than 1,800 are still on active duty. According to Coast Guard officials, reservists have played a major role in allowing the Coast Guard to respond to both its homeland security and other mission functions. Their functions include staffing boat crews and port security units and performing administrative functions in place of active duty personnel who were pressed into new responsibilities elsewhere. The precise extent to which these responses changed the Coast Guard’s allocation of mission resources cannot be determined, mainly because the Coast Guard is still gathering and analyzing the data. However, in our discussions with Coast Guard personnel, we were told that law enforcement activities, such as fisheries and counter-drug patrols, saw the greatest reduction in actual services. For example: A number of Coast Guard districts have reported that security activities have impacted their ability to conduct fisheries enforcement missions, such as boarding of recreational and commercial fishing vessels. For example, District 1 reported a drop in fishing boat boardings in the New England fishing grounds, from 300 in the first quarter of fiscal year 2001 to just 38 during the first quarter of fiscal year 2002. Also, law enforcement-related civil penalties and fines were down substantially for the District. Districts also reported reduced drug interdiction efforts. For example, prior to September 11th, District 11 would send 110-foot patrol boats, which serve as the District’s primary boats for drug patrols, from Alameda to areas off the southern California and Mexican shores. The District had to eliminate these patrols when the boats were reallocated for security functions. Some districts had to reallocate personnel to specific security activities.
For example, District 13 reallocated personnel from small boat stations along the Washington coast to help implement added security measures in ports in Puget Sound. District 13 staff reported that patrol boats and small boats experienced a large increase in operational hours and that Coast Guard personnel who were assigned to boat stations experienced a marked increase in work hours from 60 to 80 hours per week. Other districts reported similar strains on personnel. Although the Coast Guard drew resources from many mission areas, some areas were less negatively affected than law enforcement in continuing to meet mission requirements. For example, although the Coast Guard had to put search and rescue vessels and personnel into security roles, doing so did not negatively affect search and rescue activities or detract from saving lives, according to the Coast Guard. The main reason was that the terrorist attacks occurred when the busiest part of the search and rescue season was essentially over. In addition, during the initial response, there were no major storms and the weather was warmer, requiring less icebreaker services, search and rescue calls, and oil tanker escorts. In an attempt to restore capabilities in its key mission areas, the Coast Guard has begun Operation NEPTUNE SHIELD, which has a goal of performing new enhanced security missions, while at the same time returning resources to other missions such as law enforcement, search and rescue, defense readiness, and marine safety. Also, in March 2002, the Coast Guard Commandant issued guidance that instructed his Atlantic and Pacific Area Commanders to plan and manage assets and personnel for long-term, sustainable operating tempos more in line with traditional mission functions, while still maintaining heightened security. Coast Guard officials from both the Atlantic and Pacific Areas have started implementing this guidance. 
As a result, deepwater cutters and aircraft are returning to traditional mission allocations but are still not at pre-September 11th levels. For example, because the Atlantic and Pacific areas each continue to allocate a deepwater cutter for coastal security patrols, the amount of time that will be spent on counter-drug and marine resources patrols is still below pre-September 11th levels. While a return to the pre-September 11th activity pattern is under way for deepwater cutters, district patrol boats and small boats remain deployed closer to their post-September 11th levels. Because the Coast Guard has implemented a number of new security activities or has increased the level of normal port security activities, the Coast Guard has continued to use boats and personnel from small boat stations and other areas for security missions. These missions include performing security inspections of cargo containers and port facilities, escorting or boarding high-interest commercial vessels, escorting Navy ships and cruise ships, establishing and enforcing new security zones, and conducting harbor security patrols. To relieve or augment its current small boats now performing security functions, the Coast Guard plans to purchase 70 new homeland security response boats with supplemental funds appropriated for fiscal year 2002 and fiscal year 2003 funding. According to the Coast Guard, some of these new boats will increase the capabilities of existing stations at critical ports, while others will provide armed platforms for the agency’s newly established marine safety and security teams. One program, San Francisco’s sea marshal program, illustrates the continued strain occurring at local ports. This program uses armed Coast Guard personnel to board and secure steering control locations aboard high-interest vessels.
Implementing this program has affected the ability of the local Coast Guard office to accomplish its traditional missions in at least two ways, according to Coast Guard officials. First, the program has created new vessel boarding training needs for the sea marshal personnel. Second, the program requires the use of Coast Guard small boats in transporting sea marshals to vessels at assigned boarding points. This means that the Coast Guard must use small boats that are also being used for such missions as search and rescue and marine environmental protection, which will require further prioritizing and balancing of missions. Similar sea marshal programs are being implemented at other ports, such as Boston and Seattle, with similar impacts on other missions. The fiscal year 2003 budget request of $7.3 billion would increase the Coast Guard’s budget by about $1.9 billion, or 36 percent, over the fiscal year 2002 budget. More than $1.2 billion of this increase is for retirement-related payments for current and future retirees, leaving an increase of about $680 million for operating expenses, capital improvements, and other expenses. Funding for operating expenses for all of the Coast Guard’s mission areas would increase from fiscal year 2002 levels. Under the Coast Guard’s allocation formula, operating expenses for marine safety and security (the mission area that includes most homeland security efforts) would have the largest percentage increase—20 percent. Increases in other mission areas would range from 12 percent to 16 percent. The fiscal year 2003 budget contains a significant amount for retirement funding. In October 2001, legislation was proposed that would fully accrue the retirement costs of Coast Guard military personnel. The proposed legislation would direct agencies to fully fund the future pension and health benefits of their current workforces.
Although this proposed legislation has not been enacted, the Coast Guard prepared its fiscal year 2003 budget to comply with these requirements. Excluding the amounts for retirement costs, the fiscal year 2003 increase totals about $680 million, which represents a 13 percent increase over the Coast Guard’s fiscal year 2002 budget. About $542 million of the requested $680 million increase is for operating expenses for the Coast Guard’s mission areas. The requested amount for operating expenses represents an increase of 15 percent over fiscal year 2002 levels. These expenses include such things as pay increases and other entitlements as well as new initiatives. Pay increases and military personnel entitlements in the fiscal year 2003 budget request total about $193 million, or 36 percent of the requested increase for operating expenses. This leaves $349 million for new mission-related initiatives and enhancements. As figure 4 shows, all mission areas would receive more funding than in fiscal year 2002. Projected increases in operating expenses would range from a high of 20 percent for the marine safety and security mission area to a low of 12 percent for the law enforcement mission area. (See table 1.) The Coast Guard stated that the increases are intended to improve the Coast Guard’s capabilities in each respective mission area. For example, if fully funded, operating expenses for the search and rescue mission area would increase by 13 percent. According to Coast Guard officials, the Coast Guard has experienced staffing shortages, resulting in personnel working an average of 84 hours per week; therefore, if the budget request is fully funded, the Coast Guard intends to improve readiness at small boat stations by adding 138 new positions to reduce the number of hours station personnel must work each week.
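The chain of figures in the budget discussion above can be checked with a few lines of arithmetic. All amounts are in millions of dollars and come from the text, except the retirement figure, which is approximated here from "more than $1.2 billion" so that the stated $680 million remainder works out:

```python
# Fiscal year 2003 budget arithmetic from the text (millions of dollars).
total_increase   = 1_900   # increase over the fiscal year 2002 budget
retirement       = 1_220   # "more than $1.2 billion" (approximate assumption)
operating_incr   = 542     # operating-expense portion of the increase
pay_entitlements = 193     # pay raises and military personnel entitlements

non_retirement = total_increase - retirement        # non-retirement remainder
initiatives    = operating_incr - pay_entitlements  # left for new initiatives

print(non_retirement)                                  # → 680
print(initiatives)                                     # → 349
print(round(100 * pay_entitlements / operating_incr))  # → 36 (percent)
```

The outputs match the report's figures: about $680 million excluding retirement costs, $349 million for new initiatives, and pay and entitlements consuming roughly 36 percent of the operating-expense increase.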
In line with the Coast Guard’s relatively new responsibilities for homeland security, the marine safety and security area would receive the largest portion of the operating expenses increase. The levels of funding requested for the maritime security area would allow the Coast Guard to continue and enhance homeland security functions, begun in 2002, aimed at improving the security of the nation’s ports, waterways, and maritime borders. New security initiatives to be undertaken in fiscal year 2003 include programs to build maritime domain awareness, ensure controlled movement of high-interest vessels, enhance presence and response capabilities, protect critical infrastructure, enhance Coast Guard force protection, and increase domestic and international outreach. For example, to enhance presence and response capabilities, the Coast Guard intends to spend $12.7 million to establish two additional deployable maritime safety and security teams, which are mobile law enforcement and security specialists that can be used in various regions during times of heightened risk. These teams would be added to the four teams already established with funds from the fiscal year 2002 supplemental appropriation. Other new security initiatives would largely be funded from the operating expenses appropriation. Table 2 provides a detailed breakdown of the cost of each of the proposed security measures. While the fiscal year 2003 budget request provides funding increases for every mission area, these increases alone may not return all of its missions to levels that existed prior to September 11th. The Coast Guard faces other daunting budget and management challenges and unknowns as it strives to achieve its new mission priorities and maintain its core missions at desired levels. The most serious challenges are as follows: The Coast Guard is now at or near its maximum sustainable operating capacity in performing its missions. 
The agency has a finite set of cutters, boats, and aircraft to use in performing its missions, and according to Coast Guard officials, these assets, particularly the cutters, are now being operated at their maximum capabilities. In fact, officials in some districts we visited said that some of the patrol boats and small boats are operating at 120 to 150 percent of the levels they normally operate. Significantly increasing the numbers of its cutters, boats, and aircraft is not feasible in the short term. Adding new deepwater cutters and aircraft, for example, is years away as are new motor lifeboats to replace the aging 41-foot boats, which have been the mainstay of harbor security patrols in recent months. Also, according to officials in various Coast Guard units, many personnel are also working long hours even now, six months after the terrorist attacks. The Coast Guard does not yet know the level of resources required for its “new normalcy”—the level of security required in the long term to protect the nation’s major ports and its role in overseeing these levels. Until the Coast Guard completes comprehensive vulnerability assessments at major U.S. ports and the Congress decides whether or not to enact proposed port security legislation, the Coast Guard cannot define the level of resources needed for its security mission. Also, the full extent of the demands on its resources to deal with all of its missions may not have been fully tested. In terms of its ability to respond to port security functions, the Coast Guard was fortunate in the timing of the terrorist attacks. For example, the busiest part of the search and rescue season was essentially over, and the agency was able to redeploy search and rescue boats from stations during the off-season to perform harbor security functions. The cruise ship season was over in many locations, requiring fewer Coast Guard escorts for these vessels. 
There were no major storms, and the weather has been warmer—requiring fewer icebreaking services, search and rescue calls, and oil tanker escorts. Also, there were no major security incidents in our nation’s ports. A major change in any or a series of these events could mean major adjustments in mission priorities and performance. The Coast Guard faces a host of human capital challenges in managing its most important resource—its people. Even before September 11, 2001, the Coast Guard saw signs of needed reform in its human resources policies and practices. Attrition rates among military and civilian employees are relatively high, and about 28 percent of the agency’s civilian employees are eligible to retire within the next five years. Budget constraints during the last decade had led to understaffing and training deficiencies in some program areas. For example, a recent study of the Coast Guard’s small boat stations showed that the agency’s search and rescue program is understaffed, personnel often work over 80 hours each week, and many staff are not fully trained. All of these challenges have been exacerbated by new demands added since September 11th. As a result of its new emphasis on homeland security, the Coast Guard plans to add over 2,200 new full-time positions to its workforce and increase its pool of reservists by 1,000 if its funding request is approved—putting added strain on its recruiting and retention efforts. While the Coast Guard has embarked on a strategy to address these issues, many of its human capital initiatives are yet to be developed or implemented. Other needs that have been put on the “back burner” in the fiscal year 2003 request may require increased attention—some rather soon. For example, sizeable capital improvements for shore facilities may be required in the near future, and the funding required for this purpose could be considerable.
For example, it appears that the agency reduced the fiscal year 2003 budget request for this budget item to fund other priorities. In last year’s capital plan, the Coast Guard estimated that $66.4 million would be required in fiscal year 2003 for shore facilities and aids to navigation. However, the fiscal 2003 budget request seeks only $28.7 million, a significant disparity from last year’s estimate. Other priorities, such as funding for the Deepwater Project and the National Distress System, will consume much of the funding available for its capital projects for years to come. Coast Guard officials said that while they still face the need for significant capital projects at their shore facilities, they are taking steps in the fiscal year 2003 budget request to improve the agency’s maintenance program in an effort to forestall the need for capital projects at these facilities. In conclusion, to its credit, the Coast Guard has assumed its homeland security functions in a stellar manner through the hard work and dedication of its people. It has had to significantly adjust its mission priorities, reposition and add to its resources, and operate at an intense pace to protect our nation’s ports. Now, six months after the terrorist attacks, the agency is still seeking to define a “new normalcy”—one that requires a new set of priorities and poses new challenges. By seeking increases in each of the agency’s mission areas, the fiscal year 2003 budget request is an attempt to provide the Coast Guard with the resources needed to operate within this environment. But particularly in the short term, increased funding alone is not necessarily the answer and is no guarantee that key Coast Guard missions and priorities will be achieved. 
In fact, because of the formidable challenges the Coast Guard faces today— particularly the finite numbers of cutters, boats, and aircraft it has available in the short run and its significant human capital issues—the Coast Guard will likely have to continue to make significant trade-offs and shifts among mission areas until it develops clear strategies to address its new mission environment. Mr. Chairman, this concludes my testimony. I will be happy to respond to any questions you or other Members of the Subcommittee may have. | Like many federal agencies, the Coast Guard's priorities were dramatically altered by the events of September 11. The Coast Guard has requested $7.3 billion for fiscal year 2003--a 36 percent increase from the previous year. The events of September 11 caused a substantial shift of effort toward homeland security and away from other missions. As resources were shifted to meet these needs, the law enforcement mission area, which consists mainly of drug and migrant interdiction and fisheries enforcement, saw a dramatic drop in mission capability. The Coast Guard's fiscal year 2003 budget request reflects an attempt to maintain and enhance heightened levels of funding for homeland security while also increasing funding for all other Coast Guard missions beyond fiscal year 2002 levels. The Coast Guard faces substantial management challenges in translating its requested funding increases into increased service levels in its key mission areas. For example, workforce issues present a daunting challenge. If the budget request for fiscal year 2003 is approved, the Coast Guard will add 2,200 full-time positions, retain and build on the expertise and skills of its current workforce, and deal with already high attrition rates and looming civilian retirements. The Coast Guard has yet to determine the long-term level of security needed to protect the nation's major ports. 
These challenges mean that, in the short term, additional funding may not increase the Coast Guard's ability to carry out its missions. |
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies, where the public’s trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet are changing the way our government, the nation, and much of the world communicate and conduct business. Without proper safeguards, systems are unprotected from individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. These concerns are well founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. Our previous reports, and those by the Treasury Inspector General for Tax Administration (TIGTA), describe persistent information security weaknesses that place federal agencies, including IRS, at risk of disruption, fraud, or inappropriate disclosure of sensitive information. Recognizing the importance of securing federal agencies’ information systems, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program to provide information security for the information and systems that support the operations and assets of the agency, using a risk-based approach to information security management. 
Such a program includes developing and implementing security plans, policies, and procedures; testing and evaluating the effectiveness of controls; assessing risk; providing specialized training; planning, implementing, evaluating, and documenting remedial action to address information security deficiencies; and ensuring continuity of operations. We have designated information security as a governmentwide high-risk area since 1997—a designation that remains in force today. IRS has demanding responsibilities in collecting taxes, processing tax returns, and enforcing the nation’s tax laws. It relies extensively on computerized systems to support its financial and mission-related operations. In fiscal year 2006, IRS collected about $2.5 trillion in tax payments, processed hundreds of millions of tax and information returns, and paid about $277 billion in refunds to taxpayers. IRS is a large and complex organization, which poses unique operational challenges for its management. It employs tens of thousands of people in 10 service center campuses, 3 computing centers, and numerous other field offices throughout the United States. IRS also collects and maintains a significant amount of personal and financial information on each American taxpayer. The confidentiality of this sensitive information must be protected; otherwise, taxpayers could be exposed to loss of privacy and to financial loss and damages resulting from identity theft or other financial crimes. The Commissioner of Internal Revenue has overall responsibility for ensuring the confidentiality, availability, and integrity of the information and information systems that support the agency and its operations. FISMA states that the Chief Information Officer (CIO) is responsible for developing and maintaining an information security program. Within IRS, this responsibility is delegated to the Chief of Mission Assurance and Security Services (MA&SS).
The Chief of MA&SS is responsible for developing policies and procedures regarding information technology security; establishing a security awareness and training program; conducting security audits; coordinating the implementation of logical access controls into IRS systems and applications; providing physical and personnel security; and, among other things, monitoring IRS security activities. To help accomplish these goals, MA&SS has developed and published information security policies, guidelines, standards, and procedures in the Internal Revenue Manual, the Law Enforcement Manual, and other documents. The Modernization and Information Technology Services organization, led by the CIO, is responsible for developing security controls for systems and applications; conducting annual tests of systems; implementing, testing, and validating the effectiveness of remedial actions; ensuring that continuity of operations requirements are addressed for all applications and systems it owns; and mitigating technical vulnerabilities and validating the mitigation strategy. The objectives of our review were to determine (1) the status of IRS’s actions to correct or mitigate previously reported weaknesses at two data processing sites and (2) whether controls over key financial and tax processing systems located at three sites were effective in ensuring the confidentiality, integrity, and availability of financial and sensitive taxpayer information. This review was completed to support the annual financial statement audit, by assessing the effectiveness of information system controls for the purposes of supporting our opinion on internal controls over the preparation of the financial statements. We concentrated our evaluation primarily on threats emanating from sources internal to IRS’s computer networks. 
Our evaluation was based on (1) our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; (2) FISMA, which establishes key elements that are required for an effective agencywide information security program; and (3) previous reports from TIGTA and GAO. Specifically, we evaluated information security controls that are intended to prevent, limit, and detect electronic access to computer resources (information, programs, and systems), thereby protecting these resources against unauthorized disclosure, modification, and use; provide physical protection of computer facilities and resources from espionage, sabotage, damage, and theft; prevent the exploitation of security vulnerabilities; prevent the introduction of unauthorized changes to application or system software; ensure that work responsibilities for computer functions are segregated so that one individual does not perform or control all key aspects of computer-related operations and, thereby, have the ability to conduct unauthorized actions or gain unauthorized access to assets or records without detection; provide confidentiality of used media; and limit the disruption to a system due to the intentional or unintentional actions of individuals. In addition, we evaluated IRS’s agencywide information security program. We identified and reviewed pertinent IRS information security policies and procedures, guidance, security plans, relevant reports, and other documents. We also tested the effectiveness of information security controls at three IRS sites. We focused on five critical applications that directly or indirectly support the processing of material transactions that are reflected in the agency’s financial statement. These applications, which are located at the three sites, are used for procurement, asset management, and tax administration.
We also discussed with key security representatives and management officials whether information security controls were in place, adequately designed, and operating effectively. IRS has made limited progress toward correcting previously reported information security weaknesses at two data processing sites. Specifically, it has corrected or mitigated 25 of the 73 weaknesses that we reported as unresolved at the time of our last review. IRS corrected weaknesses related to access controls and configuration management, among others. For example, it has made progress in implementing controls used to authorize access to Windows systems, network devices, databases, and mainframe systems by, among other things, removing administrative privileges from Windows users who did not need them to perform job duties, securely configuring the protocol used for managing network performance, improving control over data sharing among mainframe users, and restricting a certain access privilege to mainframe users who did not need it to perform their job duties; improved password controls on its servers by installing a password filter on Windows systems requiring users to create passwords in accordance with IRS policy, discontinuing the use of stored passwords in clear text for automatic logon files and structured query language scripts, and requiring password complexity and stronger password expiration policies on Windows systems; enhanced audit and monitoring efforts for mainframe and Windows user activity; conducted a facility risk assessment at a critical data processing site; and improved change controls over one of IRS’s mainframe systems. In addition, IRS has made progress in enhancing its information security program.
For example, IRS has trained its staff to restore operations in the event of an emergency at an off-site location, assessed risks for the systems we reviewed, certified and accredited the systems we reviewed, enhanced information security awareness and training by providing training to employees and contractors, and established an ongoing process of testing and evaluating its systems to ensure compliance with policies and procedures. Although IRS has made progress in correcting many of the previously identified security weaknesses, 48 weaknesses (66 percent) remain unresolved. For example, IRS continued to, among other things, use inadequate account lockout settings for Windows servers, improperly restrict file permissions on UNIX systems, routinely permit unencrypted protocols for remote logon capability to servers, insufficiently monitor system activities and configure certain servers to ensure adequate audit trails, inadequately verify employees’ identities against official IRS photo identification, use an ineffective patch management program, and use disaster recovery plans that did not include disaster recovery procedures for certain mission-critical systems. Significant weaknesses in access controls and other information security controls continue to threaten the confidentiality, integrity, and availability of IRS’s financial and tax processing systems and information. A primary reason for these weaknesses is that IRS has not yet fully implemented its information security program. As a result, IRS’s ability to perform vital functions could be impaired and the risk of unauthorized disclosure, modification, or destruction of financial and sensitive taxpayer information is increased. A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. 
Organizations accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Inadequate access controls diminish the reliability of computerized information and increase the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service. Access controls include those related to user identification and authentication, authorization, cryptography, audit and monitoring, and physical security. IRS did not ensure that it consistently implemented effective access controls in each of these areas, as the following sections in this report demonstrate. A computer system must be able to identify and authenticate different users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system also must establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. According to IRS policy, user accounts will be associated with only one individual or process and should automatically lock out after three consecutive failed logon attempts. If user accounts are not associated with an individual (e.g., group user accounts), they must be controlled, audited, and managed. In addition, IRS policy requires strong enforcement of passwords for authentication to IRS systems. For example, passwords are to expire and are not to be shared by users. IRS did not adequately control the identification and authentication of users to ensure that only authorized individuals were granted access to its systems.
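The lockout rule just described—disable an account after three consecutive failed logon attempts, with a successful logon clearing the counter—can be illustrated with a minimal sketch. The `Account` structure and `record_logon` helper below are hypothetical, not drawn from IRS systems:

```python
from dataclasses import dataclass

MAX_FAILED_ATTEMPTS = 3  # policy: lock out after three consecutive failures


@dataclass
class Account:
    user_id: str
    failed_attempts: int = 0
    locked: bool = False


def record_logon(account: Account, success: bool) -> Account:
    """Apply the lockout policy to a single logon attempt."""
    if account.locked:
        return account  # a locked account stays locked until an administrator resets it
    if success:
        account.failed_attempts = 0  # a successful logon clears the counter
    else:
        account.failed_attempts += 1
        if account.failed_attempts >= MAX_FAILED_ATTEMPTS:
            account.locked = True
    return account
```

Because the counter resets on success, only consecutive failures trigger a lockout, which is what makes the control effective against brute-force password guessing while tolerating occasional typos.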
For example, administrators at one site shared logon accounts and passwords when accessing a database production server for the procurement system. By allowing users to share accounts and passwords, individual accountability for authorized system activity as well as unauthorized system activity could be lost. In addition, at the same site, IRS did not enforce strong password management on the same database production server. Accounts did not lock out users after failed logon attempts and passwords did not expire. As a result, the database was susceptible to a brute force password attack that could result in unauthorized access. Furthermore, at another site, IRS stored user IDs and passwords in mainframe files that could be read by every mainframe user. As a result, increased risk exists that an intruder or unauthorized user could read and use these IDs and passwords to log on to the computer systems and masquerade as an authorized user. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. A key component of granting or denying access rights is the concept of “least privilege.” Least privilege is a basic principle for securing computer resources and information. This principle means that users are granted only those access rights and permissions that they need to perform their official duties. To restrict legitimate users’ access to only those programs and files that they need to do their work, organizations establish access rights and permissions. “User rights” are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that regulate which users can access a particular file or directory and the extent of that access. 
To avoid unintentionally authorizing users access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. IRS policy requires that all production systems be securely configured to specifically limit access privileges to only those individuals who need them to perform their official duties. IRS permitted excessive access to key financial systems by granting rights and permissions that gave users more access than they needed to perform their official duties. For example, at one site, excessive read access was allowed to production system libraries that contained mainframe configuration information. In addition, this site did not maintain documentation of approved access privileges allowed to each system resource by each user group. Without such documentation, IRS limits its ability to monitor and verify user access privileges. Furthermore, IRS did not appropriately restrict the use of anonymous e-mails on the two mainframe systems we reviewed. These servers allowed anonymous e-mails from one of our analysts masquerading as a legitimate sender and could expose IRS employees to malicious activity, including phishing. At another site, IRS granted all users excessive privileges to sensitive files on its production database server for the procurement system. Additionally, the procurement system was vulnerable to a well-known exploit whereby database commands could be inserted into the application through a user input screen that was available to everyone on the agency’s network. Administrative privileges also were granted to the procurement system’s database application user ID at this location. This user ID allowed extensive administrative privileges that were inappropriate for this type of account. Excessive or unauthorized access privileges provide opportunities for individuals to circumvent security controls. 
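The well-known exploit described above—inserting database commands through a user input screen—is commonly called SQL injection, and the standard defense is to pass user input as a bound parameter rather than concatenating it into the query text. The sketch below uses Python's sqlite3 with an invented `vendors` table; none of the names come from the report:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vendors (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO vendors (name) VALUES ('Acme Supply')")


def find_vendor_unsafe(name: str):
    # Vulnerable: user input is concatenated into the SQL text, so input
    # such as "x' OR '1'='1" changes the meaning of the query and
    # returns every row in the table.
    return conn.execute(
        "SELECT name FROM vendors WHERE name = '" + name + "'"
    ).fetchall()


def find_vendor_safe(name: str):
    # Parameterized: the driver treats the input strictly as data,
    # so the same attack string matches nothing.
    return conn.execute(
        "SELECT name FROM vendors WHERE name = ?", (name,)
    ).fetchall()
```

With the unsafe version, the input `x' OR '1'='1` turns the WHERE clause into a tautology and dumps the table; the parameterized version simply searches for a vendor with that literal name.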
Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity, by transforming plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. IRS policy requires the use of encryption for transferring sensitive but unclassified information between IRS facilities. The National Security Agency also recommends disabling protocols that do not encrypt information transmitted across the network, such as user ID and password combinations. IRS did not consistently apply encryption to protect sensitive data traversing its network. For example, at one site, IRS was using an unencrypted protocol to manage network devices on a local server. In addition, the procurement application and the UNIX servers we reviewed at another site were using unencrypted protocols. Therefore, all information, including user ID and password information, was being sent across the network in clear text. These weaknesses could allow an attacker to view information and use that knowledge to gain access to financial and system data being transmitted over the network. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Organizations accomplish this by implementing system or security software that provides an audit trail, or logs of system activity, that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which organizations configure system or security software determines the nature and extent of information that can be provided by the audit trail. 
To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events. The Internal Revenue Manual requires that auditable events be captured and audit logs be used to review what occurred after an event, for periodic reviews, and for real-time analysis. In addition, the manual requires that audit logs be maintained and archived in a way that allows for efficient and effective retrieval, viewing, and analysis, and that the logs be protected from corruption, alteration, or deletion. IRS did not consistently audit and monitor security-relevant system activity on its applications. According to IRS officials, IRS did not capture auditable events for its procurement application as a result of system performance issues. Therefore, no audit reports were being reviewed by managers for this application. In addition, IRS was unable to effectively monitor activity for its administrative financial system because the volume of the information in the log made it difficult for IRS officials to systematically analyze targeted activities and security-relevant events or archive logs. As a result, unauthorized access could go undetected, and the agency’s ability to trace or recreate events in the event of a system modification or disruption could be diminished. Physical access controls are used to mitigate the risks to systems, buildings, and supporting infrastructure related to their physical environment and to control the entry and exit of personnel in buildings as well as data centers containing agency resources. Examples of physical security controls include perimeter fencing, surveillance cameras, security guards, and locks. Without these protections, IRS computing facilities and resources could be exposed to espionage, sabotage, damage, and theft. IRS policy states that only authorized personnel should have access to IRS buildings and structures. 
Although IRS has implemented physical security controls over its information technology resources, certain weaknesses reduce the effectiveness of these controls. For example: IRS did not physically protect a server containing source code for its procurement application. The server was not located in a secured computer room; instead, it was located in a cubicle. IRS did not consistently manage the use of proximity cards, which are used to gain access to secured IRS facilities. For example, one of the sites we visited could not account for active proximity cards for at least 11 separated employees. At that same site, at least 12 employees and contractors were given proximity cards that allowed them access to a computer room, although these individuals did not need this access to perform their official duties. IRS did not always effectively secure certain restricted areas. For example, it implemented motion detectors at one site to release the locks on doors that lead from areas that are accessible by the general public directly into IRS-controlled areas. The motion detector’s field of view was set wider than necessary, so that an unauthorized individual would simply have to wait for an authorized individual to pass by the motion detector on the IRS-controlled side of the door to gain unauthorized access to the IRS facility. As a result, IRS is at increased risk of unauthorized access to financial information and inadvertent or deliberate disruption of procurement services. In addition to access controls, other important controls should be in place to ensure the confidentiality, integrity, and availability of an organization’s information. These controls include policies, procedures, and techniques for securely configuring information systems, segregating incompatible duties, sufficiently disposing of media, and implementing personnel security. 
Weaknesses in these areas increase the risk of unauthorized use, disclosure, modification, or loss of IRS’s information and information systems. The purpose of configuration management is to establish and maintain the integrity of an organization’s work products. By implementing configuration management, organizations can better ensure that only authorized applications and programs are placed into operation through establishing and maintaining baseline configurations and monitoring changes to these configurations. According to IRS policy, changes to baseline configurations should be monitored and controlled. Patch management, a component of configuration management, is an important factor in mitigating software vulnerability risks. Proactively managing vulnerabilities of systems will reduce or eliminate the potential for exploitation and involves considerably less time and effort than responding after an exploit has occurred. Up-to-date patch installation can help diminish vulnerabilities associated with flaws in software code. Attackers often exploit these flaws to read, modify, or delete sensitive information; disrupt operations; or launch attacks against other organizations’ systems. According to the National Institute of Standards and Technology (NIST), tracking patches allows organizations to identify which patches are installed on a system and provides confirmation that the appropriate patches have been applied. IRS’s patch management policy also requires that patches be implemented in a timely manner, and that critical patches are applied within 72 hours to minimize vulnerabilities. IRS did not properly implement configuration management procedures. For example, IRS did not record successful changes to baseline configurations on one of its mainframe systems, which supports its general ledger for tax administration activities. Without adequately logging system configuration changes, IRS cannot adequately ensure they are properly monitored and controlled. 
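The 72-hour rule for critical patches lends itself to a simple automated check against a patch inventory. The sketch below is a hypothetical illustration (the patch identifiers, dates, and record layout are invented), assuming each record carries a release timestamp and an installation timestamp that is empty until the patch is applied:

```python
from datetime import datetime, timedelta

CRITICAL_PATCH_WINDOW = timedelta(hours=72)  # policy window for critical patches


def overdue_critical_patches(patches, now):
    """Return IDs of critical patches released more than 72 hours ago
    but not yet installed.

    `patches` is an iterable of (patch_id, released, installed) tuples,
    where `installed` is None for patches that have not been applied.
    """
    overdue = []
    for patch_id, released, installed in patches:
        if installed is None and now - released > CRITICAL_PATCH_WINDOW:
            overdue.append(patch_id)
    return overdue
```

A periodic job running a check like this would have flagged the July 2006 patches still missing in August; the prerequisite, as the report notes, is a tracking process that records which patches are installed on each system in the first place.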
In addition, IRS did not effectively track or install patches in a timely manner. For example, one IRS location did not have a tracking process in place to ensure that up-to-date patches have been applied on UNIX servers. Furthermore, installation of critical patches through the configuration management process for Windows systems was not timely. For example, critical Windows patches released in July 2006 had not yet been applied at the time of our review in August 2006. As a result, increased risk exists that the integrity of IRS systems could be compromised. Segregation of duties refers to the policies, procedures, and organizational structures that help ensure that no single individual can independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often, organizations achieve segregation of duties by dividing responsibilities among two or more individuals or organizational groups. This diminishes the likelihood that errors and wrongful acts will go undetected, because the activities of one individual or group will serve as a check on the activities of the other. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. The Internal Revenue Manual requires that IRS divide and separate duties and responsibilities of incompatible functions among different individuals, so that no individual shall have all of the necessary authority and system access to disrupt or corrupt a critical security process. IRS did not always properly segregate incompatible duties. For example, IRS established test accounts on a production server for its procurement system. Test accounts are used by system developers and are not typically found on production servers. 
Allowing test accounts on production servers creates the potential for individuals to perform incompatible functions, such as system development and production support. Granting this type of access to individuals who do not require it to perform their official duties increases the risk that sensitive information or programs could be improperly modified, disclosed, or deleted. Media destruction and disposal is a key to ensuring confidentiality of information. Media can include magnetic tapes, optical disks (such as compact disks), and hard drives. Organizations safeguard used media to ensure that the information it contains is appropriately controlled. Improperly disposed media can lead to the inappropriate or inadvertent disclosure of an agency’s sensitive information or the personally identifiable information of its employees and customers. This potential vulnerability can be mitigated by properly sanitizing the media. According to IRS policy, all media should be sanitized prior to disposal in such a manner that sensitive information on that media cannot be recovered by ordinary means. The policy further requires that IRS maintain records certifying that sanitation was performed. IRS did not have an appropriate process for disposing of information stored on optical disk. According to agency officials at one of the sites we visited, discarded optical disks were left unattended in a hallway bin awaiting destruction by the cleaning staff. These disks had not been sanitized, and IRS staff were unaware if the unattended disks contained sensitive information. Furthermore, the cleaning staff did not maintain records certifying that the media were destroyed. As a result, IRS could not ensure the confidentiality of potentially sensitive information stored on optical disks marked for destruction. The greatest harm or disruption to a system comes from the actions, both intentional and unintentional, of individuals. 
These intentional and unintentional actions can be reduced through the implementation of personnel security controls. According to NIST, personnel security controls help organizations ensure that individuals occupying positions of responsibility (including third-party service providers) are trustworthy and meet established security criteria for those positions. Organizations should also ensure that information and information systems are protected during and after personnel actions, such as terminations and transfers. Organizations can decrease the risk of harm or disruption of systems by implementing personnel security controls associated with personnel screening and termination. Personnel screening controls should be implemented when an individual requires access to facilities, information, and information systems before access is authorized. Organizations should also implement controls for when employment is terminated, including ceasing information system access and ensuring the return of organizational information system-related property (e.g., ID cards or building passes). According to the Internal Revenue Manual, contractor employees must complete a background investigation to be granted on-site, staff-like access to IRS facilities. However, if a background investigation has not been completed, individuals may not have access to IRS sensitive areas unless they are escorted by an IRS employee. The manual further states that managers are responsible for identifying separated employees in order to recover IRS assets, such as ID media. Separated employees’ accounts are to be deactivated within 1 week of an individual’s departure on friendly terms and immediately upon an individual’s departure on unfriendly terms. IRS did not always ensure the effective implementation of its personnel security controls. For example, at two sites, IRS granted contractors who did not have a completed background investigation unescorted physical access to sensitive areas. 
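The deactivation timelines in the manual—one week after a departure on friendly terms, immediately on unfriendly terms—can be expressed as a small compliance check. The employee records below are hypothetical, a sketch of the kind of report a manager could run against a personnel and account inventory:

```python
from datetime import date, timedelta


def deactivation_deadline(separation_date, friendly):
    """Date by which a separated employee's accounts must be deactivated:
    one week after a friendly departure, the separation date itself otherwise."""
    grace = timedelta(weeks=1) if friendly else timedelta(0)
    return separation_date + grace


def overdue_deactivations(employees, today):
    """Return IDs of separated employees whose accounts remain active past
    their deadline. `employees` holds tuples of
    (employee_id, separation_date, friendly, account_active)."""
    return [
        emp_id
        for emp_id, separated, friendly, active in employees
        if active and today > deactivation_deadline(separated, friendly)
    ]
```

Run regularly, a check like this would surface cases such as the separated individuals who retained application access for weeks or months.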
In addition, at all three sites we reviewed, IRS did not appropriately remove application access for separated personnel. For example, 19 individuals who had separated from IRS for periods ranging from 3 weeks to 14 months still maintained access to applications during our review this year. These practices increase the risk that individuals might gain unauthorized access to IRS resources. A key reason for the information security weaknesses in IRS’s financial and tax processing systems is that it has not yet fully implemented its agencywide information security program to ensure that controls are effectively established and maintained. FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes periodic assessments of the risk and the magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce risks, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; plans for providing adequate information security for networks, facilities, and systems; security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security; at least annual testing and evaluation of the effectiveness of information security policies, procedures, and practices relating to management, operational, and technical controls of every major information system that is identified in the agency’s inventories; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in its information security policies, procedures, or practices; and
plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. Although IRS continued to make important progress in developing and documenting a framework for its information security program, key components of the program had not been fully or consistently implemented. Identifying and assessing information security risks are essential to determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted in order to help ensure that these policies and controls operate as intended. The Office of Management and Budget (OMB) Circular A-130, appendix III, prescribes that risk be reassessed when significant changes are made to computerized systems, or at least every 3 years. Consistent with NIST guidance, IRS requires its risk assessment process to detail the residual risk assessed and potential threats, and to recommend corrective actions for reducing or eliminating the vulnerabilities identified. Although IRS had implemented a risk assessment process, it did not always effectively evaluate potential risks for the systems we reviewed. IRS has reassessed the risk level for each of its 264 systems and categorized them on the basis of risk. Furthermore, the five risk assessments that we reviewed were current, documented the assessed residual risk and potential threats, and recommended corrective actions for reducing or eliminating the vulnerabilities they identified. However, IRS did not identify many of the vulnerabilities described in this report and did not assess the risks associated with them. As a result, potential risks to these systems may be unknown. Another key element of an effective information security program is to develop, document, and implement risk-based policies, procedures, and technical standards that govern security over an agency’s computing environment.
If properly implemented, policies and procedures should help reduce the risk that could come from unauthorized access or disruption of services. Technical security standards provide consistent implementation guidance for each computing environment. Developing, documenting, and implementing security policies is important because they are the primary mechanisms by which management communicates its views and requirements; these policies also serve as the basis for adopting specific procedures and technical controls. In addition, agencies need to take the actions necessary to effectively implement or execute these procedures and controls. Otherwise, agency systems and information will not receive the protection that the security policies and controls should provide. Although IRS has developed and documented information security policies, standards, and guidelines that generally provide appropriate guidance to personnel responsible for securing information and information systems, it did not always provide needed guidance on how to guard against significant mainframe security weaknesses. For example, IRS policy lacked guidance on how to correctly configure certain mainframe IDs used by the operating system and certain powerful mainframe programs used to control processing. As a result, IRS has reduced assurance that its systems and the information they contain are sufficiently protected. An objective of system security planning is to improve the protection of information technology resources. A system security plan provides an overview of the system’s security requirements and describes the controls that are in place—or planned—to meet those requirements. OMB Circular A-130 requires that agencies develop system security plans for major applications and general support systems, and that these plans address policies and procedures for providing management, operational, and technical controls. 
IRS had developed system security plans for four of the five systems we reviewed. The plans addressed policies and procedures for providing management, operational, and technical controls. However, IRS had not developed a system security plan for the system that supports its general ledger for tax administration activities. As a result, IRS cannot ensure that appropriate controls are in place to protect this key financial system and critical information. People are one of the weakest links in attempts to secure systems and networks. Therefore, an important component of an information security program is providing required training so that users understand system security risks and their own role in implementing related policies and controls to mitigate those risks. IRS policy mandates that personnel with significant security responsibilities be provided with specialized training. In addition, IRS policy requires that personnel performing information technology security duties meet minimum continuing professional education levels in accordance with their roles. Specifically, personnel performing technical security roles are required to have 24 hours of specialized training per year, personnel performing nontechnical roles are required to have 16 hours of specialized training per year, and personnel performing executive security roles should have 6 hours of specialized training per year. IRS policy also requires that effective tracking and reporting mechanisms be in place to monitor specialized training. Although IRS has made significant progress in providing security personnel with job-related training and established a methodology for identifying employees with significant security responsibilities, in fiscal year 2006, at least 95 individuals with significant security responsibilities did not have the minimum number of hours of specialized training required by IRS policy. Of those 95 individuals, 18 had not completed any training for the last reporting year. 
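The role-based minimums above amount to a simple threshold check; the following is a minimal sketch of how such a compliance tally could work (the role labels and sample records are hypothetical illustrations, not drawn from IRS systems):

```python
# Illustrative sketch of the per-role annual training minimums described in
# IRS policy (24 technical / 16 nontechnical / 6 executive hours).
# Role labels and the sample records are hypothetical, not IRS data.
MIN_HOURS = {"technical": 24, "nontechnical": 16, "executive": 6}

def meets_minimum(role: str, hours_completed: int) -> bool:
    """True if the employee satisfies the annual minimum for the role."""
    return hours_completed >= MIN_HOURS[role]

def noncompliant(records):
    """Yield (name, role, shortfall_hours) for anyone below the minimum."""
    for name, role, hours in records:
        if not meets_minimum(role, hours):
            yield name, role, MIN_HOURS[role] - hours

staff = [("A", "technical", 30), ("B", "nontechnical", 10), ("C", "executive", 0)]
print(list(noncompliant(staff)))  # [('B', 'nontechnical', 6), ('C', 'executive', 6)]
```

A tracking system that cannot distinguish roles cannot apply per-role thresholds like these, which is the gap noted next for the Enterprise Learning Management System.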
In addition, IRS was not able to determine whether all of its employees had met minimum continuing professional education requirements. For example, IRS monitored employee training through its Enterprise Learning Management System, but the system could not differentiate between employees who are required to have only 6 hours of training and employees who are required to have more. Furthermore, IRS did not track all security-related training courses taken by its employees. These conditions increase the risk that employees and contractors may not be aware of their security responsibilities. Another key element of an information security program is to test and evaluate policies, procedures, and controls to determine whether they are effective and operating as intended. This type of oversight is a fundamental element because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results improve the security program. FISMA requires that the frequency of tests and evaluations be based on risks and occur no less than annually. IRS policy also requires periodic testing and evaluation of the effectiveness of information security policies and procedures. IRS tested and evaluated information security controls for each of the systems we reviewed. However, these evaluations did not address many of the vulnerabilities we have identified in this report. For example, IRS’s test and evaluation plan for its procurement system did not include tests for password expiration, insecure protocols, or the removal of employees’ system access after separation from the agency. As a result, IRS has limited assurance that it has appropriately implemented controls, and it will be less able to identify needed controls.
A remedial action plan is a key component described in FISMA. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. According to IRS policy, the agency should document weaknesses found during security assessments as well as document any planned, implemented, and evaluated remedial actions to correct any deficiencies. The policy further requires that IRS track the status of resolution of all weaknesses and verify that each weakness is corrected. IRS has developed and implemented a remedial action process to address deficiencies in its information security policies, procedures, and practices; however, this remedial action process was not working as intended. For example, the verification process used to determine whether remedial actions were implemented was not always effective. Of the 73 previously reported weaknesses, IRS had indicated that it had corrected or mitigated 57 of them. However, of those 57 weaknesses, 33 still existed at the time of our review. In addition, IRS had identified weaknesses but did not document them in a remedial action plan. For example, we reviewed system self-assessments for five systems and identified at least 8 weaknesses not documented in a remedial action plan. These weaknesses pertained to system audit trails, approval and distribution of continuity of operations plans, and documenting emergency procedures. TIGTA also reported that IRS was not tracking all weaknesses found during security assessments in 2006. As a result, increased risk exists that known vulnerabilities will not be mitigated. IRS did not proactively ensure that weaknesses found at one of its facilities or on one of its systems were considered and, if necessary, corrected at other facilities or on similar systems. Many of the issues identified in this report were previously reported at other locations and on similar systems.
Yet, IRS had not applied those recommendations to the facilities and systems we reviewed this year. For example, we have been identifying weaknesses with encryption at IRS since 1998. However, IRS was not using encryption to protect information traversing its network. In addition, in 2002 we recommended that IRS promptly remove system access for separated employees and verify that system access has been removed. Nevertheless, IRS did not promptly remove system access for separated employees. Recognizing the need for a servicewide solution, IRS developed a plan in October 2006 to address many of the recurring weaknesses. This plan includes remedial actions to address various weaknesses such as access authorization, audit and monitoring, configuration management, and testing of technical controls. According to IRS, the plan should be fully implemented by fiscal year 2012. However, until IRS fully implements its plan to address recurring weaknesses, it may not be able to adequately protect its information and information systems from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction. Continuity of operations planning is a critical component of information protection. To ensure that mission-critical operations continue, it is necessary to be able to detect, mitigate, and recover from service disruptions while preserving access to vital information. The elements of robust continuity of operations planning include, among others, identifying preventative controls (e.g., environmental controls); developing recovery strategies, including alternative processing locations; and performing disaster recovery exercises to test the effectiveness of continuity of operations plans. According to NIST, systems need to have a reasonably well-controlled operating environment, and failures in environmental controls such as air-conditioning systems may cause a service interruption and may damage hardware. 
IRS policy mandates that an alternate processing site be identified, and that agreements be in place when the primary processing capabilities are unavailable. The policy further requires that each application’s recovery plan be tested on a yearly basis. IRS did not have adequate environmental controls at one of the sites we visited. For example, the air-conditioning system for the computer room that houses the procurement system could not adequately cool down the systems in the room and was supplemented by a portable fan. In addition, the fire extinguishers for the same room had not had an up-to-date inspection. Without providing adequate environmental controls, IRS is at increased risk that critical system hardware may be damaged. Also, IRS had established alternate processing sites for four of the five applications we reviewed. However, it did not have an alternate processing site for its procurement system, and it had not tested the application’s recovery plan. As a result, unforeseen events could significantly impair IRS’s ability to fulfill its mission. IRS has made important progress in correcting or mitigating previously reported weaknesses, implementing controls over key financial and tax processing systems, and developing and documenting a solid framework for its agencywide information security program. However, information security weaknesses—both old and new—continue to impair the agency’s ability to ensure the confidentiality, integrity, and availability of financial and sensitive taxpayer information. These deficiencies represent a material weakness in IRS’s internal controls over its financial and tax processing systems. A key reason for these weaknesses is that the agency has not yet fully implemented critical elements of its agencywide information security program. 
Until IRS (1) fully implements a comprehensive agencywide information security program that includes risk assessments, enhanced policies and procedures, security plans, training, adequate tests and evaluations, and a continuity of operations process for all major systems and (2) begins to address weaknesses across the service, its facilities, computing resources, and the financial and sensitive taxpayer information on its systems will remain vulnerable. To help establish effective information security over key financial and tax processing systems, financial and sensitive taxpayer information, and interconnected networks, we recommend that you take the following 10 actions to implement an agencywide information security program:

- update the risk assessments for the five systems reviewed to include the vulnerabilities identified in this report;
- update policies and procedures to include guidance on configuring mainframe IDs used by the operating system and certain powerful mainframe programs used to control processing;
- develop a system security plan for the system that supports the general ledger for tax administration activities;
- enhance the Enterprise Learning Management System to include all security-related training courses taken by IRS employees and contractors and to differentiate required training hours for all employees;
- update test and evaluation procedures to include tests for vulnerabilities identified in this report, such as password expiration, insecure protocols, and removal of system access after separation from the agency;
- implement a revised remedial action verification process that ensures actions are fully implemented;
- document weaknesses identified during security assessments in a remedial action plan;
- provide adequate environmental controls for the computer room that houses the procurement system, such as a sufficient air-conditioning system and up-to-date fire extinguishers;
- establish an alternate processing site for the procurement application; and
- test the
procurement system recovery plan. We are also making 50 detailed recommendations in a separate report with limited distribution. These recommendations consist of actions to be taken to correct the specific information security weaknesses related to user identification and authentication, authorization, cryptography, audit and monitoring, physical security, configuration management, segregation of duties, media destruction and disposal, and personnel security. In providing written comments (reprinted in app. I) on a draft of this report, the Commissioner of Internal Revenue stated that IRS understands that information security controls are essential for ensuring information is adequately protected from inadvertent or deliberate misuse, disruption, or destruction. He also noted that IRS has taken several steps to create a strong agencywide information security program as required by FISMA. The commissioner recognized that continued diligence of IRS’s security and privacy responsibilities is required, and he further stated that IRS will continue to remedy all recommendations to completion to ensure that operations of its applications and systems adhere to security requirements. This report contains recommendations to you. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days from the date of the report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of recommendations, GAO requests that the agency also provide it with a copy of your agency’s statement of action to serve as preliminary information on the status of open recommendations. 
We are sending copies of this report to interested congressional committees and the Secretary of the Treasury. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact Gregory Wilshusen at (202) 512-6244 or Keith Rhodes at (202) 512-6412. We can also be reached by e-mail at [email protected] and [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the persons named above, Don Adams, Bruce Cain, Mark Canter, Nicole Carpenter, Jason Carroll, West Coile, Denise Fitzpatrick, Edward Glagola Jr., David Hayes, Kevin Jacobi, Jeffrey Knott (Assistant Director), George Kovachick, Joanne Landesman, Leena Mathew, Kevin Metcalfe, Amos Tevelow, and Chris Warweg made key contributions to this report.

In fiscal year 2006, the Internal Revenue Service (IRS) collected about $2.5 trillion in tax payments and paid about $277 billion in refunds. Because IRS relies extensively on computerized systems, effective information security controls are essential to ensuring that financial and taxpayer information is adequately protected from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction. As part of its audit of IRS's fiscal years 2006 and 2005 financial statements, GAO assessed (1) IRS's actions to correct previously reported information security weaknesses and (2) whether controls were effective in ensuring the confidentiality, integrity, and availability of financial and sensitive taxpayer information.
To do this, GAO examined IRS information security policies and procedures, guidance, security plans, reports, and other documents; tested controls over five critical applications at three IRS sites; and interviewed key security representatives and management officials. IRS has made limited progress toward correcting or mitigating previously reported information security weaknesses at two data processing sites, but 66 percent of the weaknesses that GAO had previously identified still existed. Specifically, IRS has corrected or mitigated 25 of the 73 information security weaknesses that GAO reported as unresolved at the time of our last review. For example, IRS has improved password controls on its servers and enhanced audit and monitoring efforts for mainframe and Windows user activity, but it continues to (1) use inadequate account lockout settings for Windows servers and (2) inadequately verify employees' identities against official IRS photo identification. Significant weaknesses in access controls and other information security controls continue to threaten the confidentiality, integrity, and availability of IRS's financial and tax processing systems and information. For example, IRS has not implemented effective access controls related to user identification and authentication, authorization, cryptography, audit and monitoring, physical security, and other information security controls. These weaknesses could impair IRS's ability to perform vital functions and increase the risk of unauthorized disclosure, modification, or destruction of financial and sensitive taxpayer information. Accordingly, GAO has reported a material weakness in IRS's internal controls over its financial and tax processing systems. A primary reason for the new and old weaknesses is that IRS has not yet fully implemented its information security program. IRS has taken a number of steps to develop, document, and implement an information security program. 
However, the agency has not yet fully or consistently implemented critical elements of its program. Until IRS fully implements an agencywide information security program that includes risk assessments, enhanced policies and procedures, security plans, training, adequate tests and evaluations, and a continuity of operations process for all major systems, the financial and sensitive taxpayer information on its systems will remain vulnerable.
The Navy can maintain a 12-carrier force for less cost than that projected in the Bottom-Up Review (BUR) and the Navy’s Recapitalization Plan by using one of several options that consider cost and employment levels. The least expensive investment option that also maintains employment levels at or above minimum levels authorizes building the CVN-76 in fiscal year 1995 and then transitions to a conventional carrier construction program. This option costs approximately 25 percent less than the BUR and the Navy’s Recapitalization Plan options. Building CVN-76 in fiscal year 1995, as proposed by the BUR, the Navy’s Recapitalization Plan, and other options in our report (see table 1.1), stops the downward trend in Newport News Shipbuilding employment at about the minimum sustaining level of 10,000 employees. Options to delay building the carrier result in a continuing decline to about 7,500 employees. However, in the long term the employment levels in the BUR and the Navy’s Recapitalization Plan also fall below 10,000 employees. In addition, options that include building CVN-76 in fiscal year 1995 require building carriers sooner than they are needed for force structure purposes and therefore incur expenses sooner than necessary. Moreover, the option to build nuclear carriers at the historical rate of one every 3 years maintains stable employment levels but costs about 40 percent more than options in the BUR and the Navy’s Recapitalization Plan. Options for using carriers for their full service lives (options 1A and 1B) are less expensive than those in the BUR and the Navy’s Recapitalization Plan, especially if the force transitions to a conventional carrier construction program. However, in the near term, the employment levels fall below the Navy’s estimated critical minimum sustaining level of 10,000 employees. 
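The cost comparisons above are simple percentage differences; the sketch below illustrates the arithmetic (the $42 billion option figure is an assumed illustration implied by "approximately 25 percent less" applied to the BUR's long-term outlays, which the report later puts at more than $56 billion):

```python
def pct_less(option_cost_b: float, baseline_cost_b: float) -> float:
    """Percent by which an option's cost undercuts a baseline's cost."""
    return 100 * (baseline_cost_b - option_cost_b) / baseline_cost_b

# BUR long-term outlays (FY 1995-2035): more than $56 billion per the report.
BUR_OUTLAYS_B = 56.0     # $ billions, lower bound taken from the text
OPTION_OUTLAYS_B = 42.0  # assumed figure implied by "approximately 25 percent less"

print(pct_less(OPTION_OUTLAYS_B, BUR_OUTLAYS_B))  # 25.0
```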
Since affordability of the future force is an important concern, a transition to constructing conventionally powered carriers would save the largest amount of investment resources (see table 1.1). A conventional carrier force structure would require less budget authority funding and fewer outlays than any force structure that continues to require building nuclear aircraft carriers. Costs are lower because all major cost elements—procurement, midlife modernization, and inactivation costs—are lower for a conventional carrier than for a nuclear carrier. Throughout the 1960s and most of the 1970s, the Navy pursued a goal of creating a fleet of nuclear carrier task forces. The centerpiece of these task forces, the nuclear-powered aircraft carrier, would be escorted by nuclear-powered surface combatants and nuclear-powered submarines. In deciding to build nuclear-powered surface combatants, the Navy believed that the greatest benefit would be achieved when all the combatant ships in the task force were nuclear powered. Nonetheless, the Navy procured the last nuclear-powered surface combatant in 1975 because this vessel was so expensive. More recently, relatively new and highly capable nuclear-powered surface combatants have been decommissioned because of the affordability problems facing the Navy. Affordability is an important, but not the only, criterion when comparing nuclear and conventional carriers. Important factors also include operational effectiveness, potential utilization, and other intangibles. Flexibility of operations, such as the ability to steam at high speeds for unlimited distances without refueling; increased capacity for aviation fuel; increased capacity for other consumables, such as munitions; and the higher speeds of the advanced nuclear carrier over conventional carriers are some of the factors that need to be considered when evaluating nuclear- and conventionally powered carriers. 
Other considerations include the availability and location of homeports and nuclear-capable shipyards for maintenance and repairs and other supporting infrastructure, such as for training; the effect of out-of-homeport maintenance on the amount of time personnel are away from their homeport; and the disposal of nuclear materials and radioactively contaminated materials. These issues and others will be addressed in our upcoming review on the cost-effectiveness of conventional versus nuclear carriers and submarines as mandated by the congressional conferees on the Defense Appropriations Act for 1994. Department of Defense (DOD) officials partially concurred with the results of our report. DOD agreed that affordability is an important, but not the only, criterion when comparing nuclear and conventional carriers. DOD stated that other factors, including operational effectiveness and potential utilization, need to be considered when comparing nuclear and conventional carriers. We agree, and these issues will be examined as part of our upcoming review of the cost-effectiveness of conventional versus nuclear carriers and submarines. DOD noted that we did not examine the impact of alternative investment strategies on the Newport News Shipbuilding nuclear carrier industrial base, nuclear construction skills and vendors, or the need to preserve the base. We noted those limitations to the report’s scope in our draft. Our report does reflect the employment levels resulting from the investment options, and the Navy’s comments on the likely effects of those employment curves are in our report. DOD also noted that our report compares only the investment-related cost of a nuclear-powered carrier with that of a conventionally powered carrier and not the operating and support component of total life-cycle costs, including the fuel cost. 
DOD stated that the potential requirement to build additional logistics support ships must be considered in the decision to build and operate a conventionally powered carrier force. As we noted in the draft report, our analysis focused on the investment-related costs of alternative procurement profile strategies. Although outside the scope of this review, we have estimated the operating and support costs of a nuclear carrier and a conventional carrier of the general type used in our investment analysis (see table 1.2). The annualized life-cycle cost of a modern fleet oiler is about $19.6 million. A recent Center for Naval Analyses study suggests that the conventional carrier’s incremental support requirements would be less than one fleet oiler per carrier. We have not verified this data. Our upcoming review will examine in greater detail the life-cycle costs of nuclear and conventional carriers, considering the incremental fuel-driven demand of conventional carriers for additional logistics support ships. The objective of the BUR strategy is to maintain a 12-carrier force, maintain the industrial base at NNS, avoid cost increases associated with a delay in construction, and preserve carrier force size flexibility. Under the BUR, the Navy would purchase CVN-76 in fiscal year 1995 consistent with a sustaining rate strategy but would shift to a replacement rate strategy beginning with CVN-77. The Navy’s Recapitalization Plan transfers resources from the Navy’s infrastructure and savings from a smaller fleet to fund the Navy’s protected major procurement accounts, including the carrier program, in order to maintain the BUR force structure and/or critical industrial capabilities. Under the Navy’s recapitalization strategy, the Navy would buy CVN-76 in fiscal year 1995 but would defer CVN-77 until fiscal year 2002 and then shift to a sustaining rate strategy of one carrier every 4 years. 
The BUR and the Navy’s Recapitalization Plan were analyzed to determine the effects of their strategies on the carrier force structure, financial investment requirements, and the Newport News Shipbuilding total employment level. In addition, we analyzed eight alternatives for structuring a 12-carrier force to achieve one of the following objectives:

1. Maximize budgetary savings through a carrier replacement rate strategy. This approach maximizes the carriers’ useful service lives and builds new carriers when actually needed to sustain force levels. (See the analysis and discussion of alternatives 1A and 1B.)
2. Maximize the stability of Newport News Shipbuilding (NNS) employment through a sustained rate construction and refueling/complex overhaul program. This approach requires forgoing useful service life by accelerating inactivations to maintain a sustained rate production program. (See the analysis and discussion of alternatives 2A and 2B.)
3. Optimize budgetary savings and employment level stability. This approach optimizes the service lives of nuclear carriers and provides a stable employment base. (See the analysis and discussion of alternative 3.)
4. Delay building the new carrier to defer near-term outlays and reduce overall carrier program costs. The new starts for a nuclear carrier force were planned for fiscal years 1998 and 2000 and fiscal year 2002 for a conventional carrier force. (See the analysis and discussion of alternatives 4A, 4B, and 4C.)

The following discusses our analyses of DOD’s and the Navy’s baseline force structure plans and the options we developed based on the four planning objectives and force structure investment strategies. We analyzed each option’s impact on force structure and the trade-offs between budgetary requirements and overall employment levels at NNS.
Under the BUR’s baseline force structure option to support a 12-carrier force (i.e., 11 active carriers and 1 operational reserve/training carrier), CVN-76 is funded in fiscal year 1995, necessitating the early retirement of the U.S.S. Kitty Hawk (CV-63). After CVN-76 the Navy plans to procure new carriers when needed to maintain force levels. This approach results in fluctuating intervals of 2 to 7 years for the construction of new carriers, but maximizes the notional 50-year service life of current and planned nuclear-powered carriers. To sustain their full 50-year service life, nuclear carriers will be refueled after approximately 23 years of service. (See fig. 2.1.) Figure 2.2 shows that this option halts the rapid decline in employment at NNS at just above the 10,000-employee level, the minimum level needed to sustain the shipyard’s viability, according to the Navy. If scheduled CVN construction is delayed, the Navy stated it would, at a minimum, have to expand the number of regular overhauls at NNS and take action to preserve the nuclear component and shipbuilding industrial base. The BUR option provides a near-term solution to the employment level decline, although it may be difficult for the shipyard to economically administer the drastic shifts in the employment levels at the yard between fiscal years 1998 and 2033. Substantial declines in employment at NNS are projected to bottom out in fiscal years 1998, 2004, 2014, 2024, and 2033. The drastic decline beginning in fiscal year 2010 reduces the workforce by about 13,000, dropping total employment below the minimum level. Although DOD believes that this option is cost-effective, it totals over $4.2 billion in the short term (fiscal years 1995-99), and its cost over the long term (fiscal years 1995-2035) totals more than $56 billion. Only one option, which reduces the service life of nuclear carriers to 37 years, has larger outlays than the BUR baseline force model (see discussion of alternative 2A).
The Navy’s Recapitalization Plan was developed to fulfill the requirements of the BUR. This plan calls for funding CVN-76 in fiscal year 1995 and building new nuclear carriers in 4-year intervals beginning in fiscal year 2002, as shown in figure 2.3. The plan requires that some assets be retired early to buy newer equipment. The U.S.S. Kitty Hawk (CV-63) will be retired 3 years before the end of its projected service life to maintain the 12-carrier force level when CVN-76 enters the fleet. To sustain the 4-year build interval, five other carriers will be retired early: the U.S.S. Enterprise (CVN-65) will be inactivated 2 years early, the U.S.S. Dwight D. Eisenhower (CVN-69) and the U.S.S. Carl Vinson (CVN-70) will be retired 3 years before the end of their projected service lives, and the U.S.S. Nimitz (CVN-68) and the U.S.S. Theodore Roosevelt (CVN-71) will be decommissioned 4 years early. The Navy will prematurely incur large inactivation costs, currently estimated at almost $1 billion each, for the early inactivations of these Nimitz-class carriers. The plan maintains approximately the same employment level at NNS as the BUR baseline force structure option through fiscal year 2001 (see fig. 2.4). Between fiscal years 2010 and 2034, the plan maintains an average total employment level above the projected level for the BUR option. Except for declines in total employment in fiscal years 2003-5, 2017-18, and 2029-31, this option maintains shipyard employment between 15,000 and 23,000 after fiscal year 2001 due to the consistent 4-year construction interval. Although the outlays are slightly lower than those in the BUR option in the near term (1995-99) due to a 1-year delay in CVN-77, the outlays for the mid-term (fiscal years 1995-2015) and long term (fiscal years 1995-2035) are higher than those in the BUR option due to the consistent 4-year new construction interval and the additional premature inactivations of Nimitz-class carriers. 
Outlays for fiscal years 1995-2035 total almost $59 billion, about $2.5 billion higher than in the BUR option. Using this force structure option, the Navy builds a new carrier only to replace a carrier that has to be inactivated at the end of its service life (see fig. 2.5). The U.S.S. Independence (CV-62) is the last carrier to be decommissioned before the end of its service life to maintain a 12-carrier force level when the U.S.S. United States (CVN-75) enters the force. All Nimitz-class carriers will use their entire projected 50-year service lives, which will require that each receive a nuclear refueling complex overhaul at 23 years. This option’s construction schedule leads to a variable build interval; construction starts may be anywhere from 3 to 10 years apart. Construction for CVN-76 begins in fiscal year 1999, and the ship will replace the U.S.S. Kitty Hawk (CV-63) in fiscal year 2006. Figures 2.5 and 2.6 show that although the Navy receives the full value of its carrier force investment, workforce management is complicated by several short-term surges in total employment and then large drop-offs because of the varying build intervals. Those changes in employment levels are similar to those in the BUR baseline force option, although the drop-off between fiscal years 1996 and 2000 under this option is much more drastic, with the employment level falling below 10,000. The workload gap could be filled by having the government direct other work to the shipyard or reschedule delivery of work under contract. Employment at the shipyard improves under this option in the mid- and long terms. Between fiscal years 2001 and 2015, the total employment level at NNS is generally at a higher level than in the BUR option. After fiscal year 2020, this option’s total employee level has fewer major shifts over the remaining 15 years of the period we analyzed than the BUR option.
Since new ship construction and inactivations occur only when needed under this option, money is not spent prematurely on procurement and major investment costs. Outlays are less than half of those incurred under the BUR option for fiscal years 1995-99 but are only $161 million less than those between fiscal years 1995 and 2035 because, in the long term, the BUR maintains a similar replacement-rate new carrier construction strategy. Outlays for this option in the long term are higher than those in the options delaying CVN-76’s construction start to fiscal years 1998 and 2000; however, in the near term, this option requires over $530 million less in outlays than the option that builds CVN-76 in fiscal year 1998 due to the additional 1-year delay in CVN-76’s construction start. The government will receive the full value of its investment in aircraft carriers under this option because both conventional and nuclear carriers will remain in the active fleet until the end of their expected service lives (see fig. 2.7). Nimitz-class nuclear carriers receive nuclear refuelings and complex overhauls after 23 years and are inactivated at the end of their 50-year service lives. Conventional carriers remain active for 45 years, entering the service life extension program after 30 years of service. After fiscal year 1994, only the U.S.S. Independence (CV-62) is inactivated before the end of its projected service life so that the U.S.S. United States (CVN-75) can be commissioned into the fleet in fiscal year 1998. This early inactivation will allow the Navy to maintain the 12-carrier force level, and carriers will only be built to replace others. The next carrier, CVA-76, is programmed to begin construction in fiscal year 2000 at NNS, and new construction start intervals would fluctuate between 3 and 10 years, similar to the BUR baseline force structure option.
Figure 2.8 shows that this fluctuating new construction start rate results in a total employee level profile similar to that in the BUR option. During the near-term period of fiscal years 1995-99, the employment level under this option ranges from 7,500 to 10,000, compared with 11,000 to 15,000 under the BUR option. The decrease in the employment level could be mitigated by other shipyard work being directed by the government to NNS or by bidding for projects in the commercial shipbuilding market, such as liquefied natural gas tankers or cruise ships. Since this option requires new ship construction and decommissioning only when needed, major procurement and investment costs are not incurred prematurely. Therefore, this option has the lowest value of outlays in the long term. Outlays for this option are over $2 billion less between fiscal years 1995 and 2015 and $6.5 billion less between fiscal years 1995 and 2035 than the option that transitions to conventional carrier construction with CVA-77. Also, this option’s outlays are approximately one-third less than those for the BUR baseline force structure option for fiscal years 1995-2015 and approximately 37 percent less than those between fiscal years 1995 and 2035. This option emphasizes maximizing the stability of NNS’ employment level through a sustained rate of new carrier construction, regardless of cost (see fig. 2.9). New nuclear carrier construction starts begin in fiscal year 1995 at a historical rate of every 3 years. All nuclear carriers receive their nuclear refuelings and complex overhauls but are retired early, after approximately 37 years. Conventional carriers in the fleet, the U.S.S. Independence (CV-62), the U.S.S. Kitty Hawk (CV-63), and the U.S.S. Constellation (CV-64), are retired before the end of their expected service lives as well. The benefit of this option is that NNS could sustain a workforce averaging over 20,000 employees with very few shifts in the overall employment level (see fig.
2.10). Employment levels remain above those under the BUR option throughout the 1995 to 2035 time frame. Constructing new nuclear carriers every 3 years is extremely expensive, and the outlays are significantly greater than those in the BUR baseline force structure option in the near term (fiscal years 1995-99), mid-term (fiscal years 1995-2015), and long term (fiscal years 1995-2035). This option requires more outlays because maintaining a 12-carrier force level at this construction rate requires the Navy to retire all of its carriers early, most with 25 percent of their service life remaining. Therefore, the Navy will need to fund costly nuclear carrier inactivations prematurely. This option procures 14 carriers between fiscal years 1995 and 2035, compared with 10 carriers under the BUR plan. This investment strategy represents the long-term investment implications of building carriers at historical rates to protect the carrier shipbuilding industrial base and employee levels. To support a sustained-rate construction program, the Navy would need to inactivate eight Nimitz-class nuclear carriers prematurely with 20 percent of their useful service life remaining. The new conventional carrier construction start is programmed for fiscal year 2000, and the follow-on conventional carriers have construction starts every 3 years. (See fig. 2.11.) No nuclear carriers are built after the completion of the U.S.S. United States (CVN-75). The nuclear capabilities at NNS would be sustained through a series of nuclear refuelings and complex overhauls of the Nimitz-class carriers through fiscal year 2024, some or all of the decommissioning work of the nuclear carrier fleet, and other nuclear repair and maintenance work. None of the remaining conventionally powered carriers would be decommissioned early except for the U.S.S. Independence (CV-62) to maintain a 12-carrier force when the U.S.S. United States (CVN-75) is brought into service in fiscal year 1998. 
NNS will have a severe drop-off in its workload between fiscal years 1996 and 2000 (see fig. 2.12) unless other work is directed to the shipyard. Consolidating all Atlantic Coast-based nuclear shipbuilding and overhaul work at NNS would help maintain nuclear capabilities and help mitigate the severe drop-off in the workload. Between fiscal years 2000 and 2014, the employment level at the shipyard averages about 17,500 employees, and between fiscal years 2015 and 2025 the employment level averages about 22,000 employees. In fiscal year 2026, the shipyard’s workforce level drops below 15,000 employees and does not return to the 15,000-employee level until fiscal year 2027. Due to the frequent new construction starts and the earlier decommissioning of the Nimitz-class nuclear carriers, this option costs approximately $8 billion more in the long term (fiscal years 1995-2035) than the conventional replacement rate strategy. During the near-term period (fiscal years 1995-99) this option still costs less than the conventional carrier option that builds CVA-77 in fiscal year 2002 because this option delays the new construction start and cancels the construction of CVN-76. Maximizing the NNS employment levels through a high-production rate is a very costly approach to maintaining a carrier force level in the long term, and the value of the total outlays is higher during this period than in any other conventional option. However, this option is still $11.5 billion less than the BUR option over the long term. This option is consistent with DOD’s plan to request funding for CVN-76 in fiscal year 1995. The next ship, however, would be a new design conventional carrier as shown in figure 2.13. The BUR report recommended the deferment of the advance procurement funding beyond fiscal year 1999 for the carrier after CVN-76 pending the completion of an evaluation of alternative aircraft carrier concepts for the next century, including the conventional carrier force option. 
Under this option, the construction start for CVA-77 is in fiscal year 2002. New starts for follow-on conventional ships are at 4-year intervals, which would support a sustained rate production program at NNS. The employment level under this option is projected to have fewer extreme increases and drop-offs than in the BUR plan. Nuclear carriers currently in the fleet will have 45- to 48-year service lives, requiring all of them to undergo nuclear refuelings and complex overhauls. Both the U.S.S. Independence (CV-62) and the U.S.S. Kitty Hawk (CV-63) will be inactivated 6 and 3 years, respectively, before the end of their estimated service lives. The plan requires that the U.S.S. John F. Kennedy (CV-67) remain in the active fleet 5 years longer than currently planned. This longer service life may be feasible for the ship in its new role as the reserve/training carrier because it will have a reduced tempo of operations, resulting in a reduced amount of “wear and tear.” This option maintains the workforce at NNS above the 10,000-employee level throughout fiscal years 1995-2035. The shipyard maintains a very stable employment level after fiscal year 2006—the workforce fluctuates between approximately 15,000 and 20,000 employees in fiscal years 2006-27, with only one significant drop in employment in fiscal year 2015. After fiscal year 2027, the employment level ranges between 11,900 and 16,500. (See fig. 2.14.) Since this option requires building CVN-76 in fiscal year 1995, the near-term outlays are similar to those in the BUR baseline option. However, in the mid-term (fiscal years 1995-2015) and long term (fiscal years 1995-2035), the outlays are approximately 25 percent less than those in the BUR option. These savings could help reduce the Navy’s Recapitalization Plan projected annual funding shortfall of $3.5 billion in fiscal years 1999 and beyond.
If the construction start for the next nuclear carrier—CVN-76—is delayed 3 years to fiscal year 1998, the Navy could maintain a 12-carrier force and maximize the service lives of its nuclear carriers. (See fig. 2.15.) All nuclear carriers will be refueled and overhauled after approximately 23 years of service, extending each to its full 50-year service life. This option creates fewer drastic shifts in the overall employment level than the BUR option because it has a new carrier construction start rate of every 4 to 5 years compared with the BUR rate of 3 to 7 years. Two conventional carriers, the U.S.S. Kitty Hawk (CV-63) and the U.S.S. Constellation (CV-64), are retained in the active fleet for several years longer than projected in the BUR option and are inactivated closer to or at the end of their projected useful lives. This alternative also retains the U.S.S. John F. Kennedy (CV-67) in the fleet 7 years past the BUR option’s plan. This ship, in its new role as the reserve/training carrier, will have a reduced tempo of operations and thus a reduced amount of wear and tear. Other carriers are replaced when required to meet force structure needs. Under this option, NNS’ employment level drops to around 7,500 employees and remains below the critical 10,000-employee level for about 3 years. As shown in figure 2.16, overall employment is more stable during fiscal years 2005 through 2034 than under the BUR option. Increased stability in shipyard employment requires fewer adjustments to the workforce over time. Compared to the BUR option, this option’s employment troughs are significantly smaller in fiscal years 2004, 2018, and 2025-26. The Navy could mitigate the employment decline in fiscal year 1998 by redirecting other shipbuilding and maintenance work to the yard, or, as the BUR suggested, by rescheduling the delivery of carriers under contract, overhauls, and other work.
DOD’s financial investment requirement for this option is less than in the BUR option for the near term (fiscal years 1995-99), mid-term (fiscal years 1995-2015), and long term (fiscal years 1995-2035). Outlays from fiscal years 1995 to 1999 for this option are approximately $1.6 billion less than those under the BUR option. Under this option, the Navy generally retains each nuclear carrier to the end of its useful 50-year service life and therefore will need to refuel each nuclear carrier after 23 years (see fig. 2.17). Two conventional carriers, the U.S.S. Kitty Hawk (CV-63) and U.S.S. Constellation (CV-64), are retained in the active fleet to the end of their expected service lives. Also, the U.S.S. John F. Kennedy (CV-67) will remain in the active fleet for a total of 50 years, 7 years longer than projected in the BUR option. This should be feasible, since the carrier will have a reduced tempo of operations as the reserve/training carrier. Only two nuclear carriers are retired before the end of their useful service lives—the U.S.S. Enterprise (CVN-65) 1 year early and the U.S.S. Nimitz (CVN-68) 2 years early. In addition, this option builds new carriers to replace carriers that are at the end of their service lives, which will lead to a stable new construction start rate every 4 to 5 years. DOD considered delaying the construction of CVN-76 until fiscal year 2000. However, the BUR concluded that, as a result of the delay, existing contracts would not be completed until the mid-1990s, and a lack of subsequent orders would threaten NNS’ viability by 1997. NNS will need to fill in a large gap in workload between fiscal years 1996 and 2001. The shipyard does have the capability to construct nuclear submarines and other surface ships and therefore could complete other types of shipyard work to compensate for the drop-off in workload. The shipyard will begin the nuclear refueling complex overhaul of the U.S.S.
Nimitz (CVN-68) in fiscal year 1998 while it completes construction work on the U.S.S. United States (CVN-75), scheduled for commissioning in fiscal year 1998. This work will enable NNS to sustain a nuclear-capable workforce. Figure 2.18 shows that the overall employment level at NNS is at or below the critical 10,000-employee level in fiscal years 1996-2001. This option does not have as large a drop-off in the projected total workforce beginning in fiscal year 2014 as either the BUR option, in which the employment level drops below 10,000, or the option to start construction of CVN-76 in fiscal year 1998. The financial outlays required for this option are less than any of the nuclear carrier force structure options for the near term (fiscal years 1995-99) and long term (1995-2035). Using this option, the Navy would not build a nuclear carrier before the transition to a conventional carrier construction program in fiscal year 2002, with the start of CVA-76. This option provides a 7-year design period, sustains a steady new carrier construction start interval of 3-1/2 years, and fully utilizes the service lives of almost all of the conventional carriers in the fleet. (See fig. 2.19.) The delay in the construction start enables several conventional carriers in the active force to remain in service longer than in the BUR plan. This option also provides for longer service lives for most carriers currently in the active fleet than under the Navy’s Recapitalization Plan. The U.S.S. Kitty Hawk (CV-63) and U.S.S. Constellation (CV-64) remain active slightly beyond their estimated notional lives, enabling these ships to complete a last deployment within their last maintenance cycle. The U.S.S. John F. Kennedy (CV-67) is programmed for a 50-year service life because of its reduced tempo of operations as the reserve/training carrier.
Nimitz-class nuclear carriers remain in the fleet for 47 to 50 years. This option requires all Nimitz-class nuclear carriers to undergo nuclear refuelings and complex overhauls. As shown in figure 2.20, deferring construction of the next carrier until fiscal year 2002 results in continuing near-term declines in employment levels at NNS. The only carrier program work expected in the shipyard during that time period is the completion of construction of the U.S.S. United States (CVN-75) and the nuclear refueling complex overhaul of the U.S.S. Nimitz (CVN-68), which begins in fiscal year 1998. NNS would need other work to bring levels above the critical 10,000-employee level between fiscal years 1996 and 2001. After this period, employment levels average from 15,000 to 20,000 persons through fiscal year 2024. This option requires fewer outlays than any other option we examined except for option 1B’s (conventional carrier replacement rate) long-term estimate. The reduction in outlays is a result of delaying the construction start of the next aircraft carrier until fiscal year 2002, building conventional carriers that have a much lower procurement cost, and retaining carriers longer in the active fleet. The near-term outlays (fiscal years 1995-99) are approximately 35 percent of the BUR option’s outlays for the same period. In the long term (fiscal years 1995-2035), this option will save almost $19 billion in outlays over the amount projected to be spent for the BUR option. This option costs approximately $4.5 billion less in the long term than the option that begins conventional carrier construction with CVA-77.

GAO reviewed the Navy's aircraft carrier program, focusing on: (1) the budget implications of the options for meeting the Department of Defense's Bottom-Up Review (BUR) force structure requirement for 12 carriers; and (2) each option's effect on the shipbuilding contractor's employment levels.
GAO found that: (1) there are several available options for maintaining the 12-carrier force at less cost than projected in the BUR and the Navy Recapitalization Plan; (2) the least expensive option maintains employment levels at or above minimum levels, authorizes building the proposed nuclear carrier in fiscal year 1995, switches to construction of conventionally powered carriers in later years, and would cost 25 percent less than the BUR and Navy Recapitalization Plan options; (3) options to build CVN-76 in fiscal year 1995 would stop the downward trend in employment at about the minimum sustaining employment level of 10,000 employees, require building carriers sooner than they are needed for force structure purposes, and incur expenses sooner than necessary; (4) options to delay building the carrier would result in a continuing decline in employment to about 7,500 employees; (5) in the long term the employment levels in the BUR and the Navy Recapitalization Plan also fall below 10,000 employees; (6) the option to build nuclear carriers at the historical rate of one every 3 years maintains stable employment levels but costs about 40 percent more than the options in the BUR and the Navy Recapitalization Plan; (7) options for using carriers for their full service lives are less expensive than those in the BUR and Navy Recapitalization Plan, but in the near term the employment levels fall below the minimum sustaining level; (8) a transition to constructing conventionally powered carriers would save the largest amount of investment resources; and (9) criteria for comparing nuclear and conventional carriers include affordability, operational effectiveness, potential utilization, availability of homeports and shipyards for maintenance, and supporting infrastructure such as training and disposal of nuclear materials.
Since passage of the Higher Education Act of 1965, a broad array of federal student aid programs, including loan programs, have been available to help students finance the cost of postsecondary education. Currently, several types of federal student loans administered by Education make up the largest portion of student loans in the United States. Four types of federal student loans are available to borrowers and have features that make them attractive for financing higher education. For example, borrowers are not required to begin repaying most federal student loans until after graduation or when their enrollment status significantly changes. Further, interest rates on federal student loans are generally lower than other financing alternatives, and the programs offer repayment flexibilities if borrowers are unable to meet scheduled payments. As outlined in table 1, the four federal loan programs differ in that interest rates may or may not be subsidized based on the borrower’s financial need, loans may be designed to specifically serve undergraduate or graduate and professional students, and loans may serve to consolidate and extend the payment term of multiple federal student loans. Education administers federal student loans and is generally responsible for, among other duties, disbursing, reconciling, and accounting for student loans and other student aid, and tracking loan repayment. Although no other federal agencies have a direct role in administering student loans, other agencies may become involved in the event that a borrower fails to make repayment. For example, Education may coordinate with Treasury to withhold a portion of federal payments to borrowers who have not made scheduled loan repayments. Such payment withholding, known as administrative offset, can affect payments to individuals by various federal agencies. 
Offsets of income tax refunds would involve the Internal Revenue Service and offsets of Social Security retirement or disability benefits would involve the Social Security Administration. Student loans are also available from private lenders, such as banks and credit unions. Private loans differ from federal loans in that they may require repayment to begin while the student is still in school, they generally have higher interest rates, and the rates may be variable as opposed to fixed. Unlike federal student loans, private student loans may be more difficult to obtain for some potential borrowers because they may require an established credit record and the cost of the loan may depend on the borrower’s credit score. Private student loans are a relatively small part of the student loan market, accounting for 10 to 15 percent of outstanding student loan debt—about $150 billion—as of January 2012. Older Americans—that is, Americans in or approaching retirement—may hold student loans for a number of reasons. For example, because such loans may have a 10- to 25-year repayment horizon, older Americans may still be paying off student loan debt that they accrued when they were much younger. They may also have accrued student loan debt in the course of mid- or late-career re-training and education. In addition, they may be holding loans taken out for the education of their children, either through co-signing or through Parent PLUS loans. According to the 2010 SCF, households headed by older individuals are much less likely than those headed by younger individuals to hold student loan debt. As of 2010, about 3 percent of surveyed households headed by people 65 and older—representing approximately 706,000 households—reported some student loan debt. This compares to 24 percent for households headed by those under 65—representing about 22 million households. 
The decrease in the incidence of student loan debt is even more marked for households headed by the oldest individuals—only 1 percent of those aged 75 or over reported such debt. Although few older Americans have student debt, a majority of households headed by those 65 and older reported having some kind of debt, most commonly home mortgage debt, followed by credit card and vehicle debt. While the incidence of all debt types declines for households headed by those 65 and over, the incidence of student loan debt declines at a much faster rate. For example, the incidence of student loan debt for the 65-74 age group is less than half of that for the 55-64 age group—4 percent compared to 9 percent. In contrast, the incidence of any type of debt for the older age group is only about 17 percent less than the younger age group—65 percent compared to 78 percent. While relatively few older Americans have student debt, data from the SCF suggest that the size of such debt among older Americans may be comparable to that of younger age groups. Among all age groups, the median balances of student and other types of debt are dwarfed by median balances of home mortgage debt. Estimates of median student debt balances for the various age groups range from about $11,400 to about $15,500. Median mortgage debt, in contrast, ranges from about $58,000 to $136,000 among the same groups. Among households headed by those 65 and older, the estimated median student debt was about $12,000, and among those 64 and younger, about $13,000. However, given the small number of older households with student loans, it is important to note that the estimate of student debt for the 65 and older age category is a general approximation. From 2004 to 2010, an increasing percentage of households in all SCF age groups have taken on student loan debt (see fig. 1). 
During the same period, the percentage of households headed by individuals 65 to 74 who had some student loan debt increased from just under 1 percent in 2004 to about 4 percent in 2010—more than a four-fold increase. The percentages of households having student loan debt in the two youngest household age categories—those 18 to 34 and those 35 to 44—were and remain much larger. Their rate of increase in that type of debt from 2004 to 2010 was comparatively modest—about 40 percent and 80 percent, respectively. Data from Education’s NSLDS also indicates substantial growth in aggregate federal student loan balances among individuals in all age groups, especially older Americans. Aggregate federal student loan debt levels more than doubled overall, rising from slightly more than $400 billion in 2005 to more than $1 trillion in 2013 (see fig. 2). The total outstanding student debt for those 65 and older was and remains a small fraction of total outstanding federal student debt. However, debt for this age group grew at a much faster pace—from about $2.8 billion in 2005 to about $18.2 billion in 2013, more than a six-fold increase. Although the Direct PLUS Loan program offers parents of dependent undergraduate students the opportunity to borrow to finance their children’s education, data from Education suggests that most federal student loan debt held by older Americans was not incurred on behalf of dependents, but primarily for their own education. About 27 percent of loan balances held by the 50 to 64 age group was for their children, while about 73 percent was for the borrower’s own education (see fig. 3). For age groups 65 and over, the percentages of outstanding loan balances attributable to the borrowers’ own education are even higher. For those aged 65-74, 82 percent of the outstanding student loan balances was for the individual’s own education, and for the 75 and older group, this was true of 83 percent.
Because information on the age of the loans was not readily available to us, we do not know the extent to which the debt of older Americans is attributable to recently originated loans or loans originated many years ago during their prime educational years. Although older borrowers hold a small portion of federal student loans, they hold defaulted loans at a higher rate than younger borrowers. Individuals 65 or older held 1 percent of outstanding federal student loans in fiscal year 2013 (see fig. 4). However, 12 percent of federal student loans held by individuals age 25 to 49 were in default, while 27 percent of loans held by individuals 65 to 74 were in default, and more than half of loans held by individuals 75 or older were in default. According to Education data, older borrowers are in default on federal student loans for their children’s education less frequently than they are in default on federal student loans for themselves. Specifically, in fiscal year 2013, 17 percent of Parent PLUS loans held by borrowers ages 65 to 74 were in default, while 30 percent of loans for their own education were in default. Delinquent borrowers—those who have missed one or more payments—have more than a year to resume payments or negotiate revised terms before facing collection procedures. During the initial year of delinquency for Direct Loans, Education and the loan servicers make a number of attempts to help borrowers arrange for payments and avert default (see fig. 5). After the loan has been delinquent for 425 days (approximately 14 months), Education determines whether to take actions intended to recover the money it is owed. These actions can have serious financial consequences for the borrower. For example, Education may charge collection costs up to 25 percent of the interest and principal of the loan. Interest on the debt continues to accumulate during the delinquency and default period. In addition, Education may garnish wages or initiate litigation.
Education may also send the loan to a collection agency. The defaulted debt may also be reported to consumer reporting agencies, which can result in lower credit ratings for the borrower. Lower credit ratings may affect access to credit or rental property, increase interest rates on credit, affect employers’ decisions to hire, or increase insurance costs in some states. At 425 days, Education may also begin the process to send newly defaulted loans to Treasury to recover the debt by withholding a portion of federal payments—known as offset. Federal payments subject to offset include wages for federal employees, tax refunds, and certain monthly federal benefits, such as Social Security retirement and disability payments. Each year, Education prepares a list of newly defaulted loans for Treasury offset. In 2014, newly defaulted debt must have been more than 425 days delinquent before the July deadline so that it can be sent to Treasury in December. If the debt becomes 425 days delinquent after the cutoff, it would be sent the following December (2015). Thus, the defaulted debt is sent to Treasury 3 to 15 months after 425 days of delinquency—between 17 and 29 months from the last date of payment on the loan (see fig. 6). According to Education officials, loans that have not been paid off are annually recertified as being eligible for offset. After a defaulted loan is certified as eligible for offset to Treasury, certain payments, such as any available tax refunds, are offset immediately, without prior notice to the debtor. Borrowers with monthly benefits available for offset are informed by mail that their benefits will be offset in 60 days and again 30 days before the offset is taken, allowing borrowers an additional 2 months to resume payment on their loan before offset occurs. Treasury assesses a fee for each offset transaction, which is subtracted from the offset payment. 
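The referral timing described above reduces to simple month arithmetic; the sketch below (the 30.4 days-per-month conversion is an illustrative assumption, not a figure from the report) reproduces the 17- to 29-month window:

```python
# Rough arithmetic behind the offset referral window described above.
DELINQUENT_DAYS = 425                        # delinquency before Education may certify a default
delinquent_months = DELINQUENT_DAYS / 30.4   # roughly 14 months

# Certified debt is sent to Treasury 3 to 15 months after the 425-day mark,
# depending on where that mark falls relative to the annual July cutoff.
earliest = delinquent_months + 3             # about 17 months from last payment
latest = delinquent_months + 15              # about 29 months from last payment
```

A loan reaching 425 days of delinquency just after the July cutoff waits the longest, since it is not referred until the December of the following year.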
Other federal agencies may charge additional fees for each transaction depending on the type of payment being offset. For fiscal year 2014, Treasury’s fee was $15 per offset and other agency fees were up to $27. Federal tax refunds are the source for more than 90 percent of offset collection for federal student loan debt. Offsets from Social Security benefits represented roughly $150 million in 2013 or less than 7 percent of the more than $2.2 billion in federal payments offset by Treasury. The number of borrowers, especially older borrowers, who have experienced offsets to Social Security retirement, survivor, or disability benefits to repay defaulted federal student loans has increased over time. In 2002, the first full year during which Social Security benefits were offset by Treasury, about 31,000 borrowers were affected. Of those borrowers, about 19 percent (6,000) were 65 or older. From 2002 through 2013, the number of borrowers whose Social Security benefits were offset has increased roughly 400 percent, and the number of borrowers 65 and over increased roughly 500 percent (see fig. 7). In 2013, Social Security benefits for about 155,000 people were offset and about 36,000 of those were 65 and over. The majority of Social Security benefit offsets for federal student loan debt are from disability benefits rather than retirement or survivor benefits. In 2013, 70.6 percent of defaulted borrowers (105,000) whose Social Security benefits were offset received disability benefits (see fig. 8). That year, about $97 million was collected through offset from disability benefits. For borrowers 65 and over, the majority of Social Security offsets are from retirement and survivor benefits because Social Security disability benefits automatically convert to retirement benefits at the beneficiary’s full retirement age, currently 66. 
About 33,000 borrowers age 65 and over had Social Security retirement or survivor benefits offset in 2013 to repay defaulted federal student loans. The amount of money collected from Social Security benefit offsets to repay defaulted federal student loans has also increased, but the average amount offset on a monthly basis per borrower has remained relatively stable. Treasury collected about $24 million in offsets from Social Security benefits in 2002, about $108 million in 2012, and about $150 million in 2013. However, over this period, the average amount offset on a monthly basis per borrower rose only slightly, from around $120 in the early 2000s to a little over $130 in 2013. Although there are statutory limits under the Debt Collection Improvement Act of 1996 (DCIA) on the amount that Treasury can offset from monthly federal benefits, the current limits may result in monthly benefits below the poverty threshold for certain defaulted borrowers. Social Security benefits are designed to replace, in part, the income lost due to retirement, disability, or death of the worker. The DCIA set a level of $750 per month below which monthly benefits cannot be offset. In 1998, the amount of allowable offset was effectively modified under regulations to the lesser of 15 percent of the total benefit or the amount by which the benefit exceeds $750 per month, thus creating a standard more favorable to defaulted borrowers. For example, a borrower with a Social Security benefit of $1,000 per month would have an offset of $150, because that is the lesser of 15 percent of the benefit—$150—and the amount of the benefit over $750, which is $250. This offset would leave the borrower with a monthly benefit of $850, which is below the poverty threshold for 2013. The statutory limit of $750 for an offset was above the poverty threshold when it was set in 1998.
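The lesser-of rule and the $1,000 example above can be expressed as a short calculation. The sketch below is illustrative only—the function name and parameter defaults are ours, not Treasury's—and simply applies the rule as described in the 1998 regulations:

```python
def monthly_offset(benefit, floor=750.0, rate=0.15):
    """Offset the lesser of 15 percent of the monthly benefit and the
    amount by which the benefit exceeds the $750 floor; benefits at or
    below the floor are not offset at all."""
    return max(0.0, min(rate * benefit, benefit - floor))

# The example from the text: a $1,000 monthly benefit.
offset = monthly_offset(1000.0)   # lesser of $150 (15%) and $250 (excess) -> $150
remaining = 1000.0 - offset       # $850, below the 2013 poverty threshold
```

Note that for benefits between $750 and about $882, the excess-over-floor term is the smaller of the two, so those borrowers are offset by less than 15 percent.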
The offset limits have not changed since 1998, and the $750 limit represented about 81 percent of the poverty threshold for a single adult 65 and over in 2013. If the $750 limit had been indexed to the changes in the poverty threshold since 1998, in 2013 it would have increased by 43 percent or to about $1,073 (see fig. 9). Borrowers with benefits below this amount would not have been offset. Indexing monthly benefit offset limits to the poverty threshold can prevent some older borrowers from having offsets, but would also reduce Education’s recoveries from Social Security offsets. If the offset limit had been indexed to match the rate of increase in the poverty threshold, in 2013, 68 percent of all borrowers whose Social Security benefits were offset for federal student loan debt would have kept their entire benefit, including 61 percent of borrowers 65 and older. An additional 15 percent of all borrowers and borrowers age 65 and older would have kept more of their benefits in that year. However, indexing the offset limit would have reduced the amount collected from Social Security benefits by approximately 60 percent or $94 million in 2013, representing about 4.2 percent of all dollars offset from all sources by Treasury for student loan debt in that year. In conclusion, student loan debt and default are problems for a small percentage of older Americans. As the amount of student loan debt held by Americans age 65 and older increases, the prospect of default implies greater financial risk for those at or near retirement—especially for those dependent on Social Security. Most of the federal student loan debt held by older Americans was obtained for their own education, suggesting that it may have been held for an extended period, accumulating interest over time. The Social Security retirement or survivor benefits of about 33,000 Americans age 65 and older were reduced through offset to meet defaulted federal student loan obligations in 2013. 
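The indexing arithmetic discussed above can be checked directly. In the sketch below, the $750 floor and the 43 percent growth in the poverty threshold come from the testimony; the variable names and the reuse of the lesser-of offset rule are our own illustrative framing:

```python
FLOOR_1998 = 750.0      # offset floor, unchanged since 1998
POVERTY_GROWTH = 0.43   # growth in the poverty threshold, 1998-2013 (per the testimony)

# Indexing the floor to the poverty threshold yields about $1,072.50.
indexed_floor = FLOOR_1998 * (1 + POVERTY_GROWTH)

def monthly_offset(benefit, floor):
    # Lesser of 15 percent of the benefit and the amount above the floor.
    return max(0.0, min(0.15 * benefit, benefit - floor))

# Under the indexed floor, the $1,000 benefit from the earlier example
# would be fully protected rather than offset by $150.
current = monthly_offset(1000.0, FLOOR_1998)     # 150.0
indexed = monthly_offset(1000.0, indexed_floor)  # 0.0
```

This illustrates the trade-off described in the text: raising the floor protects more borrowers entirely, but each fully protected borrower is a payment Education no longer recovers.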
Because the statutory limit at which monthly benefits can be offset has not been updated since it was enacted in 1998, certain defaulted borrowers with offsets are left with Social Security benefits below the poverty threshold. As the baby boomers continue to move into retirement, the number of older Americans with defaulted loans will only continue to increase. This creates the potential for an unpleasant surprise for some, as their benefits are offset and they face the possibility of a less secure retirement. Chairman Nelson, Ranking Member Collins, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are listed in appendix II. To understand the extent to which older Americans have outstanding student loans and how this debt compares to other types of debt, we relied primarily on data from the Federal Reserve Board’s Survey of Consumer Finances (SCF), a survey that is conducted once every 3 years and gathers detailed information on the finances of U.S. families. SCF data is publicly available and was extracted from the Federal Reserve Board’s website. Specifically, we analyzed data from the 2004, 2007, and 2010 SCF to provide a range of information, including an overview of the percentage of families, by age of head of household, with student debt over time. An important limitation of the data is that debt, including student loans, is reported at the household level. As a result, the SCF survey responses represent the debt of the entire household, not just the head of household. 
Therefore, it is possible that for some households headed by older Americans, the reported student debt is actually held by children or other dependents who are still members of the household, rather than the older head of household. The National Student Loan Data System (NSLDS) is a comprehensive national database maintained by the Department of Education that is used to readily access student aid data and track money appropriated as aid for postsecondary students. The database includes data on the various federal student loan programs. The NSLDS data we obtained allows us to count federal student loans and loan balances, but not the number of borrowers. Although Education maintains borrower-level data, we were only able to obtain aggregated data by loan type during the course of our analyses. These summary tables reported that about 1,000 of the more than 6 million Parent PLUS loans outstanding in fiscal year 2013 were to borrowers under the age of 25. According to Education, these cases resulted from a reporting issue where the date of birth of the Parent PLUS borrower was reported as being the same as that of the student. We excluded these Parent PLUS loans from our analysis. To understand the extent to which older Americans defaulted on federal student loans and the possible consequences of such a default, we relied on a number of data sources and agency documents related to federal student loans. To determine the extent to which older Americans have defaulted on federal student loans, we used data from the NSLDS summary tables we received from Education. To evaluate the consequences of default, we reviewed federal law, regulations, and agency documents describing the collection process for defaulted federal student loans, including offset of federal benefit payments through the Treasury Offset Program (TOP).
We interviewed officials at Education involved in managing defaulted federal student loans, and we interviewed officials at Treasury, Education, and the Social Security Administration about the process for offsetting Social Security retirement, survivor, and disability benefits through the TOP. In addition, we interviewed Education officials and reviewed relevant documentation regarding Education’s debt collection policies and procedures; however, we did not audit their compliance with statutory requirements related to these activities. To describe the extent of Treasury offset of Social Security Administration benefits for federal student loan debt, we used data on offset payments from the TOP for fiscal years 2001 through 2014. We assessed the reliability of this data by reviewing data documentation, conducting electronic testing on the data, and interviewing Treasury staff about the reliability of this data. Because the TOP data does not include the age of borrowers or the type of Social Security benefits that were offset, we obtained such information for relevant borrowers from the Social Security Administration’s Master Beneficiary Record using a match on Social Security numbers. We assessed the reliability of the data by reviewing data documentation, obtaining the computer code used to match borrowers to the Master Beneficiary Record, and interviewing the staff at the Social Security Administration who conducted the match. We determined that the data elements we used were sufficiently reliable for the purposes of this testimony. For about 0.25 percent of borrowers, we were unable to determine the borrower’s age, and we excluded these borrowers from age-based analyses. For about 4.3 percent of offset payments, we were unable to determine the type of benefit, and we excluded these payments from the analysis of the type of benefit that was offset. 
To evaluate the extent to which Social Security benefits would have been offset if the $750 limit below which benefits are not offset had been adjusted for changes in the poverty threshold, we analyzed TOP data to impute the amount of a monthly Social Security benefit payment from the size of the offset that was taken from that payment. We then applied a modified set of rules for calculating an offset amount to the imputed benefit, changing the $750 limit to $1,072.50—the adjusted amount for the limit had it been indexed to the poverty threshold—to estimate, for 2013, whether the monthly benefit payment would have been offset had the offset limit increased at the rate of the poverty threshold. In addition to the contact named above, Michael Collins (Assistant Director), Michael Hartnett, Margaret Weber, Christopher Zbrozek, and Lacy Vong made key contributions to this testimony. In addition, key support was provided by Ben Bolitzer, Ying Long, John Mingus, Mimi Nguyen, Kathleen van Gelder, Walter Vance, and Craig Winslow. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Recent studies have indicated that many Americans may be approaching their retirement years with increasing levels of various kinds of debt. Such debt can reduce net worth and income, thereby diminishing overall retirement financial security. Student loan debt held by older Americans can be especially daunting because unlike other types of debt, it generally cannot be discharged in bankruptcy. GAO was asked to examine the extent of student loan debt held by older Americans and the implications of default.
This testimony provides information on: (1) the extent to which older Americans have outstanding student loans and how this debt compares to other types of debt, and (2) the extent to which older Americans have defaulted on federal student loans and the possible consequences of default. To address these issues, GAO obtained and analyzed relevant data from the Federal Reserve Board's Survey of Consumer Finances as well as data from the Department of the Treasury, the Social Security Administration, and the Department of Education. GAO also reviewed key agency documents and interviewed knowledgeable staff. Comparatively few households headed by older Americans carry student debt compared to other types of debt, such as for mortgages and credit cards. GAO's analysis of the data from the Survey of Consumer Finances reveals that about 3 percent of households headed by those aged 65 or older—about 706,000 households—carry student loan debt. This compares to about 24 percent of households headed by those aged 64 or younger—22 million households. Compared to student loan debt, those 65 and older are much more likely to carry other types of debt. For example, about 29 percent carry home mortgage debt and 27 percent carry credit card debt. Still, student debt among older American households has grown in recent years. The percentage of households headed by those aged 65 to 74 having student debt grew from about 1 percent in 2004 to about 4 percent in 2010. While those 65 and older account for a small fraction of the total amount of outstanding federal student debt, the outstanding federal student debt for this age group grew from about $2.8 billion in 2005 to about $18.2 billion in 2013. Available data indicate that borrowers 65 and older hold defaulted federal student loans at a much higher rate, which can leave some retirees with income below the poverty threshold. 
Although federal student loans can remain unpaid for more than a year before the Department of Education takes aggressive action to recover the funds, once initiated, the actions can have serious consequences. For example, a portion of the borrower's Social Security disability, retirement, or survivor benefits can be claimed to pay off the loan. From 2002 through 2013, the number of individuals whose Social Security benefits were offset to pay student loan debt increased about five-fold from about 31,000 to 155,000. Among those 65 and older, the number of individuals whose benefits were offset grew from about 6,000 to about 36,000 over the same period, roughly a 500 percent increase. In 1998, additional limits on the amount that monthly benefits can be offset were implemented, but since that time the value of the amount protected and retained by the borrower has fallen below the poverty threshold. GAO is not making recommendations. GAO received technical comments on a draft of this testimony from the Department of Education, the Department of the Treasury, and the Federal Reserve System. GAO incorporated these comments into the testimony as appropriate.
FAA’s primary mission is to provide the safest, most efficient aerospace system in the world. FAA oversees operating and maintaining this system, known as the National Airspace System (NAS), as well as the safety of aircraft and operators. FAA operates and maintains the NAS through the following: a workforce of technicians, air traffic controllers, and other staff who work in airport towers, terminal areas, en-route centers, oceanic air traffic control centers, and other facilities, and the air traffic control (ATC) and other supporting systems and infrastructure, including ground-based surveillance radar facilities, communication equipment, automation systems, and the facilities that house and support these systems. Various offices within FAA are responsible for the air traffic control system and its modernization through the Next Generation Air Transportation System (NextGen) initiative. The Air Traffic Organization (ATO), headed by the Chief Operating Officer (COO), is responsible for the day-to-day operations and maintenance of the air traffic control system. The NextGen Office, ATO, and Office of Aviation Safety are involved with various aspects of NextGen’s management and implementation. The Office of Airports is responsible for all programs related to airport safety and inspections, standards for airport design, construction, and operation. In this role, the Office of Airports supports the implementation of NextGen. These offices report to the Deputy Administrator, who also has the designation Chief NextGen Officer (see fig. 1). FAA receives funds annually through congressional appropriations into four accounts: The operations account funds, among other things, the operation and maintenance of the air traffic control system. The facilities and equipment account funds technological improvements to the air traffic control system, including NextGen. The research, engineering, and development account funds research on issues related to aviation safety and NextGen systems. The Airport Improvement Program account provides grants for airport planning and development.
See figure 2 for percentage of fiscal year 2013 congressional appropriations by account. Congress appropriates funding from the Airport and Airway Trust Fund, which receives revenues from a series of excise taxes paid by users of the national airspace system, as well as from general revenues. The Trust Fund provides nearly all of the funding for FAA’s capital investments in the airport and airway system. Revenue sources for the trust fund include passenger ticket taxes, segment taxes, air cargo taxes, and taxes paid by both commercial and general aviation aircraft. The trust fund also provides a substantial portion of funding for operations—for example 80 percent of FAA’s $15.9-billion funding in fiscal year 2014. The remaining amount was appropriated from general revenues. Whereas FAA operates, maintains, and regulates the air traffic control system in the United States, in countries such as the United Kingdom, Germany, and Canada, their air navigation service providers (ANSP) are commercialized and handle the day-to-day operations of the air traffic control systems, while the governments regulate these activities. These ANSPs employ the workforce, maintain the infrastructure, and undertake modernization efforts. International ANSPs vary in the extent of government ownership and commercialization, with some as state-owned corporations, some as public-private partnerships, and some as private corporations. According to two recent international analyses comparing ANSPs from different countries on a range of performance measures including productivity, efficiency, and cost-effectiveness, FAA operates one of the most efficient ATC systems. According to a 2012 comparison of air traffic management performance between FAA and the combined 37 ANSPs of Europe, the United States had a similar arrival punctuality rate with Europe for a similar amount of continental airspace. 
Another international comparison, completed in 2013, of performance data from FAA and 22 global ANSPs showed similar results, with FAA ranking second in productivity. However, it is difficult to compare performance, as air spaces are different. For example, FAA’s ATC system controls about 60 percent more flights than Europe, its airspace is nearly twice as dense as that of the European ANSPs, and it has 23 percent fewer air traffic controllers. In addition, Europe has to coordinate among 37 ANSPs, while the United States has one. Although FAA is recognized for safety and relative efficiency, its attempts to modernize the ATC system have been less successful. We have chronicled the difficulties FAA has faced completing what it envisioned initially in 1981 as a 10-year program to upgrade and replace NAS facilities and equipment. For example, in August 1995, we found substantial cost and schedule overruns. To address these difficulties, in the past, Congress gave FAA acquisition and human capital flexibilities to improve the agency’s management of the modernization program. Specifically, in 1995, Congress directed FAA to implement new acquisition and personnel management systems and exempted the agency from certain federal acquisition and personnel laws and rules. In June 2005, we found that FAA had largely implemented these flexibilities. However, modernization difficulties persisted, and Congress directed FAA in 2003 to conceptualize and plan NextGen. NextGen was envisioned at that time as a major redesign of the air transportation system to increase efficiency, enhance safety, and reduce flight delays. NextGen is planned to incorporate precision satellite navigation and surveillance; digital, networked communications; an integrated weather system; and more. This complex undertaking requires acquiring new integrated air traffic control systems; developing new flight procedures, standards, and regulations; and creating and maintaining new supporting infrastructure. 
This transformation is designed to dramatically change the roles and responsibilities of both air traffic controllers and pilots and change the way they interface with their systems. The involvement of airlines and other aviation stakeholders is also critical, since full implementation of NextGen will necessitate airlines and others to invest in new avionics and other technologies to take advantage of NextGen technologies. See figure 3 for the expected benefits from NextGen implementation as depicted through improvements to the phases of flight. In addition, to address stakeholder and congressional concerns over NextGen management practices and the pace of modernization efforts over the last decade, FAA has reorganized several times. These changes included: In 2003, FAA hired a COO and in 2004 created the ATO to transform the air traffic control system into a more performance- based organization and improve the modernization effort. In 2011, FAA moved the office responsible for coordinating NextGen activities—the NextGen Office—out of the ATO and made it report directly to the Deputy Administrator to increase NextGen’s visibility within and outside of the agency and create a direct line of authority for NextGen. In 2012, FAA created the Program Management Office (PMO), within the ATO, to improve the oversight of ATO’s acquisition and implementation efforts, including those for NextGen. At the direction of the FAA Modernization and Reform Act of 2012, FAA created the Chief NextGen Officer position, currently held by the Deputy FAA Administrator, who reports directly to the FAA Administrator. However, challenges continue to persist, as we found in April 2013, August 2013, and February 2014. 
Specifically, we found that while FAA had made some progress in implementing the NextGen modernization program, FAA continued to experience challenges, including in the following areas: Human capital activities: Improving and sustaining NextGen leadership and preparing FAA’s workforce. Program management: Prioritizing projects to achieve some near-and mid-term benefits and managing NextGen interdependencies. Coordination with industry stakeholders: Gaining greater involvement from industry stakeholders in FAA’s initiatives and equipping aircraft with NextGen technologies. Transitioning to NextGen: Balancing the needs of the current ATC system and NextGen and consolidating and realigning FAA’s facilities. In these reports, we made six recommendations to FAA regarding the improvement of budget planning, performance-based navigation implementation, and stakeholder coordination and communication. DOT concurred with these recommendations, but as of August 2014, had not yet implemented them. Stakeholders’ views on FAA’s capability to operate an efficient ATC system generally align with the two international analyses described previously. Almost three-quarters (53) of the 72 stakeholders who provided a rating rated FAA as moderately to very able to operate an efficient ATC system. Four stakeholders did not rate FAA on this issue. (See table 1 for the stakeholders’ ratings.) In addition, during our interviews, over three times as many of the stakeholders specifically mentioned that the ATC system is generally efficient (37) than those who said the system is not (12). Fourteen stakeholders specifically said that FAA operates the most efficient system in the world. Notwithstanding this generally positive assessment, stakeholders raised areas where FAA could improve. For example, 29 stakeholders indicated that FAA does not handle irregular air traffic operations very well, such as those caused by inclement weather. 
Stakeholders’ views regarding NextGen implementation also reflect our past findings on FAA’s difficulties in implementing the initiative. Eighty percent (56) of the 70 stakeholders who provided a rating rated FAA as marginally to moderately able to implement NextGen. Six stakeholders did not rate FAA. (See table 1.) In addition, during our interviews, more than three times as many of the stakeholders (43) said that FAA’s overall implementation of NextGen was not going well than those who said it was going well (13), and 30 specifically mentioned that FAA was not doing well managing technology programs in general and NextGen acquisitions and contracts in particular. In our interviews with FAA senior management, officials acknowledged that stakeholders’ complaints about NextGen were not new. They also said that the agency is taking steps to improve implementation, that NextGen is now on track and that the agency is starting to focus more on using these technologies to improve flight efficiencies and reduce flight time and fuel use, steps and a focus that should result in stakeholders realizing tangible benefits in the future. Almost all (75) of the 76 stakeholders identified challenges that they stated FAA faces in improving ATC operations and overcoming difficulties in implementing NextGen. (See app. IV for a list and description of the challenges for FAA that stakeholders identified during our interviews.) The six challenges stakeholders noted most often are discussed below. These challenges are long-standing, as we have issued reports on them as far back as the 1980s, and more recently in the past few years. Automatic Dependent Surveillance- Broadcast (ADS-B) ADS-B, a key NextGen program, is a technology that enables aircraft to continually broadcast flight data—such as position, air speed, and altitude, among other types of information—to air traffic controllers and other aircraft. 
ADS-B Out is the ability of an aircraft to broadcast ADS-B signals; ADS-B In is the ability to receive ADS-B signals from the ground and other aircraft, process those signals, and display traffic and weather information to flight crews. The Federal Aviation Administration required that airplanes be equipped with ADS-B Out by January 1, 2020. On the other hand, aircraft operators are not required to install ADS-B In, but may choose to do so, as is the case for most NextGen equipment. Consistent with what we have found in the past, stakeholders and FAA officials told us that ensuring that aircraft are equipped with avionics to take advantage of NextGen technologies is a challenge. Full implementation of NextGen will necessitate that system users make significant investment in new technologies. FAA estimated in 2013 that, of the estimated $18.1-billion overall implementation cost that is to be shared between airlines and FAA, airlines would need to invest $6.6 billion on avionics to realize the full potential benefits from NextGen capabilities. Forty-six of the stakeholders we interviewed raised this issue as a challenge for FAA, such as in convincing users to equip their aircraft with avionics to take advantage of NextGen technologies. Stakeholders explained that users have been reluctant to equip their aircraft due to the expense and uncertainty over FAA’s ability to meet timelines for deploying NextGen technologies. In April 2013, we found that airlines and other stakeholders had expressed skepticism about the progress FAA had made to date in implementing NextGen technologies, skepticism that, in turn, had affected their confidence about whether benefits would justify these investments. While some stakeholders agreed that equipping aircraft is necessary for successful and continuous modernization, they differed in who bore responsibility for paying for equipage—users or FAA.
In August 2013, we noted that the 2012 FAA Modernization and Reform Act required FAA to report on options to encourage equipping aircraft with NextGen technologies and the costs and benefits of each option. FAA officials we interviewed said that they have completed the installation of the ground infrastructure for Automatic Dependent Surveillance-Broadcast (ADS-B) Out and that aviation system users, in turn, must equip their aircraft with ADS-B Out avionics by the FAA’s 2020 equipage deadline. Both the aviation stakeholders and FAA officials we interviewed regard budget uncertainty as a challenge for FAA. Forty-three stakeholders raised budget uncertainty as a difficulty for FAA’s ability to continue operation of an efficient ATC system and/or implementation of NextGen. One factor stakeholders raised as contributing to budget uncertainties is the annual appropriations process. In all but 3 of the last 30 years, Congress has passed “continuing resolutions” to provide funding for agencies to continue operating until agreement is reached on final appropriations. Further, according to the House Transportation and Infrastructure Committee, prior to the FAA Modernization and Reform Act of 2012, FAA had operated under 22 extensions that provided short-term funding for the agency since the expiration of the 2007 Aviation Authorization legislation. According to some stakeholders, the stops and starts associated with continuing resolutions make it difficult for FAA to carry out long-term planning and strategic development of future technologies and innovation. We found in September 2009 and March 2013 that continuing resolutions can create budget uncertainty for agencies about both when they will receive their final appropriation and what level of funding will ultimately be available.
We further found that operating under continuing resolutions can also complicate agency operations and cause inefficiencies, such as leading to repetitive work, limiting agencies’ decision-making options, and making trade-offs more difficult. On the other hand, attempting to mitigate the effects of an unpredictable funding stream is not a new challenge for FAA, or for many other federal agencies that have had to operate in times of an uncertain fiscal environment. Stakeholders also indicated that the current budgetary conditions—the fiscal year 2013 budget sequestration (the across-the-board cancellation of budgetary resources) along with the associated employee furloughs and the October 2013 government shutdown—have made FAA’s funding less predictable. In turn, this can make it difficult for FAA to run a 24/7 operation and maintain the ATC system as part of the transition to NextGen. In March 2014, we detailed the effects of the fiscal year 2013 budget sequestration on federal agencies, including FAA, such as reducing or delaying some public services and disrupting some operations. We found that the DOT took actions to minimize the effects of sequestration on FAA operations by beginning to plan for it during the summer of 2012, focusing on ensuring the safety of the traveling public, according to DOT officials. DOT halted these actions when it was provided with statutory authority to make a one-time transfer of $253 million between budget accounts to address these issues. As a result of this transfer, FAA minimized the number of planned furlough days and restored ATC services and other aviation activities; however, these efforts did not prevent delays from occurring in major metropolitan areas— including New York, Chicago, and Southern California—according to FAA, because fewer controllers were available to manage air traffic. 
FAA senior management generally agreed with the stakeholders’ perspective that unpredictable budgets make planning and managing the ATC system and NextGen programs difficult and result in delays and inefficiencies. The senior managers did not offer specific solutions; however, they indicated that if FAA received more funding that was available across fiscal years, rather than just for one fiscal year at a time, and had a greater ability to move funds between accounts, FAA would be able to improve its operations and NextGen implementation. Consistent with what we have found in the past, the stakeholders and FAA senior management agree that improving human capital activities is a challenge for FAA. Forty-two stakeholders identified human capital activities as a challenge for FAA in improving the efficiency of the ATC system and/or implementing NextGen. Among the human capital challenges the stakeholders identified were matching workforce skills with FAA needs for hiring and staffing, insufficient training, and planning for upcoming retirements. FAA senior management also raised human capital challenges during our discussions with them. For example, one senior official acknowledged that providing required training is an element of delivering the full capability of NextGen and is a challenge but that FAA was working to address this challenge. We have also reported on FAA’s workforce training and staffing issues in the past. For example, in August 2013 we found that FAA had been working to address long-standing challenges associated with involving its air traffic controller and technician workforce in developing and implementing NextGen systems, steps that are critical to the successful implementation of NextGen.
In addition, we found that during the NextGen transition, FAA would need a sufficient number of skilled controllers who are able to increasingly rely on automation, technicians who are able to properly maintain and certify both existing and NextGen systems, and a sufficient acquisitions workforce to successfully acquire NextGen systems and equipment. Stakeholders identified challenges in implementing new navigation procedures, and we have found similar challenges in previous work. A large percentage of the current U.S. air carrier fleet is equipped to fly using Performance Based Navigation (PBN) procedures, which are precise routes that use the Global Positioning System or glide descent paths (see fig. 4). While 21 stakeholders said the development and implementation of PBN-related procedures was improving or working well, almost twice as many, or 41, of the stakeholders said that this process was not working well or moving too slowly. Even stakeholders who cited successes, such as the Greener Skies Over Seattle initiative—a satellite-based navigation arrival procedure intended to save aviation system users more than two million gallons of fuel a year and significantly reduce aircraft exhaust and emissions—pointed to other areas where implementation is taking too long. In April 2013, we found that FAA continues to face challenges in implementing PBN procedures and in explaining to stakeholders the benefits that accrue from their use. Specifically, FAA is not fully leveraging its ability to streamline the development of PBN procedures and the use of third parties to develop, test, and maintain these flight procedures. Senior FAA officials emphasized that their Optimization of Airspace and Procedures in the Metroplex (OAPM) initiative is yielding good results and pointed to the successful use of PBN procedures not only in Seattle but also in the areas around Houston, North Texas, Washington, D.C., and Denver.
Officials also said that implementing PBN is one of their top priorities and is part of an effort to deliver near-term benefits and capabilities to system users by 2016. Officials explained that they are working on PBN, through several metroplex-based initiatives, and all parts of the country will not see PBN benefits at the same time. Consistent with what we have found in previous work, stakeholders told us that FAA needs to deliver benefits of NextGen in the near term. To convince aviation system users to make investments in NextGen equipment, FAA must continue to deliver systems, procedures, and capabilities that demonstrate near-term benefits and returns on users’ investments. Forty stakeholders identified as a challenge FAA’s inability to articulate to the industry what NextGen is and what near-term benefits NextGen is going to provide to users. Similarly, in April 2013, we noted the need for FAA to demonstrate to stakeholders NextGen benefits over the next few years. For example, we found that FAA had made some progress in key operational improvement areas, such as upgrading airborne traffic management to enhance the flow of aircraft in congested airspace, revising standards to enhance airport capacity, and focusing FAA’s PBN efforts at priority OAPM sites with airport operations that have a large effect on the overall efficiency of the NAS. However, we also found that in pursuing these near-term benefits, FAA had to make trade-offs in selecting sites and did not fully integrate implementation of its operational improvement efforts at airports. We concluded that because of the interdependency of the improvements, their limited integration could also limit benefits in the near term.
Accordingly, we recommended, among other things, that FAA should proactively identify new PBN procedures for the NAS, based on NextGen goals and targets, and evaluate external requests so that FAA can select appropriate solutions and implement guidelines for ensuring timely inclusion of operational improvements at metroplexes such as OAPMs. DOT concurred with these recommendations and is working to address them. FAA senior managers said they were aware of stakeholders’ desire for near-term benefits and told us that they either have taken or plan to take the following steps to address stakeholders’ concerns. FAA plans to emphasize “high priorities” for users based on recommendations of two FAA advisory committees—the NextGen Advisory Committee (NAC) and RTCA (formerly known as the Radio Technical Commission for Aeronautics). The high priorities are new multiple runway operational procedures at 7 airports by fiscal year 2015, PBN procedures at 9 metroplexes and an additional 2 metroplexes by October 2014, surface surveillance at 44 airports by fiscal year 2017, and data communications to provide tower clearance delivery at 57 airports by fiscal year 2016. FAA has identified seven NextGen and NextGen-related programs that will be able to deliver near-term benefits and capabilities by 2016, with no additional requirements for users to equip their aircraft until the January 1, 2020, FAA-required deadline for aircraft to be equipped with ADS-B Out technology. The FAA Administrator has begun holding quarterly briefings on NextGen progress and benefits with airline chief executive officers (CEOs); however, senior management noted that the diverse range of interests within the industry, and even between CEOs and operations staff within the same company, can make the communication of NextGen progress and benefits challenging.
According to FAA’s Assistant Administrator for NextGen, in October 2014 FAA will release a road map outlining the official timeline of the implementation of its NextGen modernization project that will guide FAA through 2025. Stakeholders and FAA officials agree that a challenge for FAA is to maintain the ATC infrastructure through the transition to NextGen while also consolidating or closing aging facilities. Because NextGen represents a transition from existing ATC systems and facilities to new systems, it necessitates changes to or consolidation of existing facilities. Thirty-seven of 76 stakeholders mentioned that consolidating or closing older air traffic control facilities and the need to maintain older “legacy” systems was a challenge. Stakeholders noted congressional interest in preserving ATC facilities and the associated jobs in their districts as a factor making it more difficult for FAA to close facilities. FAA officials acknowledged that reducing the “footprint” of the air traffic control infrastructure has been difficult but added that they are working on their first set of facility consolidation recommendations, as required by law, and will have those recommendations ready by the end of 2014. In August 2013, we found that if aging systems and associated facilities were not retired, FAA would miss potential opportunities to reduce its overall maintenance costs at a time when resources needed to maintain both systems and facilities may become scarcer, and we recommended that FAA develop a strategy for implementing its Air Traffic Organization’s (ATO) plans. FAA concurred with this recommendation and is working to develop such a strategy by September 2014. An example of a facility FAA plans to close—a very high frequency omnidirectional radio range (VOR) station—is shown in figure 5.
Overall, while stakeholders generally thought the current ATC system was operating at least moderately efficiently under FAA’s leadership, when asked what potential changes, if any, to FAA could improve the performance of ATC operations and NextGen implementation, 64 of the 76 stakeholders we interviewed suggested changes. The six most often suggested changes are discussed below. (See app. V for a list and examples of the changes to FAA that the stakeholders suggested during our interviews with them.) Some of these changes address the six previously mentioned challenges raised by stakeholders, while the rest address other challenges stakeholders identified. Change How FAA Is Funded. The change suggested by the most stakeholders (36 of the 64 stakeholders who suggested a change) was to modify how FAA’s ATC operations and NextGen programs are funded. As discussed earlier, budget uncertainty was raised by stakeholders as a challenge for FAA’s ATC operations, NextGen modernization, or both. While 36 stakeholders said a change to the funding process or source of funding was needed, most focused on the outcome they would like to see, namely a more stable or predictable funding stream. Fewer stakeholders (11) offered specific suggestions on how to achieve this outcome. For example, one stakeholder suggested providing FAA with a top-line budget number and then allowing FAA to determine how to allocate resources based on its priorities. FAA officials suggested changes to the agency’s funding mechanism that could improve FAA’s ability to operate the ATC system and implement NextGen, including allowing FAA the flexibility to use funds for their highest priority areas, increasing the fees for registering aircraft, and authorizing FAA to use multi-year funds. Improve Human Capital Activities. Twenty-four of the 64 stakeholders who suggested a change suggested human capital improvements.
Stakeholder suggestions included updating the air traffic controller’s handbook, improving the training air traffic controllers receive on new technologies, and streamlining the hiring process. For example, stakeholders said changes were needed to streamline FAA’s air traffic controller-training programs and to ensure the best applicants are hired, especially as many current controllers begin to retire. In June 2008, we reported on FAA’s efforts to hire and train new controllers, in light of the expected departure, mostly due to retirements, of much of the current air traffic controller workforce of over 15,000 controllers between 2008 and 2017. We also found FAA needed to ensure that technician- and controller-training programs were designed to prepare FAA’s workforce to use NextGen technologies. Regarding updating the air traffic controller’s handbook, a senior FAA official said that stakeholders do not appreciate what changing the handbook involves, such as running safety scenarios and testing new procedures to ensure any changes do not adversely affect safety. More broadly, another senior FAA official said that shifting to NextGen would require a cultural change in how air traffic controllers are trained to respond to traffic. Improve Internal Collaboration. Twenty-four of the 64 stakeholders who suggested a change suggested FAA needs to improve internal collaboration within the organization. Stakeholders said different offices within FAA do not communicate well with one another and that this situation has resulted in difficulties and delays in the rollout of NextGen technologies and procedures. Stakeholder suggestions included improving how FAA’s lines of business work together to implement NextGen. In August 2013, we found that FAA is making progress in ensuring communication on NextGen issues across lines of business, for example, through the NextGen Management Board and biweekly program review meetings.
In the same report, we also discussed how designating one leader, as FAA did by giving the Deputy Administrator responsibility over NextGen, can improve interagency collaboration and speed decision-making. While external stakeholders raised internal collaboration as an area in need of improvement, FAA senior management said that there are good working relationships between the lines of business responsible for ATC operations and NextGen implementation, especially between the Assistant Administrator of NextGen, the COO of the ATO, and the Associate Administrator of Aviation Safety. Streamline Processes. Twenty-three of the 64 stakeholders who suggested a change suggested that FAA needs to streamline some of its processes. Stakeholder suggestions included streamlining the development and implementation of flight navigation procedures, the certification of new aircraft equipment, and the acquisition of new technology. For example, to streamline its process for certifying new technology, one stakeholder said that FAA should use an approach that recognizes that once a type of equipment, such as an antenna, is found to be safe, every piece of that equipment produced does not have to be personally inspected by FAA. FAA officials said that they are making progress streamlining both the certification of new technology and development of new procedures; however, FAA must ensure that new procedures and technology are evaluated for potential safety and environmental concerns and that community outreach occurs. In April 2013, we found that FAA’s processes and requirements, while keeping the U.S. airspace safe, are also complex and lengthy. This includes the processes for developing PBN and other new flight navigation procedures.
In the April 2013 report, we also found that FAA had efforts under way to address some of these issues, such as the Navigation Lean (NAV Lean) initiative, which is focused on streamlining the implementation and amendment processes for all flight procedures, but it will be several years before the impact is known. In June 2014, the Department of Transportation Inspector General’s office found that aviation stakeholders are unlikely to see the full benefits of the NAV Lean initiative, namely a reduction in the time it takes to implement new procedures, until September 2015 or later. In October 2010 and October 2013, we found inefficiencies in the certification and approvals process and variations in FAA’s interpretation of certification standards, and recommended improvements FAA could make to evaluate and track certification and approval processes. In October 2013, we also found that while FAA had developed milestones and deployed a tracking system to monitor each certification-related initiative, FAA had not identified overall performance metrics for these efforts to determine whether they would achieve their intended effects. Ultimately, we concluded that having efficient and consistent certification processes will allow FAA to better use its resources as its workload increases with the implementation of NextGen. Improve Coordination with Industry Stakeholders. Stakeholders acknowledged the improvements FAA has made in involving stakeholders in the planning and implementation of NextGen initiatives, especially through the NextGen Advisory Committee. However, 23 of the 64 stakeholders who suggested a change suggested FAA should do more to encourage participation and communication with industry stakeholders. 
For example, one stakeholder said that while FAA has improved its collaboration with industry stakeholders, particularly by including a wider range of stakeholders, FAA needs to ensure that stakeholders are involved early in the planning process for NextGen initiatives. FAA officials said that ensuring the appropriate stakeholders are involved in an effort is a challenge, but noted that FAA has ongoing efforts to ensure the right stakeholders are involved to avoid some of the earlier difficulties in rolling out NextGen programs. Similarly, in April 2013, we found that FAA is making progress in systematically involving industry stakeholders, air traffic controllers, and other key subject matter experts in its initiatives, such as the OAPM initiative. However, we have also recommended areas for improvement, such as developing and implementing guidelines for ensuring timely inclusion of appropriate stakeholders, including airport representatives, in the planning and implementation of NextGen improvement efforts. DOT concurred with these recommended areas for improvement and is taking steps to implement the recommendations. Increase Accountability. Twenty-one of the 64 stakeholders who suggested a change suggested FAA needs to increase accountability. Stakeholder suggestions included that FAA should hold its employees and management accountable for how well they accomplish program and plan goals and for how funds are spent. For example, one stakeholder suggested an annual operating plan could help hold FAA accountable to its performance goals. The need for more accountability at FAA, specifically regarding the implementation of NextGen, cuts across several areas we have previously reported on. In February 2014, we found that complex organizational transformations, such as NextGen, require substantial leadership commitment over a sustained period and that leaders must be empowered to make critical decisions and held accountable for results.
In April 2013, we also found that to address accountability issues, FAA has taken steps, such as designating the Deputy Administrator as the Chief NextGen Officer with responsibility for all NextGen activities. In the same report, we also discussed that the use of performance measures would allow stakeholders to hold FAA accountable for results. In light of the ongoing discussion within the aviation industry on new approaches for operating and modernizing the ATC system, we also asked stakeholders about changing the provision of ATC services to improve ATC efficiency and NextGen implementation. These potential changes include moving the provision of ATC services out of FAA into a separate unit or organization and commercializing ATC services as has been done in Canada. Seventy percent of the stakeholders (53 of 76) agreed that separating ATC operations out from FAA was an option, but half of these stakeholders (26) voiced serious reservations or indicated such a change was unlikely to occur. Stakeholders also cited potential benefits of separating air traffic control operations from FAA, including a more predictable funding source; potentially reduced political involvement in ATC operational decisions; faster and less costly modernization of the ATC system; and more efficient day-to-day operations. The remaining stakeholders we interviewed were split between the opinion that a separate ATC system was not a good idea (12) and either not providing an opinion on this question or not answering it (11). See table 2 below for stakeholder responses. Stakeholders also raised several issues that would need to be taken into account before making changes to the provision of ATC services. Further, no stakeholder category was unanimous in either supporting or rejecting the option to change provision of ATC services. Airlines were generally more supportive of separating the ATC system from FAA than labor unions and professional associations. 
General aviation stakeholders were open to the idea but had reservations about the funding scheme. See table 2 for stakeholder responses to this question by industry category. In addition, FAA officials said that they were not opposed to privatization or commercialization of the ATC system, but they would rather focus on what services FAA should provide and the best way to pay for these services. Few stakeholders suggested a specific alternative structure for the provision of ATC services, although some listed potential characteristics of an alternative structure, such as user fees, public-private partnership, and a board of directors composed of system users. One example of a specific alternative structure suggested by a stakeholder was a Consumer Service Corporation with no shareholders, so as to avoid vested interests. Others suggested models similar to NAV CANADA, a non-profit trust with a board of representatives made up of industry and government, financed with user fees, and regulated by government. Both stakeholders and FAA officials said it was important to identify what problem or problems separating ATC services out of FAA is intended to solve, before proceeding with it as a solution. However, if a change were to be made, 65 of the 76 stakeholders suggested actions to take or raised issues or concerns to consider. These issues and concerns include the following: Funding: Forty of these 65 stakeholders said the source of funding for a separate air traffic control system is important to consider. Stakeholders suggested different sources of revenue to support a separated ATC system, including user fees and a fuel tax. Several stakeholders suggested the possibility of accessing funds through capital markets as an advantage of a separated ATC system.
Because the expectation of a future revenue stream (through user fees, for example) may enable a corporatized or privatized ATC system to access private capital markets (to obtain, for example, a bond issuance), a potential benefit of such a structure could be more reliable financing for multiyear investment projects as well as for operations. Lessons Learned: Thirty-eight of these 65 stakeholders suggested studying what the separation of air traffic services from FAA would look like. Stakeholders suggested looking at how air-navigation service providers function in other countries and trying to learn from their successes and mistakes. For example, one stakeholder said that given the efficiency of the current system, before any changes are made, there needs to be an analysis of how privatization would affect passengers, airlines, and the aviation industry, and what improvements, including fewer delays and more capacity, it could offer. In a 2005 review of selected foreign air navigation service providers (ANSP), we also found some lessons learned during commercialization, including being prepared to mitigate the financial effects of an industry downturn; the importance of involving industry stakeholders in efforts to design, acquire, and deploy new technologies; balancing the business needs of an ANSP with smaller communities’ need for air service; and the importance of maintaining an appropriate level of staff to carry out safety regulation. Congressional involvement: Twenty-nine of these 65 stakeholders suggested that the extent of Congress’s role in overseeing a separate ATC system must be clarified. For example, stakeholders said Congress’s oversight responsibilities for a separate ATC system, and even whether Congress should have oversight of such a system, need to be considered. Regulatory coordination: Twenty-seven of these 65 stakeholders suggested that ensuring coordination between the safety regulator and a separate ATC system should be considered.
Stakeholders noted, for example, that ensuring coordination might be more difficult with a separated ATC system than under the current structure. Governance: Twenty-five of these 65 stakeholders suggested that governance of a separated ATC system, such as including system users on an oversight body, needs to be considered. For example, one stakeholder asked who would be on a board of directors and how those individuals would be chosen. Safety: Twenty-four of these 65 stakeholders raised concerns about safety under a separate ATC system. Stakeholders were concerned about several issues, including the effect of a non-governmental operator’s profit motive on safety and whether requiring users to pay a fee to use air traffic control services may disincentivize use of the system. Transition management: Twenty-one of these 65 stakeholders raised concerns about how to transition from the current system to a separate ATC system. Stakeholder concerns included the length and difficulty of such a transition and included questions about what to do with the infrastructure and personnel in the current system, and the modernization efforts already under way. FAA senior management also cited transition management as a potential impediment to moving to a different air traffic control structure. Specifically, an FAA official said there does not appear to be an understanding by those advocating privatization of how to move from a government-operated system to a privatized system given the need to operate the NAS at a high level of safety and efficiency. FAA officials also raised concerns about the length and difficulty of such a transition. For example, one official mentioned that such a transition to a new organization would have to include cultural and personnel changes and could take many years to implement. Another official was concerned that a transition occurring now to a privatized system could negatively affect the implementation of NextGen. 
In November 2002, we found that successful change management initiatives in large private and public sector organizations can often take at least 5 to 7 years. Access: Nineteen of these 65 stakeholders also raised concerns about access to the NAS under a separate ATC system. For example, stakeholders were concerned about small communities losing access to the ATC system and that the fees charged by a separate ATC system might reduce general aviation’s access to the ATC system. We provided DOT with a draft of this report for its review and comment. DOT provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Our work for this report focused on aviation stakeholders’ perspectives on the performance of the air traffic control (ATC) system and efforts to modernize it. This report examines stakeholder perspectives on: (1) the performance of the current ATC system and its modernization through the NextGen initiative, and any challenges the Federal Aviation Administration (FAA) may face in managing these activities; and (2) potential changes, if any, that could improve the performance of the ATC system, including FAA’s modernization initiative. We were also asked to obtain stakeholders’ perspectives on the safety of the National Airspace System (NAS). However, since nearly all stakeholders we interviewed agreed that the NAS is extremely or very safe, we did not focus on this area in this report.
To obtain aviation stakeholders’ perspectives on these issues, we interviewed a non-probability sample of 76 aviation stakeholders. We created an initial list of stakeholders using internal knowledge of the aviation industry. We then added more stakeholders based on interviewee responses to our question on whom else they thought we should speak with. Specifically, we wanted to obtain perspectives from individuals and organizations with direct experience, as users, or knowledge, through research or study, of the current ATC system, modernization efforts, and FAA’s management of the system. As such, we limited our review to U.S.-based companies and airlines and sought the views of individuals and organizations with a stake in the performance of the NAS. We divided stakeholders into the following nine categories: airlines, airports, aviation experts and other relevant organizations, general aviation, labor unions and professional associations, manufacturers and service providers, other federal government agencies (Department of Defense and National Aeronautics and Space Administration (NASA)), passenger and safety groups, and research and development organizations. A list of the individuals and groups we interviewed is in appendix II. We used a semi-structured interview format with both closed- and open-ended questions to obtain aviation stakeholder perspectives on the efficiency of the current ATC system, implementation of NextGen, and changes, if any, that could improve the operation of the ATC system and implementation of NextGen. Our interview format contained four closed-ended questions with either a five-level scale or a yes/no response. These closed-ended questions, the response categories, and stakeholder responses are either included in the body of the report or in appendix III, as appropriate. The intent of our open-ended questions was to engage the stakeholders in a conversation about the issues they considered most important and relevant.
The results of our review are not generalizable to the industry as a whole. Our discussion of the challenges FAA faces, potential changes to FAA, and issues to consider if the ATC system were separated from FAA is based on stakeholder responses to our open-ended questions. As such, the numbers we reported with these items represent those stakeholders that raised a challenge or issue to consider or suggested a change during our interview. When we report that 43 stakeholders raised budget uncertainty as a challenge, this does not necessarily mean that the remaining 33 stakeholders we interviewed disagreed. Rather, it means that those stakeholders did not raise it during the course of our interview. We analyzed the responses to these open-ended questions to identify the main themes raised by stakeholders. To ensure the accuracy of our content analysis, we internally reviewed our coding and reconciled any discrepancies. In discussing stakeholder responses to our open-ended questions, we aggregated their responses and reported on stakeholders’ perspectives in general. Stakeholder responses to the yes-no question: Do you think that separating the functions of safety regulator and ATC service provider into separate units or organizations is an option for the United States?—fell into four general responses, which we describe in this report as yes; maybe; no; and no opinion. Respondents who answered “yes” to this question said that separating the ATC service provider from the safety regulator (FAA) was not only an option, but also a good idea. While these respondents still provided issues to consider, they said that this option should be considered and were generally supportive of it. GAO classified respondents’ answers as “Maybe” for those who answered that this was an option, but generally said either it was not a good idea, it was not feasible in the United States, or that they had very strong reservations about such a change. 
Respondents who answered “no” generally said that it was a bad idea or would simply not work in the United States. Finally, some respondents indicated that they had “no opinion,” meaning that their organization did not have an official position on whether this change was an option in the United States. We reported on stakeholder responses to this closed-ended question— Do you think that separating the functions of the safety regulator and ATC service provider into separate units or organizations is an option for the United States?—by industry category. For the three industry categories with fewer than four respondents—other federal government agencies, passenger and safety groups, and research and development organizations—we combined the respondents into one category and refer to them in table 2 as Other stakeholders. To obtain FAA senior management views on the preliminary results of our content analysis of stakeholder perspectives, we conducted semi-structured interviews with: the Administrator; Deputy Administrator/Chief NextGen Officer; Assistant Administrator for NextGen; Associate Administrator for Aviation Safety; Chief Operating Officer (COO) of the Air Traffic Organization (ATO); and Assistant Administrator for Policy, International Affairs, and Environment. We reviewed GAO reports and other sources of aviation information to provide context to the challenges raised by the stakeholders and their suggested changes to the current structure. We identified reports that had discussed the stakeholder-identified themes, including collaboration with stakeholders, delivery of NextGen capabilities and Performance-Based Navigation procedures, and FAA leadership in overseeing NextGen implementation. We conducted this performance audit from November 2013 to September 2014 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Stakeholders interviewed:
Airlines for America (A4A)
Cargo Airline Association (CAA)
National Air Carrier Association (NACA)
Regional Airline Association (RAA)
Regional Air Cargo Carriers Association (RACCA)
United Parcel Service (UPS)
Houston Airport System (George Bush Intercontinental Airport, William P. Hobby Airport, and Ellington Airport)
Los Angeles World Airports (Los Angeles International Airport, LA/Ontario International Airport, and Van Nuys Airport)
Metropolitan Airports Commission (Minneapolis-St. Paul International Airport)
Port Authority of New York and New Jersey (John F. Kennedy International Airport, Newark Liberty International Airport, LaGuardia Airport, Stewart International Airport, and Teterboro Airport)
Airports Council International—North America (ACI-NA)
American Association of Airport Executives (AAAE)
Bill Ayer, Chair of the NextGen Advisory Committee (NAC)
Air Traffic Control Association (ATCA)
Michael Baiada, President and Chief Executive Officer, ATH Group
Gary Church, President, Aviation Management Associates
Dr. George Donohue, Systems Engineering and Operations Research, George Mason University
Michael Dyment, Managing Partner, NEXA Capital Partners, LLC
Amr ElSawy, President and Chief Executive Officer, Noblis Inc.
Dr. Mark Hansen, Civil and Environmental Engineering, University of California, Berkeley
Dr. John Hansman, Aeronautics and Astronautics, Massachusetts Institute of Technology
Robert Poole, Director of Transportation Policy, Reason Foundation
RTCA (formerly known as the Radio Technical Commission for Aeronautics)
Dr. Stephen Van Beek, Vice President, ICF International
J. Randolph Babbitt, Former Administrator (2009-2011)
Russell Chew, Former Chief Operations Officer, Air Traffic Organization (2003-2007)
Richard Day, Former Senior Vice President of Operations, Air Traffic Organization (2008-2010)
David Grizzle, Former Chief Operations Officer, Air Traffic Organization (2011-2013)
Aircraft Owners and Pilots Association (AOPA)
Helicopter Association International (HAI)
National Air Transportation Association (NATA)
National Business Aviation Association (NBAA)
Air Line Pilots Association (ALPA)
Allied Pilots Association (APA)
Coalition of Airline Pilots Associations (CAPA)
National Air Traffic Controllers Association (NATCA)
NetJets Association of Shared Aircraft Pilots (NJASAP)
Professional Aviation Safety Specialists (PASS)
Southwest Airlines Pilots’ Association (SWAPA)
Aerospace Industries Association (AIA)
Aircraft Electronics Association (AEA)
General Aviation Manufacturers Association (GAMA)
United Technologies (UTC) Aerospace Systems
Department of Defense (DOD)
National Aeronautics and Space Administration (NASA)
Travelers United (formerly Consumer Travel Alliance)
MITRE Center for Advanced Aviation System Development (CAASD)
This rating was not given to the stakeholders as a choice; however, the stakeholders’ answers fell between these categories.
Description and examples of challenges cited by stakeholders:
Due to the expense and uncertainty over FAA’s ability to meet timelines for deploying NextGen technologies, users have been reluctant to equip their aircraft.
Budget uncertainty makes it difficult for FAA to continue operation of an efficient ATC system and/or implement NextGen.
FAA does not match workforce skills with needs for hiring and staffing, provides insufficient training, and has insufficient planning for upcoming retirements.
FAA’s development and implementation of PBN-related procedures is not working well or is moving too slowly.
FAA must continue to deliver systems, procedures, and capabilities that demonstrate near-term benefits and returns on users’ investments to convince aviation system users to make investments in NextGen equipment.
FAA must plan for changes to or consolidation of existing facilities because NextGen represents a transition from existing ATC systems and facilities to new systems.
FAA’s offices are stove-piped, do not share information with each other well, or are not horizontally integrated.
FAA’s aversion to risk and focus on safety prevents improvements in efficiency and adoption of new technologies and procedures.
FAA does not handle ATC operations well when airspace capacity is affected by congestion and disruptions due to, for example, inclement weather and power outages.
Congress politicizes FAA’s budget and micromanages FAA operations.
FAA’s organizational structure misplaces offices and blurs lines of authority and responsibilities.
FAA does not communicate, coordinate, or collaborate well with the aviation industry.
FAA does not plan well, such as setting unrealistic deadlines, or its plans lack clarity and precision.
FAA’s leadership and political appointees lack the right professional background and experience.
FAA does not operate airport surface operations well to accommodate increased air capacity or maintain surface infrastructure well.
FAA’s policies and procedures are not up-to-date or lack clarity.
FAA controllers in different regions and airports are not consistent in applying procedures, such as approach and departure procedures.
There is little accountability in FAA, such as for NextGen delays.
FAA’s process for certifying safety, aircraft, avionics, and personnel takes too long, or is inconsistent.
FAA lacks adequate performance measures or its measures are output-related, instead of outcome-related.
Examples of suggested changes:
FAA needs a more stable or predictable funding stream.
FAA needs to improve human capital activities including updating the air traffic controller handbook, improving the training air traffic controllers receive on new technologies, and streamlining the hiring process.
FAA needs to improve communication within the agency to reduce difficulties and delays in the roll out of NextGen technologies and procedures.
FAA needs to streamline processes, including the development and implementation of flight navigation procedures, the certification of new aircraft equipment, and the acquisition of new technology.
While FAA has made improvements involving stakeholders in the planning and implementation of NextGen initiatives, FAA should do more to encourage participation and communication with industry stakeholders.
FAA should hold its employees and management accountable for how well they accomplish program and plan goals and for how funds are spent.
There needs to be consistent and empowered leadership at FAA.
FAA needs to ensure that all NextGen activities are overseen by one NextGen office.
FAA needs to create relevant performance measures that measure improvements resulting from the implementation of NextGen.
FAA needs to focus on delivering NextGen capabilities with near-term benefits.
FAA needs to reconsider how it oversees the industry and/or reduce its layers of oversight.
In addition to the individual named above, Catherine Colwell, Assistant Director; Amy Abramowitz; Sarah Arnett; William Colwell; Kevin Egan; Sam Hinojosa; David Hooper; Stuart Kaufman; Jennifer Kim; Josh Ormond; Amy Rosewarne; and Rebecca Rygg made key contributions to this report.
Related GAO products:
FAA Reauthorization Act: Progress and Challenges Implementing Various Provisions of the 2012 Act. GAO-14-285T. Washington, D.C.: February 5, 2014.
National Airspace System: Improved Budgeting Could Help FAA Better Determine Future Operations and Maintenance Priorities. GAO-13-693. Washington, D.C.: August 22, 2013.
NEXTGEN Air Transportation System: FAA Has Made Some Progress in Midterm Implementation, but Ongoing Challenges Limit Expected Benefits. GAO-13-264. Washington, D.C.: April 8, 2013.
Next Generation Air Transportation System: FAA Faces Implementation Challenges. GAO-12-1011T. Washington, D.C.: September 12, 2012.
Air Traffic Control Modernization: Management Challenges Associated with Program Costs and Schedules Could Hinder NextGen Implementation. GAO-12-223. Washington, D.C.: February 16, 2012.
Next Generation Air Transportation: Collaborative Efforts with European Union Generally Mirror Effective Practices, but Near-term Challenges Could Delay Implementation. GAO-12-48. Washington, D.C.: November 3, 2011.
Next Generation Air Transportation System: FAA Has Made Some Progress in Implementation, but Delays Threaten to Impact Costs and Benefits. GAO-12-141T. Washington, D.C.: October 5, 2011.
NEXTGEN Air Transportation System: Mechanisms for Collaboration and Technology Transfer Could Be Enhanced to More Fully Leverage Partner Agency and Industry Resources. GAO-11-604. Washington, D.C.: June 30, 2011.
Aviation Safety: Status of Recommendations to Improve FAA’s Certification and Approval Processes. GAO-14-142T. Washington, D.C.: October 30, 2013.
FAA Facilities: Improved Condition Assessment Methods Could Better Inform Maintenance Decisions and Capital-Planning Efforts. GAO-13-757. Washington, D.C.: September 10, 2013.
Air Traffic Control: Characteristics and Performance of Selected International Air Navigation Service Providers and Lessons Learned from Their Commercialization. GAO-05-769. Washington, D.C.: July 29, 2005. | Over the past two decades, U.S. aviation stakeholders have debated whether FAA should be the entity in the United States that operates and modernizes the ATC system. During this period, GAO reported on challenges FAA has faced in operating and modernizing the ATC system.
FAA reorganized several times in attempts to improve its performance and implement an initiative to modernize the ATC system, known as NextGen. Recent budgetary pressures have rekindled industry debate about FAA's efficiency in operating and modernizing the ATC system. GAO was asked to gather U.S. aviation industry stakeholder views on the operation and modernization of the current ATC system. This report provides perspectives from a wide range of stakeholders on (1) the performance of the ATC system and the NextGen modernization initiative and any challenges FAA may face in managing these activities and (2) potential changes that could improve the performance of the ATC system, including the NextGen modernization initiative. Based on GAO's knowledge and recommendations from interviewees, GAO interviewed a non-probability, non-generalizable sample of 76 U.S. aviation industry stakeholders—including airlines, airports, labor unions, manufacturers, and general aviation—using a semi-structured format with closed and open-ended questions. GAO also discussed the perspectives with current FAA officials. The Department of Transportation provided technical comments on a draft of this product. The 76 aviation industry stakeholders with whom GAO spoke were generally positive regarding the Federal Aviation Administration's (FAA) operation of the current air traffic control (ATC) system but identified challenges about transitioning to the Next Generation Air Traffic Control System (NextGen). Specifically, the majority of stakeholders rated FAA as moderately to very capable of operating an efficient ATC system, but the majority also rated FAA as only marginally to moderately capable of implementing NextGen, FAA's initiative to modernize the system. Almost all (75) of the stakeholders identified challenges that they believe FAA faces, particularly in implementing the NextGen initiatives. 
These challenges included difficulty in (1) convincing reluctant aircraft owners to invest in the aircraft technology necessary to benefit from NextGen (46 stakeholders) and (2) mitigating the effects of an uncertain fiscal environment (43 stakeholders). FAA officials acknowledged and generally agreed with these challenges. Sixty-four stakeholders suggested a range of changes they believe could improve the efficiency of ATC operations and NextGen's implementation. The change stakeholders suggested most often was to modify how FAA's ATC operations and NextGen programs are funded, including the need to ensure that FAA has a predictable and long-term funding source. Other suggested changes were to improve human capital activities, such as air traffic controllers' training, and improve coordination with industry stakeholders. GAO has reported on these issues in the past, and in some cases, made recommendations, with which FAA concurred but has not yet implemented. GAO also asked stakeholders whether separating ATC services from FAA, such as the privatization of the ATC service provider, was an option; 27 of the stakeholders believed it was an option; another 26 believed it was an option, but had significant reservations about such a change. Support for this option was mixed among categories of stakeholders (see table below). Stakeholders identified several issues that would need to be taken into account before making any changes to the provision of ATC services, including lessons learned from other countries, funding sources for such a system, and the extent of Congress's role in overseeing a separate ATC system.
a. Maybe represents stakeholders who qualified their “Yes” responses with significant reservations.
b. Included in the Other category are three industry categories with fewer than four stakeholders—Research & Development Organizations, Other Federal Agencies, and Passenger and Safety Groups. |
The September 2001 Quadrennial Defense Review (QDR) outlined a strategy to sustain and transform the military force structure that has been in place since the mid-1990s. In this review, the Department of Defense (DOD) committed to selectively recapitalize older equipment items to meet near-term challenges and to provide near-term readiness. DOD recognized that the older equipment items critical to DOD’s ability to defeat current threats must be sustained as transformation occurs. DOD also recognizes that recapitalization of all elements of U.S. forces since the end of the Cold War has been delayed for too long. DOD procured few replacement equipment items as the force aged throughout the 1990s, but it recognizes that the force structure will eventually become operationally and technologically obsolete without a significant increase in resources that are devoted to the recapitalization of weapons systems. The annual Future Years Defense Plan (FYDP) contains DOD’s plans for future programs and priorities. It presents DOD estimates of future funding needs based on specific programs. Through the FYDP, DOD projects costs for each element of those programs through a period of either 5 or 6 years on the basis of proposals made by each of the military services and the policy choices made by the current administration. The 2003 FYDP extends from fiscal year 2003 to fiscal year 2007, and the 2004 FYDP extends from fiscal year 2004 to fiscal year 2009. Congress has expressed concerns that the military modernization budget and funding levels envisioned in the FYDP appear to be inadequate to replace aging equipment and incorporate cutting-edge technologies into the force at the pace required by the QDR and its underlying military strategy. As shown in table 1, of the 25 equipment items we reviewed, we assessed the current condition of 3 of these equipment items as red, 11 as yellow, and 10 as green. 
We were not able to obtain adequate data to assess the condition for the Marine Corps Maverick Missile because the Marine Corps does not track readiness trend data, such as mission capable or operational readiness rates, for munitions as they do for aircraft or other equipment. Rotary wing lift helicopters, specifically the CH-46E and the CH-47D helicopters, had the lowest condition rating among the equipment items we reviewed, followed by fixed wing aircraft. Although we assessed the condition as green for several equipment items such as the Army’s Abrams tank and the Heavy Expanded Mobility Tactical Truck, and the Marine Corps Light Armored Vehicle-Command and Control Variant, we identified various problems and issues that could potentially worsen the condition of some equipment items in the near future if not attended to. Specifically, for the Abrams tank, and similarly for the Heavy Expanded Mobility Tactical Truck, Army officials cited supply and maintenance challenges at the unit level such as repair parts shortages, inadequate test equipment, and lack of trained technicians that could impact the tank’s condition in the near future. While the Marine Corps has a Light Armored Vehicle-Command and Control Variant upgrade program under way, Marine Corps officials caution that any delays in the upgrade program could affect future readiness. According to service officials and prior GAO reports, the services are currently able to alleviate the effects of these problems, in many cases, through increased maintenance hours and cannibalization of parts from other equipment. The military services use a number of metrics to measure equipment condition. Examples include mission capable rates for aircraft, operational readiness rates for equipment other than aircraft, average age, and utilization rates (e.g., flying hours). 
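As a rough illustration of how metrics like these could feed a red/yellow/green assessment, consider the sketch below. The thresholds are hypothetical, invented only to show the shape of the logic; GAO's actual ratings were qualitative judgments that also weighed offsetting factors such as upgrade programs and how often and by how much goals were missed.

```python
def stoplight_rating(mission_capable_rate, goal, years_missed_goal, avg_age_years):
    """Illustrative red/yellow/green rating from readiness metrics.

    All thresholds here are hypothetical assumptions for illustration --
    the actual assessments weighed severity and duration of missed goals,
    age, utilization, and offsetting factors case by case.
    """
    shortfall = goal - mission_capable_rate  # percentage points below goal
    if shortfall > 10 or (shortfall > 0 and years_missed_goal >= 4):
        return "red"     # sustained or severe failure to meet readiness goals
    if shortfall > 0 or avg_age_years > 10:
        return "yellow"  # misses goals, or age/corrosion could cause problems in 3-5 years
    return "green"       # generally meets mission capable/operational readiness goals
```

For example, under these assumed thresholds an item 2 points under its goal for one year with a 12-year average age would rate "yellow", while an item 15 points under its goal for five years would rate "red".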
The equipment items we assessed as red did not meet mission capable or operational readiness goals for sustained periods, were older equipment items, and/or had high utilization rates. For example, 10 of 16 equipment items for which readiness data were available did not meet mission capable or operational readiness goals for extended periods from fiscal year 1998 through fiscal year 2002. The average age of 21 of the equipment items ranged from about 1 year to 43 years. Some equipment items for which we assessed the condition as yellow also failed to meet mission capable or operational readiness goals and were more than 10 years old. However, offsetting factors, such as how frequently the equipment items did not meet readiness goals or by what percentage they missed the goals, indicated less severe and urgent problems than items we assessed as red. Other equipment items may have had high mission capable rates, but because of overall age and related corrosion problems, we assessed these equipment items as yellow to highlight the fact that these items could potentially present problems if not attended to within the next 3-5 years. The equipment items for which we assessed the condition as green generally met mission capable and operational readiness goals. While three of these equipment items—the Army Heavy Expanded Mobility Tactical Truck, the Air Force F-16, and the Marine Corps Light Armored Vehicle- Command and Control Variant—did not meet mission capable or operational readiness goals, we assessed the condition as green because the condition problems identified were less severe than the items we assessed as red or yellow. For example, an equipment item may have been slightly below the goal but only for non-deployed units, or the fleet-wide goals may have been met for the equipment item overall, although the specific model we reviewed did not meet the goals. 
In addition, although the rates for an equipment item may be slightly below its goal, it may be able to meet operational requirements. We also considered any upgrades that were underway at the time of our review that would extend the service life of the equipment. Maintenance problems were most often cited by the Army and Marine Corps officials we met with as the cause for equipment condition deficiencies for the equipment items we reviewed. Equipment operators and maintainers that we met with believed equipment degradation was the result of maintenance problems in one of two categories—parts or personnel. The parts problems include availability of parts or logistics and supply system problems. Availability problems occur when there are parts shortages, unreliable parts, or obsolete parts due to the advanced age of the equipment items. Logistics and supply system problems occur when it takes a long time to order parts or the unit requesting the parts has a low priority. In June, July, and August of 2003, we issued six reports highlighting deficiencies in DOD’s and the services’ management of critical spare parts. We also issued a report on the problems DOD and the services are having with corrosion of military equipment, finding that they had not taken advantage of opportunities to mitigate the impact of corrosion on equipment. Maintenance problems due to personnel include (1) lack of trained and experienced technicians and (2) increases in maintenance man-hours required to repair some of these aging equipment items. We reported in April 2003, for example, that DOD has not adequately positioned or trained its civilian workforce at its industrial activities to meet future requirements. Consequently, the Department may continue to have difficulty maintaining adequate skills at its depots to meet maintenance requirements. In most cases, the services have developed long-range program strategies for sustaining and modernizing the 25 equipment items that we reviewed.
However, some gaps exist because the services either have not validated their plans for the sustainment, modernization, or replacement of the equipment items, or the services’ program strategies for sustaining the equipment are hampered by problems or delays in the fielding of replacement equipment or by the vulnerability of the programs to budget cuts. The two equipment items for which we assessed the program strategy as red are the KC-135 Stratotanker and the Tomahawk Cruise Missile because, although the services may have developed long-range program strategies for these equipment items, they have not validated or updated their plans for sustaining, modernizing, or replacing these items. In the case of the KC-135 Stratotanker, the Air Force has embarked on a controversial, expensive program to replace the tanker fleet, but as we have reported, it has not demonstrated the urgency of acquiring replacement aircraft and it has not defined the requirements for the number of aircraft that will be needed. Similarly, for the Tomahawk missile, the Navy has not identified how many of these missiles it will need in the future, thereby significantly delaying the acquisition process. We assessed eight of the services’ program strategies as yellow, some of them because they will be affected by delays in the fielding of equipment to replace the items in our review. According to service officials, as the delivery of new replacement equipment items is delayed, the services must continue using the older equipment items to meet mission requirements. Consequently, the services may incur increased costs due to maintenance that was not programmed for equipment retained in inventory beyond the estimated service life. For example, the planned replacement equipment for the Marine Corps CH-46E helicopter (i.e., the MV-22 Osprey) has been delayed by about 3 years and is not scheduled to be fielded until 2007.
DOD has also reportedly cut the number of replacement aircraft it plans to purchase by about 8 to 10 over the next few years; thus, the Marine Corps will have to retain more CH-46E helicopters in its inventory. Program management officials have requested additional funds for airframe crack repairs, seat replacement, a move to lightweight armor to reduce aircraft weight, engine overhauls, and avionics upgrades to keep the aircraft safe and reliable until fielding of the replacement equipment. According to Marine Corps officials, the CH-46E program strategy has also been hampered by the 5-year rule, which limits installation of new modifications other than safety modifications into the aircraft unless 5 years of service are left on the aircraft. Procurement of the replacement equipment for the Marine Corps’ Assault Amphibian Vehicle has also been delayed (by 2 years), and it is not scheduled for full fielding until 2012. The program strategy for the Assault Amphibian Vehicle includes upgrades, but for only 680 of the 1,057 vehicles in the inventory. We also assessed the program strategy for some equipment items as yellow if they were vulnerable to budget cuts. For example, according to Navy officials, the Navy frigates’ modernization program is susceptible to budget cuts because the frigates’ future role is uncertain as the Littoral Combat Ship is developed. In addition, the program strategy for the frigates is questionable because of the uncertainty about the role frigates will play. Specifically, Navy frigates are increasingly used for homeland defense missions, and their program strategy has not been updated to reflect that they will be used more often and in different ways. The Army’s CH-47D helicopter is also vulnerable to budget cuts. The Army plans to upgrade 279 CH-47D helicopters to F models under its recapitalization program; the upgrade includes a purchase of CH-47F model helicopters planned in fiscal year 2004.
The fiscal year 2004 budget for this purchase has already been reduced. Program managers had also planned to purchase 16 engines, but funding was transferred to requests for higher priority programs. We assessed the program strategy for the remaining 15 equipment items as green because the services have developed long-range program strategies for sustaining, modernizing, or replacing these items consistent with their estimated remaining service life. For example, the Army has developed program strategies for all tracked and wheeled vehicles in our sample. Likewise, the Air Force has developed program strategies for most fixed wing aircraft in our sample throughout the FYDP. In the case of munitions, with the exception of the Navy Tomahawk Cruise Missile and Standard Missile-2, the services have developed program strategies for sustaining and modernizing the current missile inventory in our sample. In many cases, the funding DOD has requested or is projecting for future years in the FYDP for the equipment items we reviewed does not reflect the military services’ long-range program strategies for equipment sustainment, modernization, or recapitalization. According to service officials, the services submit their budgets to DOD and the Department has the authority to increase or decrease the service budgets based upon the perceived highest priority needs. According to DOD officials, for future years’ funding, the FYDP strikes a balance between future investment and program risk, taking into consideration the services’ stated requirements as approved by DOD. As shown in table 1, we assessed the funding for 15 of the 25 equipment items as red or yellow because the department’s requested funding did not adequately reflect its long-range program strategies for modernization, maintenance, and spare parts. For example, as shown in table 2, we identified fiscal year 2003 unfunded requirements totaling $372.9 million for four major aircraft equipment items we reviewed. 
The most significant funding shortfalls occurred when parts, equipment upgrades, and maintenance were not fully funded or when replacement equipment items were not fielded as scheduled. The equipment items for which we assessed the funding as yellow had smaller funding shortfalls than the red items. Although we assessed the funding as green for the remaining nine equipment items, program managers raised concerns about the availability of operation and maintenance funds in future years, and stated that insufficient operation and maintenance funds could potentially result in more severe condition problems and increased future maintenance costs. According to service officials, funding shortfalls occurred when parts, equipment upgrades, or maintenance were not fully funded or funds were reduced to support higher priority service needs. As we have previously reported, DOD increases or decreases funds appropriated by Congress as funding priorities change. Other shortfalls occur when units subsequently identify maintenance requirements that were not programmed into the original budget requests. In addition, when replacement equipment items are not fielded as scheduled, the services must continue to maintain these aging equipment items for longer than anticipated. Equipment items considered legacy systems, such as the Marine Corps CH-46E helicopter, may not receive funding on the basis of anticipated fielding of replacement equipment in the near future. The gaps between funding for legacy systems (which are heavily used and critical to the services’ mission) and funding for future replacement equipment result when fielding of the new equipment has been delayed and budgets have been reduced for maintenance of legacy systems. Funding for these legacy systems may also be a target for funding reductions to support higher service priority items.
According to the program managers for some of the equipment items we reviewed (including the Army Abrams tank, Heavy Expanded Mobility Tactical Truck, and Navy EA-6B Prowler), as the services retain aging equipment in their inventories longer than expected, maintenance requirements increase, thus increasing operation and maintenance costs. Program managers raised concerns about the availability of sufficient operation and maintenance funding to sustain these aging equipment items in the future. Also, program managers stated that present sustainment funds (i.e., operation and maintenance funds) may only cover a small percentage of the equipment’s requirements, and they frequently rely on procurement funds to subsidize equipment improvements common to multiple equipment items. However, once production of the equipment item has been completed and procurement funds are no longer available for use, program managers must compete with the rest of the service for limited operation and maintenance funds. Program managers expressed concerns that operation and maintenance funds are not currently available to fund equipment improvements and noted operation and maintenance funds may not be available in the future. Based on our analysis of equipment condition, the performance of the equipment items in recent military conflicts, and discussions with service officials, program managers, and equipment operators and maintainers, we found that most of the equipment items we reviewed are capable of fulfilling their wartime missions despite some limitations. In general, the services will always ensure equipment is ready to go to war, often through surges in maintenance and overcoming obstacles such as obsolete parts, parts availability, and cannibalization of other pieces of equipment. 
Some of these equipment items (such as the Marine Corps CH-46E helicopter and all Air Force aircraft except the B-2) were used in Operation Desert Storm and have been used in other diverse operations such as those in Kosovo and Afghanistan. With the exception of the Army Stryker and GMLRS, all of the equipment items we reviewed were used recently in Operation Iraqi Freedom. The services, in general, ensure that equipment is ready for deployment by surging maintenance operations when necessary. Only one equipment item, the Marine Corps CH-46E helicopter, could not accomplish its intended wartime mission due to lift limitations. However, Marine Corps officials stated that they were generally satisfied that the CH-46E met its mission in Operation Iraqi Freedom despite these limitations. Of the remaining equipment items we reviewed, including all Air Force fixed-wing aircraft, all tracked and wheeled vehicles, and most munitions, service officials believe that most of these items are capable of fulfilling their wartime missions. According to service officials and program managers, while final Operation Iraqi Freedom after action reports were not available at the time of our review, initial reports and preliminary observations have generally been favorable for the equipment items we reviewed. However, these officials identified a number of specific concerns for some of these equipment items that limit their wartime capabilities to varying degrees. For example, only 26 out of 213 Marine Corps Assault Amphibian Vehicles at Camp Lejeune had been provided enhanced protective armor kits prior to Operation Iraqi Freedom. According to Marine Corps officials at Camp Lejeune, lack of the enhanced protective armor left the vehicles vulnerable to the large caliber ammunition used by the Iraqi forces. 
According to Navy officials, the warfighting capabilities of the Navy EA-6B Prowler aircraft will be degraded if the aircraft are not upgraded and the outer wing panels are not replaced. Fleet commanders expressed concerns about potentially deploying some ships we reviewed with only one of three weapons systems capable of being used. However, program managers stated that plans were in place to reduce the vulnerability of these ships by fielding two compensating weapons systems. Although the military services are generally able to maintain military equipment to meet wartime requirements, the ability to do so over the next several years is questionable, especially for legacy equipment items. Because program strategies have not been validated or updated and funding requests do not reflect the services' long-range program strategies, maintaining this current equipment while transforming to a new force structure as well as funding current military operations in Iraq and elsewhere will be a major challenge for the department and the services. We do not believe, however, that the funding gaps we identified are necessarily an indication that the department needs additional funding. Rather, we believe that the funding gaps are an indication that funding priorities need to be more clearly linked to capability needs and to long-range program strategies. The military services will always need to meet mission requirements and to keep their equipment ready to fulfill their wartime missions. However, this state of constant readiness comes at a cost. The equipment items we reviewed appear to have generally fulfilled wartime missions, but often through increased maintenance for deployed equipment and other extraordinary efforts to overcome obstacles such as obsolete parts, parts availability, and cannibalization of other pieces of equipment. The reported metrics may not accurately reflect the time needed to sustain and maintain equipment to fulfill wartime missions.
Substantial equipment upgrades or overhauls may be required to sustain older equipment items until replacement equipment items arrive. While our review was limited to 25 equipment items and represents a snapshot at a particular point in time, we believe the department should reassess its current processes for reviewing the condition, program strategy, and funding for key legacy equipment items. Specifically, we recommend that the Secretary of Defense, in conjunction with the Secretaries of the Army, the Air Force, and the Navy, reassess the program strategies for equipment modernization and recapitalization, and reconcile those strategies with the services' funding requests to ensure that key legacy equipment items, especially those needed to meet the strategy outlined in the September 2001 Quadrennial Defense Review, are sustained until replacement equipment items can be fielded. In reconciling these program strategies to funding requests, the Secretary of Defense should highlight for the Congress, in conjunction with the department's fiscal year 2005 budget submissions, the risks involved in sustaining key equipment items if adequate funding support is not requested and the steps the department is taking to address those risks. As part of this process, the department should identify the key equipment items that, because of impaired conditions and their importance to meeting the department's military strategy, should be given the highest priority for sustainment, recapitalization, modernization, or replacement.
If the Congress wants a better understanding of the condition of major equipment items, the department’s strategy to maintain or recapitalize these equipment items, and the associated funding requirements for certain key military equipment needed to meet the strategy outlined in the QDR, the Congress may wish to consider having the Secretary of Defense provide an annual report, in conjunction with its annual budget submissions, on (1) the extent to which key legacy equipment items, particularly those that are in a degraded condition, are being funded and sustained until replacement equipment items can be fielded; (2) the risks involved in sustaining key equipment items if adequate funding support is not requested; and (3) the steps the department is taking to address those risks. In written comments on a draft of this report, the Department of Defense partially concurred with our recommendation that it should reassess the program strategies for equipment modernization and recapitalization, and reconcile those strategies to the services’ funding requests. However, the department did not concur with our other two recommendations that it should (1) highlight for the Congress the risks involved in sustaining key equipment items if adequate funding support is not requested and the steps the department is taking to address those risks, and (2) identify the equipment items that should be given the highest priority for sustainment, recapitalization, modernization, or replacement. The department’s written comments are reprinted in their entirety in appendix III. 
In partially concurring with our first recommendation that it should reassess the program strategies for equipment modernization and recapitalization, and reconcile those strategies to the services' funding requests, the department agreed that, while the overall strategy outlined in the September 2001 Quadrennial Defense Review may be unchanged, events over time may dictate changes in individual program strategies in order to meet the most current threat. The department stated, however, that through its past Planning, Programming, and Budgeting System and the more current Planning, Programming, Budgeting, and Execution processes, the department had and continues to have an annual procedure to reassess program strategies to ensure equipment maintenance, modernization, and recapitalization funding supports the most recent Defense strategy. While we acknowledge that these budget processes may provide a corporate, department-level review of what is needed to accomplish the national defense mission, the department's budget and the information it provides to the Congress do not clearly identify the funding priorities for individual equipment items. For example, although the funding to sustain the department's major equipment items is included in its Operation and Maintenance budget accounts, these budget accounts do not specifically identify funding for individual equipment items. We continue to believe that the department, in conjunction with the military services, needs to develop a more comprehensive and transparent approach for assessing the condition of key legacy equipment items, developing program strategies to address critical equipment condition deficiencies, and prioritizing the required funding.
The department did not concur with our second recommendation that, in reconciling the program strategies to funding requests, it should highlight for the Congress, in conjunction with its fiscal year 2005 budget submissions, the risks involved in sustaining key equipment items if adequate funding support is not requested and the steps the department is taking to address those risks. Specifically, the department stated that its budget processes and the annual Defense budget provide the Congress a balanced program with all requirements "adequately" funded and that the unfunded requirements identified by the program managers or the services may not be validated at the department level. While we agree that the department's budget may identify its highest funding priorities at the departmentwide level, it does not provide the Congress with an assessment of equipment condition deficiencies, unfunded requirements identified by the services, and the potential risks associated with not fully funding the services' program strategies. In this report, we identify a number of examples of equipment condition deficiencies and inconsistencies between the program strategies and the funding requests to address those deficiencies that were not fully addressed in the department's budget documents. We believe that the Congress, in its oversight of the department's major equipment programs, needs to be better informed of specific equipment condition deficiencies, the long-range strategies and required funding to address those deficiencies, and the risks associated with not adequately funding specific equipment modernization and recapitalization requirements. The department also did not concur with our recommendation that it should identify for the Congress the key equipment items that, because of impaired condition and their importance to meeting the department's military strategies, should be given the highest priority for sustainment, recapitalization, modernization, or replacement.
In its comments, the department stated that, in developing the annual Defense budget, it has already allocated resources according to its highest priorities. The department further stated that key items that are vital to accomplishing the department’s mission are allocated funding in order to meet the requirements of the most current Defense strategy, and that there is no need to restate these priorities with a list. Similar to our rebuttal to the department’s response to our second recommendation as discussed above, we do not believe that the department’s annual budget provides the Congress with sufficient information on the most severe equipment condition deficiencies and the funding priorities for addressing those deficiencies. We believe that a separate analysis, in conjunction with the department’s budget submissions, that highlights the most critical equipment condition deficiencies, the planned program strategies for addressing those deficiencies, and the related funding priorities is needed to provide the Congress with the information it needs to make informed budget decisions. The department also noted in its written comments that our report identifies the CH-47D, CH-46E, KC-135, EA-6B, Standard Missile-2, and the Tomahawk missile as equipment items with problems and issues that warrant action within the next 1 to 3 years. The department stated that it would continue to reassess these equipment items as it goes through its resource allocation process. Lastly, the department provided technical comments concerning our assessments of specific equipment items in appendix II, including the KC-135 Stratotanker, Assault Amphibian Vehicle, MV-22, Tomahawk Cruise Missile, and the CH-46E Sea Knight Helicopter. We reviewed and incorporated these technical comments, as appropriate. The revisions that we made based on these technical comments did not change our assessments for the individual equipment items. 
In some cases, the data and information the department provided in its technical comments resulted from program and funding decisions that were made subsequent to our review. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-8365 if you or your staffs have any questions concerning this report. Major contributors to this report are included in appendix IV. To determine the level of attention required by the Department of Defense, the military services, and/or the Congress for each of the 25 equipment items we reviewed, we performed an independent evaluation of the (1) equipment's current condition; (2) services' program strategies for the sustainment, modernization, or replacement of the equipment items; (3) current and projected funding levels for the equipment items in relation to the services' program strategies; and (4) equipment's wartime capabilities. Based on our evaluation of the condition, program strategy, and funding for each of the 25 equipment items, we used a traffic light approach—red, yellow, or green—to indicate the severity and urgency of problems or issues. We established the following criteria to assess the severity and urgency of the problems. Red indicates a problem or issue that is severe enough to warrant action by DOD, the military services, and/or the Congress within the next 1-3 years. We selected this time frame of 1-3 years because it represents the time frame for which DOD is currently preparing annual budgets. Yellow indicates a problem or issue that is severe enough to warrant action by DOD, the military services, and/or the Congress within the next 3-5 years.
We selected this time frame of 3-5 years because it represents the near-term segment of DOD's Future Years Defense Plan. Green indicates that we did not identify any specific problems or issues at the time of our review, or that any existing problems or issues we identified are not, in our view, of a severe enough nature to warrant action by DOD, the military services, and/or the Congress within the next 5 years. We selected this time frame of 5 years because it represents the longer-term segment of DOD's Future Years Defense Plan. We also reviewed the wartime capability of the selected equipment items, focusing on the extent to which each equipment item is capable of fulfilling its wartime mission. Because of ongoing operations in Iraq and our limited access to the deployed units and related equipment performance data, we were unable to obtain sufficient data to definitively assess the wartime capability for each of the 25 equipment items we reviewed, as we did for each of the other three assessment areas. To select the 25 equipment items we reviewed, we worked with the military services and your offices to judgmentally select approximately two weapons equipment items, two support equipment items, and two munitions items from the equipment inventories of each of the four military services—Army, Air Force, Navy, and Marine Corps. We relied extensively on input from the military services and prior GAO work to select equipment items that have been in use for a number of years and are critical to supporting the services' mission. We based our final selections on the equipment items that the military services believed were most critical to their missions. The 25 equipment items we selected for review include 7 Army equipment items, 6 Air Force equipment items, 7 Navy equipment items, and 5 Marine Corps equipment items.
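The traffic light criteria described above reduce to a simple threshold rule on the time frame within which action is warranted. The following sketch is illustrative only; the function name and its input are our own shorthand, not part of GAO's methodology:

```python
def assessment_color(years_until_action_warranted):
    """Illustrative mapping of the report's traffic light criteria.

    red:    action warranted within the next 1-3 years
            (the time frame for which DOD prepares annual budgets)
    yellow: action warranted within the next 3-5 years
            (the near-term segment of the Future Years Defense Plan)
    green:  no action warranted within the next 5 years
            (the longer-term segment of the Future Years Defense Plan)
    """
    if years_until_action_warranted is None or years_until_action_warranted > 5:
        return "green"   # no severe problem identified within 5 years
    if years_until_action_warranted <= 3:
        return "red"     # most urgent: within the annual budget horizon
    return "yellow"      # near-term segment of the Future Years Defense Plan
```

Under this rule, an equipment item whose problems warrant action within 1 to 3 years, such as the CH-47D's condition deficiencies discussed later in this report, would map to "red".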
Our assessments apply only to the 25 equipment items we reviewed, and the results of our assessments cannot be projected to the entire inventory of DOD equipment. To assess equipment condition, we obtained and analyzed data on equipment age, expected service life, and the services’ equipment condition and performance indicators such as mission capable rates, operational readiness rates, utilization rates, failure rates, cannibalization rates, and depot maintenance data for each of the equipment items we reviewed. The specific data that we obtained and analyzed for each equipment item varied depending on the type of equipment and the extent to which the data were available. The scope of our data collection for each of the equipment items included both the active and reserve forces. We also met with the services’ program managers and other cognizant officials from each of the four military services for each of the 25 equipment items. In addition, we visited selected units and maintenance facilities to observe the equipment during operation or during maintenance and to discuss equipment condition and wartime capability issues with equipment operators and maintainers. Our observations and assessments were limited to equipment in the active duty inventory. To assess the program strategy for these equipment items, we reviewed the services’ plans for future sustainment, modernization, recapitalization, or replacement of the equipment items in order to meet the services’ mission and force structure requirements. We met with the services’ program managers and other military service officials to discuss and assess the extent to which the services have a strategy or roadmap for each of the 25 equipment items, and whether the program strategy is adequately reflected in DOD’s current budget or the Future Years Defense Plan. 
To assess equipment funding, we obtained and analyzed data on historical, current, and future years' budget requests for each of the 25 equipment items we reviewed. We also reviewed the services' budget requests, appropriations, and obligations for fiscal year 1998 through fiscal year 2003 to determine how the funds that had been requested and appropriated for each of the equipment items were used. In addition, we reviewed the Future Years Defense Program for fiscal year 2003 to fiscal year 2007 and for fiscal year 2004 to fiscal year 2008 to determine if the projected funding levels were consistent with the services' program strategies for sustainment, modernization, recapitalization, or replacement of the selected equipment items. We also met with the services' program managers for each of the 25 equipment items to identify budget shortfalls and unfunded requirements. We did not independently validate the services' requirements. We were unable, however, to obtain specific information from the Office of the Secretary of Defense or the Joint Staff on the long-term program strategies and funding priorities for these equipment items because officials in these offices considered this information to be internal DOD data and would not make it available to us. To review the wartime capability of each equipment item, we discussed with military service officials, program managers, and equipment operators and maintainers the capabilities of the equipment items to fulfill their wartime missions and the equipment's performance in recent military operations. Because of ongoing operations in Iraq and our limited access to the deployed units and related equipment performance data, we were unable to collect sufficient data to definitively assess wartime capability or to assign a color-coded assessment as we did with the other three assessment areas.
We also reviewed related Defense reports, such as after action reports and lessons learned reports, from recent military operations to identify issues or concerns regarding the equipment's wartime capabilities. We performed our work at relevant military major commands, selected units and maintenance facilities, and one selected defense combatant command. Our access to specific combatant commands and military units was somewhat limited due to their involvement in Operation Iraqi Freedom. The specific military activities that we visited or obtained information from include the following: U.S. Army, Headquarters, Washington, D.C.; U.S. Army, Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology, Washington, D.C.; U.S. Army Forces Command Headquarters, Atlanta, Ga.; U.S. Army, 1st Cavalry Division, 118th Corps, Ft. Hood, Tx.; U.S. Army, Aviation and Missile Command, Redstone Arsenal, Precision Fire and Missile Project Office, Huntsville, Al.; U.S. Army, Tank-automotive and Armaments Command, Warren, Mich.; U.S. Army, Cost and Economic Analysis Center, Pentagon, U.S. Army, Pacific, Ft. Shafter, Hawaii; and U.S. Army, 25th Infantry Division (Light), Schofield Barracks, Hawaii; U.S. Air Force, Headquarters, Plans and Programs Division, U.S. Air Force, Combat Forces Division, and Global Mobility Division, U.S. Air Force, Munitions Missile and Space Plans and Policy Division, U.S. Air Force, Air Logistics Center, Robins Air Force Base, Ga.; U.S. Air Force, Air Combat Command, Directorate of Requirements and Plans, Aircraft Division, and the Installation and Logistics Division, Langley Air Force Base, Va.; U.S. Air Force, Pacific, Hickam Air Force Base, Hawaii; U.S. Navy, Naval Surface Forces, Atlantic Fleet, Norfolk Naval Base, Va.; U.S. Navy, Naval Air Force, Atlantic Fleet, Norfolk Naval Base, Va.; U.S. Navy, Naval Weapons Station Yorktown, Va.; U.S. Navy, Naval Surface Forces, Pacific Fleet, Pearl Harbor, Hawaii; U.S.
Navy, Naval Surface Forces, Pacific Fleet, Naval Amphibious Base, U.S. Navy, Naval Air Forces, Naval Air Station North Island, Coronado, U.S. Navy, Naval Weapons Station Seal Beach, Calif.; U.S. Navy, Electronic Attack Wing, U.S. Pacific Fleet, Naval Air Station Whidbey Island, Wash.; U.S. Navy, Naval Sea Systems Command, Washington Navy Yard, U.S. Navy, Naval Air Systems Command, Naval Air Station Patuxent River, Md.; U.S. Navy, Naval Air Depot, Naval Air Station North Island, Calif.; and U.S. Navy, Avondale Shipyard, Avondale, La.; U.S. Marine Corps, Systems Command, Quantico, Va.; U.S. Marine Corps, Aviation Weapons Branch, Pentagon, Washington, U.S. Marine Corps, Tank Automotive and Armaments Command, Warren, Mich.; U.S. Marine Corps, I Marine Expeditionary Force, Camp Pendleton, U.S. Marine Corps, II Marine Expeditionary Force, Camp Lejeune, N.C.; U.S. Marine Corps, Naval Research Lab, Washington, D.C.; U.S. Marine Corps, AAAV Technology Center, Woodbridge, Va.; and U.S. Marine Corps, Marine Forces Pacific, Camp Smith, Hawaii. We also obtained and reviewed relevant documents and reports from DOD and the Congressional Budget Office, and relied on related prior GAO reports. We performed our review from September 2002 through October 2003 in accordance with generally accepted government auditing standards. For the 25 equipment items, each assessment provides a snapshot in time of the status of the equipment item at the time of our review. The profile presents a general description of the equipment item. Each assessment area contains a highlighted area indicating the level of DOD, military service, and/or congressional attention each equipment item needs, in our opinion, based on our observations of each equipment item, discussions with service officials, and reviews of service-provided metrics. First delivered in the early 1980s, the Abrams is the Army’s main battle tank and destroys enemy forces using enhanced mobility and firepower. 
Variants of the Abrams include the M1, M1A1, and M1A2. The M1 has a 105 mm main gun; the M1A1 and M1A2 have a 120 mm gun, combined with a powerful turbine engine and special armor. There are 5,848 tanks in the inventory, and the estimated average age is 14 years. The M1 variant will be phased out by 2015. The M1 and M1A2 variants are being upgraded to the M1A2 Systems Enhancement Program (SEP) configuration by July 2004. We assessed the condition of the Abrams Tank as green because it consistently met its mission capable goal of 90 percent from fiscal year 1998 through fiscal year 2002. Although the Abrams met its mission capable goal, supply and maintenance operations at the unit level are a challenge because of repair parts shortages, unreliable components, inadequate test equipment, and lack of trained technicians. There are concerns that the future condition of the Abrams could deteriorate in the next 5 years due to insufficient sustainment funds. The lack of funds could result in an increase of aging tanks and maintenance requirements. We assessed the program strategy for the Abrams as green because the Army has developed a long-term strategy for upgrading and phasing out certain variants of aging tanks in its inventory. The Army's Recapitalization Program selectively implements new technology upgrades to reduce operations and support costs. Additionally, the Army is phasing out the M1A2 from its inventory by 2009, and procuring 588 M1A2 SEPs. The SEP enhances the digital command and control capabilities of the tank. The Army also developed a program for improving the Abrams M1A2 electronics called the Continuous Electronic Evolution Program, which is part of the SEP. The first phase of this program has been approved and funded. According to an Army official, the next phase is expected to start in approximately 5 years.
We assessed the funding for the Abrams as yellow because current and projected funding is not consistent with the Army's stated requirements to sustain and modernize the Abrams tank inventory. The Army reduced the recapitalization budget by more than 50 percent for the M1A2 SEP, thereby decreasing the number of upgrades from 1,174 to 588. Unfunded requirements for the Abrams tank include the vehicle integrated defense systems, safety and environmental fixes, and an improved driver's viewer system. Without adequate funding, obsolescence may become a major issue once tank production ends and procurement funds are no longer available to subsidize tank requirements. Procurement funding for the M1A2 SEP will be completed by 2003 and deliveries completed by 2004. According to an Army official, the Abrams procurement funding provides approximately 75 percent to 80 percent of the tank requirements due to commonality among the systems. While we did not have sufficient data to definitively assess the wartime capability for the Abrams, a detailed pre-war assessment prepared by the program manager's office indicated that the tank is not ready or able to sustain a long-term war. During Operation Iraqi Freedom, the Abrams tank was able to successfully maneuver, provide firepower, and protect the crew. Losses were attributed to mechanical breakdown and cannibalization. The detailed assessment by the program manager's office, however, indicated that limited funding, war reserve spare part shortages, and supply availability could impact the tank's ability to sustain a long-term war. The Apache is a multi-mission aircraft designed to perform rear, close, and deep operations; precision strikes; and armed reconnaissance and security during day, night, and adverse weather conditions. There are approximately 728 Apache helicopters in the Army's inventory—418 AH-64A models and 310 AH-64D models. The fleet average age is about 12 years.
We assessed the condition of the Apache as yellow because the Apache AH-64D model failed to meet the mission capable goal of 75 percent approximately 50 percent of the time from fiscal year 1999 through fiscal year 2002; however, according to officials, the Apache mission capable rates have consistently exceeded the 75 percent goal in calendar year 2003. Aviation safety restrictions were cited as the reason why the Apache failed to meet mission capable goals. A safety restriction pertains to any defect or hazardous condition that can cause personal injury, death, or damage to the aircraft, components, or repair parts for which a medium to high safety risk has been determined. These restrictions included problems with the (1) aircraft Teflon bushings, (2) transmission, (3) main rotor blade attaching pins, (4) generator power cables, and (5) removal, maintenance, and inspection of the Auxiliary Power Unit Takeoff Clutch. The Army's Recapitalization Program includes modifications that are intended to address these safety restrictions. We assessed the program strategy for the Apache as green because the Army has developed a long-term program strategy to sustain and upgrade the aging Apache fleet. The Army's Recapitalization Program addresses costs, reliability, and safety problems, fleet groundings, aging aircraft, and obsolescence. The Army plans to remanufacture 501 AH-64A helicopters to the AH-64D configuration. The goal is to reduce the fleet average age to 10 years by 2010, increase the unscheduled mean time between removal by 20 percent for selected components, and generate a 20 percent return on investment for the top 10 cost drivers. The Army is on schedule for fielding the Apache AH-64D. While we did not have sufficient data to definitively assess the wartime capability of the Apache, Army officials did not identify any specific concerns.
These officials indicated that the Apache successfully fulfilled its wartime missions in Afghanistan and Operation Iraqi Freedom. In Operation Iraqi Freedom, the AH-64D conducted combat operations for both close combat and standoff engagements. Every mission assigned was flown and accomplished with the Apache AH-64D. The Longbow's performance has been enhanced over the AH-64A by targeting and weapon systems upgrades. The Stryker is a highly deployable wheeled armored vehicle that employs 10 variations—the Infantry Carrier Vehicle (ICV), Mortar Carrier (MC), Reconnaissance Vehicle (RV), Commander Vehicle (CV), Medical Evacuation Vehicle (MEV), Engineer Squad Vehicle (ESV), Anti-Tank Guided Missile Vehicle (ATGM), Fire Support Vehicle (FSV), Mobile Gun System (MGS), and Nuclear, Biological, and Chemical Reconnaissance Vehicle (NBCRV). There are 600 Stryker vehicles in the Army's inventory, and the average age is less than 2 years. The Army plans to procure a total of 2,121 Stryker vehicles through fiscal year 2008. We assessed the condition of the Stryker as green because it has successfully achieved the fully mission capable goal of 95 percent, based on a 3-month average from April 2003 through July 2003. The Congress mandated that the Army compare the operational effectiveness and cost of an infantry carrier variant of the Stryker and a medium Army armored vehicle. The Army selected the M113A3 for this comparison, and the comparison shows the Stryker infantry carrier vehicle is more survivable and provides better overall performance and mobility when employed in combat operations than the M113A3. We assessed the program strategy for the Stryker as green because the Army developed a long-term program strategy for procuring a total of 2,121 vehicles through fiscal year 2008, which will satisfy the total requirement.
Of the 600 currently in the inventory, 449 are at 2 brigades—the 3rd Brigade of the 2nd Infantry Division and the 1st Brigade of the 25th Infantry Division, both of which are located at Fort Lewis, Washington. The other 151 are at fielding sites, training centers, and the Army Test and Evaluation Center. The remaining 1,521 will be procured through fiscal year 2007 with expected deliveries through fiscal year 2008. The next brigade scheduled to receive the Stryker is the 172nd Infantry Brigade at Forts Richardson and Wainwright, Alaska. The remaining Stryker Brigade Combat Teams to be equipped with the Stryker are the 2nd Cavalry Regiment, Fort Polk, Louisiana; 2nd Brigade, 25th Infantry Division, Schofield Barracks, Hawaii; and 56th Brigade of the 28th Infantry Division, Pennsylvania Army National Guard. We assessed the funding for the Stryker as green because current and projected funding is consistent with the Army's stated requirements to sustain the Stryker program. The program is fully funded to field the six Stryker brigade combat teams. Approximately $4.1 billion has been allocated for all six combat teams through fiscal year 2009. The Secretary of Defense has authorized the procurement of the first three brigades, but the fourth brigade cannot be procured until the Secretary of Defense certifies to the Congress that the results of the Operational Evaluation mandated by the Congress indicate that the design for the interim brigade combat team is operationally effective and operationally suitable. The evaluation was completed in May 2003 and results are being finalized. While we did not have sufficient data to definitively assess the wartime capability of the Stryker, the Army did not identify any specific concerns regarding the system being able to meet its wartime mission. The Stryker has not yet been used in any conflict situation.
In May 2003, GAO reported that the Army Test and Evaluation Command concluded that the Stryker provided more advantages than the M113A3 in force protection, support for dismounted assault, and close fight and mobility, and was more survivable against ballistic and non-ballistic threats. The CH-47 helicopter is a twin-engine, tandem-rotor helicopter designed for transportation of cargo, troops, and weapons. The Army inventory consists of 426 CH-47D models and 2 CH-47F models. The CH-47F Improved Cargo Helicopter is a remanufactured version of the CH-47D and includes a new digital cockpit and a modified airframe to reduce vibration. The overall average age of the CH-47 is 14 years. The Army plans to convert 76 D model aircraft to the F model between fiscal years 2005 and 2009. We assessed the condition of the Chinook as red because it consistently failed to meet the Army's mission capable goal of 75 percent from fiscal year 1998 to fiscal year 2002. Actual mission capable rates ranged from 61 percent to 69 percent. Army officials attributed the failure to meet the 75 percent mission capable goal to aging equipment, supply shortages, and inexperienced technicians. Maintaining the aircraft has become increasingly difficult, with the CH-47D failing to meet the non-mission capable maintenance goal of 15 percent; the rate increased from 27 percent in fiscal year 1998 to 31 percent in fiscal year 2002. We assessed the program strategy for the Chinook as yellow because the Army has developed a long-term strategy for upgrading and replacing the Chinook, but the strategy is not consistent with the Army's funding priorities. The plan to upgrade 279 D models to F models between fiscal year 2003 and fiscal year 2017 under the Army's Recapitalization Program has been delayed, reducing the number of CH-47F helicopters planned in the fiscal year 2004 budget by five due to unexpected funding constraints.
These budgetary constraints also delayed the Army's plans to purchase 16 engines because funding was transferred to support other non-recurring requirements. Readiness may be adversely affected if these engines are not procured because unit requisitions for them will not be filled and aircraft will not be fully mission capable. We assessed the funding for the Chinook as yellow because current and projected funding is not consistent with the Army's requirements for sustaining and upgrading the Chinook helicopter. At present, the Army has identified unfunded requirements totaling $316 million, with $77 million needed to procure the five CH-47Fs and the 16 engines for which funds had previously been diverted. The remaining $239 million would support other improvements, including a common avionics system, rotor heads, crashworthy crew seats, and engine modifications. The Army will resolve some or all of these requirements with projected funding of $3 billion to support the CH-47 program through fiscal year 2017. While we did not have sufficient data to definitively assess the wartime capability of the Chinook, Army officials indicated that it successfully fulfilled its wartime mission in Operation Iraqi Freedom despite current condition problems. These officials stated that the deployed units were able to overcome these condition problems because deployed aircraft were given a higher priority than non-deployed aircraft for spare parts. As a result, the estimated mission capable rates for deployed aircraft increased to about 86 percent during the operation. The HEMTT provides transport capabilities for re-supply of combat vehicles and weapon systems. The HEMTT's five basic configurations are the cargo truck, the load handling system, the wrecker, the tanker, and the tractor. The HEMTT entered the Army's inventory in 1982. The current inventory totals about 12,500, and the average age is 13 years.
We assessed the condition of the HEMTT as green because mission capable rates have been close to the Army's 90 percent goal, averaging 89 percent between fiscal year 1998 and fiscal year 2002. Moreover, the overall supply availability rates have exceeded the 85 percent goal from May 2002 to October 2002, ranging from 96 percent to 99 percent. In some instances, however, meeting the operational goals has been a continual challenge because of aging equipment, heavy equipment usage, and the lack of trained mechanics. The lack of trained mechanics may also impact the Army's future ability to meet the specified mission capable goals. In addition, a detailed pre-war assessment by the program manager's office indicated concerns that shortages of spare parts would significantly degrade HEMTT readiness rates. We assessed the program strategy for the HEMTT as green because the Army has developed a long-term program strategy for sustaining and modernizing the HEMTT inventory. The Army's plans include procuring 1,485 new tankers and wreckers through fiscal year 2007, which will satisfy the Army's stated requirement. The Army also plans to rebuild some of the existing vehicles through the HEMTT Extended Service Program. This program, scheduled to be complete in fiscal year 2012, will insert technology advancements and provide continuous improvements to the vehicle. Although there has been a reduction in the Army's budget for the Extended Service Program, the plan is to continue rebuilding trucks in smaller quantities and at a slower pace. The Army's Forces Command has implemented a Vehicle Readiness Enhancement Program that serves as an interim maintenance program for HEMTTs awaiting induction into the Extended Service Program. We assessed the funding for the HEMTT as yellow because current and projected funding is not consistent with the Army's stated requirements to sustain and modernize the HEMTT inventory.
Specifically, the Army has unfunded requirements of $10.5 million as of fiscal year 2003, of which $3.9 million is for spare parts and $6.6 million is for war reserves. In addition, the Army reduced the Recapitalization Program by $329 million. The Army had planned to upgrade 2,783 vehicles currently in the inventory; however, 1,365 will not be upgraded as a result of the reductions in the Recapitalization Program. Consequently, according to Army officials, maintenance and operating and support costs will likely increase. While we did not have sufficient data to definitively assess the wartime capability for the HEMTT, Army officials indicated that it has successfully fulfilled its wartime requirements during recent combat operations. Based on the program manager’s preliminary observations, the HEMTT performed successfully during Operation Iraqi Freedom. A detailed pre-war assessment by the program manager’s office indicated that the HEMTT was ready for war, but could experience sustainment problems due to a shortage of war reserve spare parts. The program manager’s office is currently assessing the condition of the active and war reserve equipment used in Operation Iraqi Freedom. The PAC-3 missile is considered a major upgrade to the Patriot system. Sixteen PAC-3 missiles can be loaded on a launcher versus four PAC-2 missiles. The Army plans to buy 2,200 PAC-3 missiles. The Army had a current inventory of 88 PAC-3 missiles as of July 2003. The average age of the PAC-3 missile is less than 1 year. We assessed the condition of the PAC-3 missile as green because approximately 89 percent of the missiles in the inventory were ready for use as of July 2003. Specifically, of the 88 PAC-3 missiles currently in the inventory, 78 were ready for use and 10 were not. In addition, the Army has not experienced any chronic or persistent problems during production. 
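The readiness figure cited above for the PAC-3 inventory can be verified with simple arithmetic. The following sketch uses only the figures from this report; the helper function name is ours, not the Army's.

```python
# Back-of-the-envelope check of the PAC-3 readiness figure cited above.
# Inventory numbers are from the report (July 2003); names are ours.

def readiness_rate(ready: int, total: int) -> float:
    """Return the percentage of the inventory that is ready for use."""
    return 100.0 * ready / total

total_inventory = 88   # PAC-3 missiles in inventory as of July 2003
ready_for_use = 78     # missiles ready for use (10 were not)

rate = readiness_rate(ready_for_use, total_inventory)
print(f"{rate:.0f}% ready")  # rounds to 89%, matching the report
```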
The PAC-3 missile completed operational testing and was approved for full production of 208 missiles in 2003 and 2004. We assessed the program strategy for the PAC-3 missile as green because the Army has developed a long-term strategy for sustaining the PAC-3 inventory, including procurement of 2,200 missiles that will satisfy the total requirement. The Army plans to purchase 1,159 PAC-3 missiles through fiscal year 2009. The remaining 1,041 missiles will be procured after fiscal year 2009. During low-rate initial production, the Army procured 164 PAC-3 missiles from 1998 to 2002 at a cost of $1.7 billion. The Army has completed low-rate initial production and has been granted approval for full production of 208 PAC-3 missiles beginning in fiscal year 2003, at a total estimated cost of $714 million. We assessed the funding for the PAC-3 missile as green primarily because current and projected funding is consistent with the Army's stated requirements to sustain the PAC-3 inventory. The program manager's office has not identified any funding shortfalls for the missile. Funding has been approved for the production of 1,159 PAC-3 missiles through fiscal year 2009 at an average production rate of nearly 100 missiles per year. The total production cost of the 1,159 PAC-3 missiles equates to $4.3 billion. The remaining 1,041 missiles will be procured after fiscal year 2009. While we did not have sufficient data to definitively assess the wartime capability of the PAC-3 missile, Army officials indicated that it successfully fulfilled its wartime mission during Operation Iraqi Freedom, hitting enemy targets within two missile shots. The PAC-3 has also completed the operational testing phase and has been approved for full production. The Guided Multiple Launch Rocket System Dual Purpose Improved Conventional Munition (GMLRS-DPICM) is an essential component of the Army's transformation.
It upgrades the M26 series MLRS rocket and is expected to serve as the baseline for all future Objective Force rocket munitions. The Army plans to procure a total of 140,004 GMLRS rockets. There are currently no GMLRS rockets in inventory, but the system was approved in March 2003 to enter low-rate initial production of 108 rockets. We assessed the condition of the GMLRS as green because the system demonstrated acceptable performance during the System Development and Demonstration phase and was approved to enter low-rate initial production in March 2003. We assessed the program strategy for the GMLRS as green because the Army has developed a long-term program strategy for sustaining the GMLRS inventory, including procurement of a total of 140,004 rockets that will satisfy the total requirement. Of this total, the Army plans to procure 18,582 rockets by fiscal year 2009. The remaining 121,422 will be procured after fiscal year 2009. The Army approved low-rate initial production for a total of 1,920 rockets through fiscal year 2005. The initial operational capability date is scheduled for the 2nd quarter of fiscal year 2006. The Army has also planned a product improvement to the GMLRS-DPICM called the GMLRS-Unitary. This improvement is in the concept development phase and is scheduled to begin a spiral System Development and Demonstration phase. The Army has not decided how many of the 1,920 initial production rockets will include the guided unitary upgrade. We assessed the funding for the GMLRS as green because current and projected funding is consistent with the Army's stated requirements to sustain the GMLRS munitions program. The GMLRS program is fully funded and properly phased for rapid acquisition. The Army plans to purchase a total of 140,004 GMLRS rockets for $11.7 billion. Of the 140,004 GMLRS rockets, the Army plans to procure 18,582 through fiscal year 2009 for $1.7 billion. The remaining 121,422 rockets will cost the Army approximately $10 billion.
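The GMLRS procurement quantities and costs above are internally consistent, as a quick arithmetic check shows. The figures below come from this report; the variable names and the implied unit-cost calculation are ours.

```python
# Sanity check of the GMLRS procurement quantities and costs cited above.
# All figures are from the report; variable names are ours.

through_fy2009 = 18_582    # rockets planned through fiscal year 2009
after_fy2009 = 121_422     # rockets planned after fiscal year 2009
total_planned = 140_004    # total program requirement

# The two tranches sum to the stated total requirement.
assert through_fy2009 + after_fy2009 == total_planned

total_cost = 11.7e9        # dollars for the whole buy, per the report

# Implied average unit cost over the whole program:
unit_cost = total_cost / total_planned
print(f"~${unit_cost:,.0f} per rocket")  # roughly $84,000 per rocket
```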
In March 2003, the system met all modified low-rate initial production criteria to enter the first phase to produce 108 rockets for $36.6 million. Phases II and III will procure the remaining 1,812 rockets during fiscal year 2004 (786 rockets) and fiscal year 2005 (1,026 rockets) for $220.4 million. While we did not have sufficient data to definitively assess the wartime capability of the GMLRS, Army officials did not identify any specific capability concerns. The GMLRS-DPICM is expected to achieve greater range and precision accuracy. The upgraded improvement will reduce the number of rockets required to defeat targets out to 60 kilometers or greater and reduce collateral damage. It is also expected to reduce hazardous duds to less than 1 percent. The F-16 is a compact, multi-role fighter with air-to-air combat and air-to-surface attack capabilities. The first operational F-16A was delivered in January 1979. The Air Force currently has 1,381 F-16 aircraft in its inventory, and the average age is about 15 years. The F-16B is a two-seat, tandem-cockpit aircraft. The F-16C and D models are the counterparts to the F-16A/B and incorporate the latest technology. Active units and many reserve units have converted to the F-16C/D. The Air Force plans to begin replacing the F-16 with the F-35 Joint Strike Fighter in 2012. We assessed the condition of the F-16 as green because mission capable rates have been near the current goal of 83 percent, with rates for all of the Air Force's Air Combat Command (ACC) F-16s ranging from 75 percent to 79 percent during the past 5 years. Although these rates are below the goal, officials said they were sufficient to provide flying hours for pilot training and to meet operational requirements. In fiscal year 2002, the planned utilization rate (i.e., the average number of sorties per aircraft per month) for ACC aircraft was 17.5 sorties per month, and actual utilization was 17.7 sorties.
Although the average age of the F-16 is about 15 years, there are no material deficiencies that would limit its effectiveness and reliability. Known and potential structural problems associated with aging and accumulated flying hours are being addressed through ongoing depot maintenance programs. We assessed the program strategy for the F-16 as green because the Air Force has developed a long-term program strategy for sustaining and replacing the F-16 inventory. The program should ensure that the aircraft remains a viable and capable weapon system throughout the Future Years Defense Plan (FYDP). Subsequently, the Air Force intends to begin replacing the F-16 with the Joint Strike Fighter (F-35), which is already in development. We assessed the funding for the F-16 as yellow because current and projected funding is not consistent with the Air Force's stated requirements to sustain and replace the F-16 inventory. There are potential shortfalls in the funding for depot maintenance programs and modifications during the next 3 to 5 years. Although funding has been programmed for this work, unexpected increases in depot labor rates have been significant, and additional funding may be required to complete the work. For fiscal year 2004, the Air Force included $13.5 million for the F-16 in its Unfunded Priority List. While we did not have sufficient data to definitively assess the wartime capability of the F-16, the aircraft has successfully fulfilled its recent wartime missions. F-16 fighters were deployed to the Persian Gulf in 1991 in support of Operation Desert Storm and flew more sorties than any other aircraft. The F-16 has also been a major player in peacekeeping operations, including in the Balkans since 1993. Since the terrorist attacks in September 2001, F-16s have comprised the bulk of the fighter force protecting the skies over the United States in Operation Noble Eagle.
More recently, F-16s played a major role in Afghanistan in Operation Enduring Freedom, and have performed well in combat in Operation Iraqi Freedom, in which the F-16 once again provided precision-guided strike capabilities and suppression of enemy air defenses. During Operation Iraqi Freedom, the Air Force deployed over 130 F-16s that contributed significantly to the approximately 8,800 sorties flown by Air Force fighter aircraft. The B-2 is a multi-role heavy bomber with stealth characteristics, capable of employing nuclear and conventional weapons. The aircraft was produced in limited numbers to provide a low observable (i.e., stealth) capability to complement the B-1 and B-52 bombers. Its unique stealth capability enables the aircraft to penetrate air defenses. The Air Force currently has 21 B-2 aircraft in its inventory, and the average age is about 9 years. The first B-2 was deployed in December 1993, and currently all B-2s in the inventory are configured with an enhanced terrain-following capability and the ability to deliver the Joint Direct Attack Munition and the Joint Stand Off Weapon. We assessed the condition of the B-2 as yellow because the B-2 did not meet its mission capable goal of 50 percent. Officials said that the aircraft itself is in good condition, but it is the maintainability of its stealth characteristics that is driving the low mission capable rates. Officials pointed out that despite low mission capable rates the B-2 has been able to meet requirements for combat readiness training and wartime missions. For example, four B-2 aircraft were deployed and used during Operation Iraqi Freedom, and maintained a mission capable rate of 85 percent. Mission capable rates have improved slightly, and officials said that recent innovations in low observable maintenance technology and planned modifications are expected to foster additional improvement. 
We assessed the program strategy for the B-2 as green because the Air Force has developed a long-term program strategy for sustaining the B-2 inventory. Program plans appear to ensure the viability of this system through the Future Years Defense Plan. Procurement of this aircraft is complete. The Air Force plans to maintain and improve its capabilities, ensuring that the B-2 remains the primary platform in long-range combat aviation. We assessed the funding for the B-2 as green because current and projected funding is consistent with the Air Force’s stated requirements to sustain the B-2 inventory. The programmed funding should allow execution of the program strategy to sustain, maintain, and modify the system through the Future Years Defense Plan. The B-2 is of special interest to the Congress, which requires an annual report on this system, including a schedule of funding requirements through the Future Years Defense Plan. No items specific to the B-2 were included in the Air Force’s fiscal year 2004 Unfunded Priority List. While we did not have sufficient data to definitively assess the wartime capability for the B-2, the aircraft has successfully fulfilled its wartime missions despite current condition weaknesses. The Air Force demonstrated the aircraft’s long-range strike capability by launching missions from the United States, striking targets in Afghanistan, and returning to the States. More recently, the Air Force deployed four B-2 aircraft to support Operation Iraqi Freedom, where they contributed to the 505 sorties flown by bombers during the conflict. The B-2 Annual Report to the Congress states that the B-2 program plan will ensure that the B-2 remains the primary platform in long-range combat aviation. The C-5 Galaxy is the largest of the Air Force’s air transport aircraft, and one of the world’s largest aircraft. It can carry large cargo items over intercontinental ranges at jet speeds and can take off and land in relatively short distances. 
It provides a unique capability in that it is the only aircraft that can carry certain Army weapon systems, main battle tanks, infantry vehicles, or helicopters. The C-5 can carry any piece of Army combat equipment, including a 74-ton mobile bridge. With aerial refueling, the aircraft's range is limited only by crew endurance. The first C-5A was delivered in 1969. The Air Force currently has 126 C-5 aircraft in its inventory, and the average age is about 26 years. We assessed the condition of the C-5 as yellow because it has consistently failed to meet its mission capable goal of 75 percent; however, mission capable rates have been steadily improving and, in April 2003, active duty C-5s exceeded the goal for the first time. Program officials pointed out that, although the total fleet has never achieved the 75 percent goal, there has been considerable improvement over time, with the rate rising from about 42 percent in 1971 to about 71 percent in 2003. The Air Force Scientific Advisory Board has estimated that 80 percent of the airframe structural service life remains. Furthermore, Air Force officials said that the two major modification programs planned, the avionics modernization program and the reliability enhancement and re-engining program, should significantly improve mission capable rates. We assessed the program strategy for the C-5 as green because the Air Force has developed a long-term program strategy for sustaining and modernizing the aging C-5 inventory. The Air Force has planned a two-phase modernization program through the Future Years Defense Plan that is expected to increase the aircraft's mission capability and reliability. The Air Force plans to modernize the C-5 to improve aircraft reliability and maintainability, maintain structural and system integrity, reduce costs, and increase operational capability.
Air Force officials stated that the C-5 is expected to continue in service until about 2040 and that, with the planned modifications, the aircraft could last until then. In an effort to meet strategic airlift requirements, the Air Force has contracted to buy 180 C-17s, will retire 14 C-5s by fiscal year 2005, and may retire additional aircraft as more C-17s are acquired. We assessed the funding for the C-5 as yellow because current and projected funding is not consistent with the Air Force's stated requirements to sustain and modernize the aging C-5 inventory. According to officials, the program lost production funding because of problems during the early stage of the program. Currently, 49 aircraft are funded for the avionics program through the Future Years Defense Plan. For fiscal year 2004, the Air Force included $39.4 million in its Unfunded Priority List to restore the program to its prior timeline. While we did not have sufficient data to definitively assess the wartime capability of the C-5, Air Force officials indicated that the aircraft has successfully fulfilled its recent wartime missions. The Air Force has not noted any factors or capability concerns that would prevent the C-5 from effectively performing its wartime mission. The KC-135 is one of the oldest airframes in the Air Force's inventory and represents 90 percent of the tanker fleet. Its primary mission is air refueling, and it supports Air Force, Navy, Marine Corps, and allied aircraft. The first KC-135 was delivered in June 1957. The original A models have been re-engined, modified, and designated as E, R, or T models. The E models are located in the Air Force Reserve and Air National Guard. The total inventory of KC-135 aircraft is 543, and the average age is about 43 years. We assessed the condition of the KC-135 as yellow because, although it has maintained mission capable rates at or near the 85 percent goal, the aircraft's age raises concerns about potential corrosion of its structural components.
Although the aircraft is about 43 years old, its average flying hours are slightly over a third of its expected life of 39,000 hours, and an Air Force study projected that the KC-135 would last until about 2040. All KC-135s have been subjected to an aggressive corrosion prevention program and have undergone significant modifications, including replacement of the cockpit. Nevertheless, citing increases in the work needed during periodic depot maintenance, rising costs, and the risk of the entire fleet being grounded, the Air Force decided to accelerate recapitalization from 2013 to about 2006. We assessed the program strategy for the KC-135 as red because, although the Air Force has developed a long-term program strategy to modernize the aging KC-135 tanker fleet, it has not demonstrated the urgency of acquiring replacement aircraft and has not defined the requirements for the number of aircraft that will be needed. As we stated in testimony before the House Committee on Armed Services, Subcommittee on Projection Forces, the department does not have a current, validated study on which to base the size and composition of either the current fleet or a future aerial refueling force. The Air Force has a large fleet of KC-135s (about 543), which were flown about 300 hours annually between 1995 and September 2001. Since then, utilization has been about 435 hours per year. Furthermore, the Air Force has a shortage of aircrews to fly the aircraft it has. In Operation Iraqi Freedom, a relatively small part of the fleet (149 aircraft) was used to support the conflict. Without a definitive analysis, it is difficult to determine whether recapitalization is needed and what alternatives might best satisfy the requirement. We assessed the funding of the KC-135 as red because current and future funding is not consistent with the Air Force's stated requirements to sustain and modernize the KC-135 tanker fleet. The Air Force has not addressed recapitalization funding in the current defense budget or in the Future Years Defense Plan.
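An hours-only projection illustrates why flying hours alone do not drive the recapitalization decision. The sketch below assumes roughly 13,500 hours flown ("slightly over a third" of the 39,000-hour expected life) and the post-2001 utilization rate cited above; both the hours-flown estimate and the assumption that wear is driven solely by flying hours are ours.

```python
# Rough hours-only projection of remaining KC-135 airframe life.
# Assumptions (ours): ~13,500 hours flown so far, wear driven only by
# flying hours. Utilization figure is from the report.

expected_life_hours = 39_000
hours_flown = 13_500          # "slightly over a third" of expected life
annual_utilization = 435      # hours per year since September 2001

remaining_hours = expected_life_hours - hours_flown
years_remaining = remaining_hours / annual_utilization
print(f"~{years_remaining:.0f} more years at current utilization")  # about 59
```

On an hours-only basis the fleet would last well past the Air Force's 2040 projection, which is consistent with the report's point that depot workload, costs, and corrosion risk, not flying hours, are what motivated accelerating recapitalization.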
The Air Force plans to begin acquiring new aircraft almost immediately, but does not want to divert funding from other programs to pay for them. The Air Force proposed a unique leasing arrangement with Boeing that will provide new tankers as early as 2006. There remains controversy over the lease terms, aircraft pricing, and how the Air Force will pay for the lease. While we did not have sufficient data to definitively assess the wartime capability of the KC-135, Air Force officials indicated that the aircraft has successfully fulfilled its recent wartime missions despite current condition problems. The KC-135 comprised 149 of the 182 tanker aircraft the Air Force used during Operation Iraqi Freedom, and those aircraft flew almost 6,200 sorties and offloaded over 376 million pounds of fuel. The KC-135 maintained a mission capable rate above the current goal of 85 percent during Operation Iraqi Freedom. The CALCM is an accurate long-range standoff weapon with an adverse weather, day/night, and air-to-surface capability. It employs a global positioning system coupled with an inertial navigation system. It was developed to improve the effectiveness of the B-52 bombers and became operational in January 1991. Since initial deployment, an upgraded avionics package, including a larger conventional payload and a multi-channel global positioning system receiver, has been added on all of the missiles. The CALCM total inventory is about 478, and the average age is about 15 years. We assessed the condition of the CALCM as green because the CALCM has demonstrated high reliability. The Air Force has not noted any chronic factors or problems that limit the effectiveness or reliability of the missile. However, according to officials, the diagnostics test equipment needs to be upgraded because it is old and was designed to support less sophisticated missiles. Currently, the Air Force uses the same test equipment for both the conventional and nuclear weapons. 
We assessed the program strategy for the CALCM as green because the Air Force has a long-term program strategy for sustaining and modernizing its current inventory of cruise missiles. The Air Force does not have any future plans to convert or purchase additional nuclear missiles. The Joint Chiefs of Staff must authorize the use of the conventional weapons and approve the program in order to procure additional missiles. As the inventory is depleted, the conventional weapon will be replaced with other systems with similar capabilities, such as the Joint Air-to-Surface Standoff Missile, which is currently under development. The Joint Air-to-Surface Standoff Missile will not be a one-for-one replacement for the conventional missile. We assessed the funding for the CALCM as green because current and projected funding is consistent with the Air Force's stated requirements to sustain and modernize its cruise missile inventory. Procurement of the cruise missile is complete, and no funding has been provided for research and development or procurement in the fiscal year 2003 budget. While we did not have sufficient data to definitively assess the wartime capability of the CALCM, Air Force officials indicated that it successfully fulfilled its recent wartime missions. These officials indicated that the cruise missile played a significant role in the initial strikes during Operation Iraqi Freedom, in which 153 missiles were expended and the version designed to penetrate hard targets was employed for the first time. The Joint Direct Attack Munition (JDAM) is a guidance tail kit that converts existing unguided bombs into accurate, all-weather "smart" munitions. This is a joint Air Force and Navy program to upgrade the existing inventory of 2,000- and 1,000-pound general-purpose bombs by integrating them with a guidance kit consisting of a global positioning system-aided inertial navigation system.
In its most accurate mode, the system will provide a weapon circular error probable of 13 meters or less. The JDAM first entered the inventory in 1998. The total projected inventory of the JDAM is about 92,679, and the current average age is less than 5 years. Future upgrades will provide 3-meter precision and improved anti-jamming capability. We assessed the condition of the JDAM as green because it has consistently met its reliability goal of 95 percent. The munitions are used as they become available; therefore, no maintenance is involved. Although the Air Force does not monitor the condition of the munitions, it tracks each component of the guidance kit for serviceability. The kit is under a 20-year warranty. The munitions are purchased serviceable and are tested before being used by the operational units. In addition to high reliability, the JDAM can be purchased at low cost, and the kits are being delivered more than three times as fast as planned. We assessed the program strategy for the JDAM as green because the Air Force has a long-term program strategy for sustaining and maintaining production of the munitions. Joint Direct Attack Munition requirements are driven by assessments of war readiness and training requirements. Currently, Boeing is in full production and is increasing its output to about 2,800 kits per month for the Air Force and Navy, an increase from approximately 700–900 a month. The second production line is up and running. We assessed the funding for the JDAM as green because current and projected funding is consistent with the Air Force's stated requirements to sustain and maintain production of the munitions. The President's fiscal year 2003 budget provided funding for the procurement of the system through the Future Years Defense Plan. Air Force officials stated that the program has all the funding it needs; however, it is limited by the production capability of its contractor, Boeing.
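The scale of the JDAM production ramp-up can be checked with simple arithmetic. The monthly rates below are from this report; the variable names are ours.

```python
# Quick check of the JDAM production ramp-up cited above.
# Monthly production figures are from the report; names are ours.

old_rate_low, old_rate_high = 700, 900   # kits per month before the ramp-up
new_rate = 2_800                         # kits per month at full production

# The increase is roughly three- to fourfold over the earlier range.
print(f"{new_rate / old_rate_high:.1f}x to {new_rate / old_rate_low:.1f}x")
```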
While we did not have sufficient data to definitively assess the wartime capability of the JDAM, Air Force officials indicated that it has successfully fulfilled its recent wartime missions. The weapon system played a role in operations in Kosovo, Afghanistan, and Iraq. According to the Air Force, the weapon has proven operationally to be more accurate, reliable, and effective than predicted. The Air Force has not noted any factors or capability concerns that would prevent the Joint Direct Attack Munition from effectively fulfilling its wartime mission. Navy destroyers are multi-mission combatants that operate offensively and defensively, independently or as part of carrier battle groups, surface action groups, and in support of Marine amphibious task forces. This is a 62-ship construction program, with 39 in the fleet as of 2003. The average age of the ships is 5.8 years, with the Arleigh Burke (DDG-51) having come into service in 1991. The follow-on program is the DD(X), with initial construction funding in 2005 and delivery beginning in 2011. We assessed the condition of the DDG-51 as yellow because work programmed for scheduled maintenance periods is often not accomplished. Because of budget limitations for each ship's dry-dock period and a Navy effort to level port workloads and provide stability in the industrial base, maintenance items are often cut from the planned work package during dry-dock periods. Those items are then deferred to the next scheduled docking or accomplished as possible during the ship's continuous maintenance phase. Deferring maintenance aggravates corrosion issues, particularly for the ship's hull. Engineering and combat systems have priority for resources, with desirable, though not necessarily essential, crew quality-of-life improvements deferred to a later time.
The Navy balances risk between available resources and deferring maintenance to make the most cost-effective decision and ensure ships deploy without or with minimal safety or combat system deficiencies. We assessed the program strategy for the DDG-51 as yellow because the Navy has developed a long-term program strategy for sustaining and upgrading the DDG-51 fleet; however, budget cuts in the Navy’s shipbuilding program affect upgrades to the warfighting systems and may lead to potential problems in the industrial base when transitioning from DDG to DD(X) ships. Navy officials noted that these budget cuts prevent them from buying the latest available technologies. These technologies are usually in warfighting systems, such as command and control and system integration areas. Management of the transition period from DDG to DD(X) shipbuilding between 2005 and 2008 will be key to avoiding problems from major fluctuations in the workload and workforce requirements. We assessed the funding for the DDG-51 as yellow because current and projected funding is not consistent with the Navy’s stated requirements to sustain and upgrade the DDG-51 fleet. Lack of multiyear budget authority creates budget inefficiencies because the Navy is required to spend supplemental and 1-year funds within the year in which they are appropriated. The Navy attempts to reduce ship maintenance costs by leveling the maintenance workload for ship contractors, which provides the Navy and contractors greater flexibility and predictability. The lack of multiyear budgeting and the need to spend supplemental and 1-year funds in the current year limits that effort. Ports are not equipped or manned to accomplish the volume of work required in the time-span necessary to execute 1-year appropriations.
In some cases, differences between the Navy estimate of scheduled maintenance costs and the contractor bid to do the work require cuts to the ship’s planned work package, further contributing to the deferred maintenance backlog. While we did not have sufficient data to definitively assess the wartime capability for the DDG-51, Navy officials raised a number of capability concerns. Specifically, these officials indicated that the DDG-51 has successfully fulfilled its recent wartime mission, but with some limitations such as communications shortfalls and force protection issues. Although the DDG-51 class is the newest ship in the fleet with the most up-to-date technologies, fleet officers said there is insufficient bandwidth for communications during operations. Navy officials cited effective management of available communications assets rather than the amount of available bandwidth as the more immediate challenge. In the current threat environment, force protection issues remain unresolved. The use of the Rigid Hull Inflatable Boat (RHIB) during operations at sea without on-board crew-served weapons and hardening protection concerns commanders. The small caliber of sailors’ personal and crew-served weapons limits their effectiveness against the immediate and close-in threat from small boat attack. Navy FFG-7 Frigates are surface combatants with anti-submarine warfare (ASW) and anti-air warfare (AAW) capabilities. Frigates conduct escort for amphibious expeditionary forces, protection of shipping, maritime interdiction, and homeland defense missions. There are 32 FFGs in the fleet, with 30 programmed for modernization. The average age of the fleet is 19 years. The FFGs are expected to remain in the fleet until 2020. We assessed the condition of the FFG-7 as yellow because work programmed for scheduled maintenance periods is often not accomplished.
Because of budget limitations for each ship’s dry-dock period and a Navy effort to level port workloads and provide stability in the industrial base, maintenance items are often cut from the planned work package during dry-dock periods. These items are then deferred to the next scheduled docking or accomplished as possible in the ship’s continuous maintenance phase. Deferring maintenance worsens corrosion problems, particularly on the ship’s hull. Engineering and combat systems have priority for resources with desirable, though not necessarily essential, crew quality of life improvements deferred to a later time. The Navy balances risk between available resources and deferring maintenance to make the most cost-effective decision and ensure ships deploy without or with minimal safety or combat system deficiencies. There is the additional burden of maintaining older systems on the frigates. We assessed the program strategy for the FFG-7 as yellow because the Navy has developed a long-term program strategy for sustaining and modernizing the FFG-7 fleet; however, the program is susceptible to budget cuts. The modernization program is essential to ensure the frigates’ continued viability. There is also uncertainty about the role frigates will play as the Littoral Combat Ship is developed. We assessed the funding for the FFG-7 as yellow because current and projected funding is not consistent with the Navy’s stated requirements to sustain and modernize the FFG-7 fleet. Funding suffers from uncertainty about the modernization program and from budget inefficiencies created by the lack of multiyear budget authority and the requirement to spend supplemental and 1-year funds in the year they are appropriated. The Navy attempts to reduce ship maintenance costs by leveling the maintenance workload for ship contractors, which provides the Navy and contractors greater flexibility and predictability.
The lack of multiyear budget authority and the need to spend supplemental and 1-year funds in the current year in which they are appropriated limits that effort. Ports are not equipped or manned to accomplish the volume of work required in the time span necessary to execute 1-year appropriations. In some cases, differences between the Navy estimate of scheduled maintenance costs and the contractor bid to do the work require cuts to the ship’s planned work package, further contributing to the deferred maintenance backlog. While we did not have sufficient data to definitively assess the wartime capability of the FFG-7, Navy officials identified a number of capability concerns, including communications shortfalls and potential vulnerabilities to air warfare. The frigate’s ability to operate in a battle group environment is limited by insufficient bandwidth and lack of command circuits for communications requirements. The Navy shut down the frigate’s missile launcher because of excessive maintenance costs. Ship commanders in the fleet expressed concern about potentially deploying with only one of three compensating systems for anti-air warfare missions, the on-board 76-mm rapid-fire gun (CIWS-1B, Close-In Weapons System). Officials in the program manager’s office stated fielding plans were in place for the other two systems, the MK53 Decoy Launch System, called NULKA, and the Rolling Airframe Missile (RAM). These systems will help mitigate the frigate’s vulnerability after shutting down the missile launcher. The frigate’s value to surface groups operating independently of carriers is as a helicopter platform. The F/A-18 is an all-weather fighter and attack aircraft expected to fly in the fleet to 2030. There are six models in the current inventory of 875: A, 178; B, 30; C, 405; D, 143; E, 55; and F, 64. Average age in years is: A, 16.4; B, 18.0; C, 10.6; D, 10.1; E, 1.7; and F, 1.5. The Navy plans to eventually replace the F/A-18 with the Joint Strike Fighter.
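The model-by-model counts and ages listed above imply a fleet-wide average age. The quick check below is our own arithmetic, not a figure stated in the assessment.

```python
# Inventory counts and average ages by F/A-18 model, as listed above.
fleet = {
    "A": (178, 16.4), "B": (30, 18.0), "C": (405, 10.6),
    "D": (143, 10.1), "E": (55, 1.7),  "F": (64, 1.5),
}

total = sum(count for count, _ in fleet.values())
avg_age = sum(count * age for count, age in fleet.values()) / total
print(total)              # 875 aircraft, matching the stated inventory
print(round(avg_age, 1))  # roughly 10.7 years, weighted by model count
```

The young E/F models pull the weighted average well below the 16- to 18-year ages of the A and B models.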
We assessed the condition of the F/A-18 as yellow because it consistently failed to meet mission capable and fully mission capable goals of 75 percent and 58 percent, respectively. Squadrons that are deployed or are training for deployment generally exceed these goals. Maintaining the aircraft is increasingly difficult because of personnel shortfalls, increased flying requirements, and lack of ground support equipment. Navy depot personnel indicated that the availability of spare parts remains the largest issue in repairing and returning aircraft to the fleet. We assessed the program strategy for the F/A-18 as yellow because the Navy has developed a long-term program strategy for sustaining and maintaining the F/A-18 fleet; however, it lacks a common baseline capability for all aircraft. Navy officials stated managing the configuration of the various versions of the aircraft is challenging. Each version of the aircraft has different repair parts, unique on-board equipment, and specially trained maintainers and pilots. To increase the service life of the aircraft, the Navy initiated the Center Barrel Replacement (CBR) program. CBR replaces those parts of the F/A-18 fuselage that have the greatest stress placed on them from landing on aircraft carriers. The Navy is also initiating a Navy/Marine Tactical Air Integration program that combines low flying-hour / low carrier-landing aircraft for carrier use and high flying-hour / high carrier-landing aircraft for shore basing. If CBR is adequately funded and the Tactical Air Integration initiative proceeds, the F/A-18 will remain a viable system into the future. We assessed the funding for the F/A-18 as yellow because current and projected funding is not consistent with the Navy’s stated requirements to sustain and maintain the F/A-18 fleet. The Navy intends to fly the F/A-18A-D models until 2020 and the E/F models to at least 2030.
Funding for ground support equipment for the A–D models was eliminated, leaving operators and program managers to find resources elsewhere. Program dollars are often drawn back, pushing modernization to the out years. This is a problem for the CBR program, which is $72 million short in the current Future Years Defense Plan. Navy personnel state that the CBR program must be fully funded to meet the number of aircraft required to support the Tactical Air Integration initiative and standards in the new Fleet Response Plan. While we did not have sufficient data to definitively assess the wartime capability for the F/A-18, Navy officials indicated that the aircraft has successfully fulfilled its wartime missions despite current condition problems. The A-D models, along with the E/F models coming into the inventory, provide a multi-capable aircraft for the many roles the war fighting commanders require. These multi-role capabilities were demonstrated during Operation Iraqi Freedom with the F/A-18 performing air, ground attack, and refueling missions. Navy officials stated that they will do whatever is necessary to accomplish the mission, but raised concerns that maintenance costs are increasing due to current condition problems. Specifically, these officials stated that increased maintenance man-hours per aircraft sortie, increased cannibalization rates, and decreased readiness rates are creating more stress on the aircraft and the personnel who fly and maintain them. The EA-6B is an integrated electronic warfare aircraft system combining long-range, all-weather capabilities with advanced electronic countermeasures. Its primary mission is to support strike aircraft and ground troops by jamming enemy radar, data links, and communications. The current inventory is 121 with an average age of 20.7 years. The follow-on aircraft is the E/A-18G Growler Airborne Electronic Attack aircraft, a variant of the F/A-18 E/F.
We assessed the condition of the EA-6B as yellow because it consistently failed to meet the mission capable goal of 73 percent. However, squadrons training for deployment or those that are deployed generally exceed this goal. Fatigue life expenditure (FLE), the predictable rate of wear and deterioration of wing center sections and outside wing panels, is a critical problem and has caused aircraft to be temporarily grounded or placed under flying restrictions to mitigate risk to the aircraft. Wing center sections are the part of the plane where the wings and fuselage attach. Outer wing panels are the part of the wing that folds up when the plane is onboard carriers. The Navy is aggressively managing the problem and has programs in place to replace these items in the near term. We assessed the program strategy for the EA-6B as yellow because the Navy has developed a long-term program strategy for upgrading the EA-6B fleet; however, aircraft capability requirements may not be met in the future. The Improved Capability 3rd Generation (ICAPIII) upgrade is a significant technology leap in jamming capabilities over the current second-generation capability. ICAPIII will counter threats through 2015 and provides an advanced jamming capability, accurate target location, and full circle coverage. By 2007, 30 percent of the fleet will be ICAPIII equipped. The Navy plans for the follow-on EA-18G Growler to join the fleet between 2008 and 2012. The Navy purchase plan calls for 90 aircraft with over two-thirds (65 aircraft) procured by 2009. We assessed the funding for the EA-6B as red because current and projected funding is not consistent with the Navy’s stated requirements to sustain and upgrade the EA-6B fleet. The Navy relies upon additional congressional appropriations rather than requesting funds to meet program requirements. In fiscal year 2003, the Congress appropriated an additional 17 percent ($40 million) over DOD’s request for the EA-6B.
The Navy is not funding modernization programs to the stated requirements. The Navy’s requirement for the ICAPIII electronic attack upgrade is 42 systems, although the Navy is only funding 35 systems. According to the program manager, funding for replacing the EA-6B’s outside wing panels is still uncertain. While we did not have sufficient data to definitively assess the wartime capability for the EA-6B, Navy officials indicated that the aircraft has successfully fulfilled its wartime missions with some limitations. Potential funding shortfalls and capability limitations may affect the aircraft’s ability to perform its mission. Only 98 out of 108 aircraft in the Navy’s EA-6B inventory are available to the fleet. Current EA-6B capabilities can meet the threat, although without an increase in the number of ICAPIII capable aircraft, the Navy may not be able to meet future threats. According to Navy officials, there is an impending severe impact on warfighting capabilities if the Navy does not receive fiscal year 2003 procurement funding for outside wing panels as requested. Specifically, the combination of the expected wear and tear on the panels and the normal aircraft attrition rate could reduce the total EA-6B inventory by 16 in 2005. The LPD-4 ships are warships that embark, transport, and land elements of a Marine landing force and its equipment. There are currently 11 in the inventory with an average age of 35 years. These ships are expected to remain in the fleet until 2014. The San Antonio-class LPD-17 (12-ship construction program, LPD-17 through LPD-28) will eventually replace the LPD-4. We assessed the condition of the LPD-4 as yellow because work programmed for scheduled maintenance periods is often not accomplished. Because of budget limitations for each ship’s dry-dock period and a Navy effort to level port workloads and provide stability in the industrial base, maintenance items are often cut from the planned work package during dry-dock periods. 
These items are then deferred to the next scheduled docking or accomplished as possible in the ship’s continuous maintenance phase. Deferring maintenance increases corrosion problems, particularly for the ship’s hull. There are consistent problems with the engagement system for on-board weapons and the hull, mechanical, and electrical (HM&E) systems associated with the ship’s combat support system. The age of the LPD-4 fleet directly contributes to the deteriorating condition of the ships, particularly the hydraulic systems. The Navy balances risk between available resources and deferring maintenance to make the most cost-effective decision and ensure ships deploy without or with minimal safety or combat system deficiencies. We assessed the program strategy for the LPD-4 as green because the Navy has developed a long-term program strategy to sustain and replace amphibious dock ships and improve support to Marine amphibious forces. The Extended Sustainment Program was initiated because of delays in delivery of the new LPD-17 class ships. The program will extend the service life of 6 of 11 ships for an average of 7.3 years to the 2009–2014 time frame. The program consists of 37 prioritized work items endorsed by the Navy. The follow-on LPD-17 ship construction program incorporates innovative design and total ownership cost initiatives; however, no modernization or upgrades are planned in the construction timeline from 1999 to 2013. We assessed the funding for the LPD-4 as yellow because current and projected funding is not consistent with the Navy’s stated requirements to sustain and replace amphibious dock ships. The age and decommissioning schedule for the ships mean funding priorities are placed elsewhere. The Navy is seeking cost savings through efforts to level the industrial base in ports and provide predictability and management flexibility for programmed maintenance work.
A significant limitation in that effort is the inability to use multiyear budgeting and the need to spend supplemental and 1-year funds in the year of appropriation. Ports are often not equipped and manned to accomplish the volume of work required in the time-span necessary to execute 1-year budgets. While we did not have sufficient data to definitively assess the wartime capability for the LPD-4, Navy officials did not identify any specific capability concerns. These officials indicated that the LPD-4 fulfilled its recent wartime missions of transporting and moving Marines and their equipment ashore. The Standard Missile-2 (SM-2) is a medium to long-range, shipboard surface- to-air missile with the primary mission of fleet area air defense and ship self-defense, and a secondary mission of anti-surface ship warfare. The Navy is currently procuring only the Block IIIB version of this missile. While the actual number in the inventory is classified, the Navy plans to procure 825 Block IIIB missiles between fiscal years 1997 and 2007. Currently, 88 percent of the inventory is older than 9 years. A qualitative evaluation program adjusted the initial 10-year service life out to 15 years. We assessed the condition of the Standard Missile–2 as red because it failed to meet the asset readiness goal of 87 percent and only 2 of 5 variants achieved the goal in fiscal year 2002. The asset readiness goal is the missile equivalent of mission capable goals. The percent of non-ready for issue missiles (currently at 23 percent of the inventory) will increase because of funding shortfalls. We assessed the program strategy for the Standard Missile-2 as yellow because the Navy has developed a long-term program strategy for upgrading the Standard Missile-2 inventory; however, the Navy’s strategy mitigates risk with complementary systems as the SM-2 inventory draws down and upgrades to counter known threats are cut from the budget. 
In 2002, the Navy cancelled production of the most capable variant at the time, the SM-2 Block IVA. Currently, the most capable missile is the SM-2 Block IIIB, which is the only variant in production. This missile will be the main anti-air warfare weapon on board Navy ships into the next decade. Improved Block IIIB missiles will be available in 2004. The SM-6 Extended Range Active Missile (ERAM) is programmed for initial production in 2008 and will be available to the fleet in 2010. We assessed the funding for the Standard Missile-2 as red because current and projected funding is not consistent with the Navy’s stated requirements to upgrade the Standard Missile-2 inventory. There is a $72.6 million shortfall for maintenance and a shortfall of approximately $60 million for procurement in the current Future Years Defense Plan. While we did not have sufficient data to definitively assess the wartime capability of the Standard Missile-2, Navy officials indicated that it successfully fulfilled its recent wartime missions but with some limitations. Block IIIB and improved Block IIIB missiles successfully counter the threats they were designed to counter. However, the most capable variant in the current inventory cannot handle the more sophisticated known air threats. The Navy lost a capability to intercept extended range and ballistic missiles when development of the Block IVA variant was cancelled. The improved Block IIIB missiles will mitigate some risk until the SM-6 ERAM is deployed in 2010. Further, Navy officials stated that the Navy accepts an element of risk until the SM-6 is deployed because the threat is limited in both the number of missiles and the scenarios where those missiles would be employed. Officials also described the Navy’s anti-air warfare capability as one of complementary systems and not singularly dependent on the SM-2 missile. 
The Navy successfully increased the deployment of these missiles to the fleet for the recent operations in Afghanistan and Iraq, but the growing shortage of ready-for-issue missiles in future years could severely limit the Navy’s ability to meet future requirements. The Tomahawk Cruise Missile is a long-range, subsonic cruise missile used for land attack warfare, and is launched from surface ships and submarines. The current inventory is 1,474 missiles, with an average age of 11.88 years and a 30-year service life. During Operation Iraqi Freedom, 788 Tomahawks were expended. The follow-on Tactical Tomahawk (TACTOM) is scheduled to enter the inventory in 2005. We assessed the condition of the Tomahawk Cruise Missile as green because it consistently met asset readiness goals in recent years. The asset readiness goal is classified. We assessed the program strategy for the Tomahawk Cruise Missile as red because the Navy has developed a long-term program strategy for upgrading the Tomahawk Cruise Missile inventory; however, the future inventory level will not be determined until funding questions are resolved. The 788 Tomahawks expended during Operation Iraqi Freedom left a remaining inventory of 1,474. The replenishment missiles are all programmed to be the new Tactical Tomahawk missile. Even when funding is appropriated and executed this fiscal year, the first available date for new missiles entering the inventory will be late 2005–2006. A remanufacturing program planned for 2002–2004 is upgrading the capabilities of older missiles. There are 249 missiles remaining to be upgraded. We assessed the funding for the Tomahawk Cruise Missile as red because current and projected funding is not consistent with the Navy’s stated requirements to replenish the inventory, and new production is unresolved. Inventory replenishment funding was authorized by the Congress and, at the time of our review, was in conference to resolve differences between the two bills.
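The expenditure and inventory figures above imply a pre-war stock and a drawdown rate. The derived totals below are our own arithmetic for illustration, not figures stated in the assessment.

```python
expended = 788    # Tomahawks fired during Operation Iraqi Freedom
remaining = 1474  # missiles left in the inventory afterward

pre_war = expended + remaining  # implied inventory before the operation
share = expended / pre_war      # fraction of the stock expended
print(pre_war)                # 2262 missiles implied pre-war
print(round(100 * share, 1))  # about 34.8 percent of the stock expended
```

With replenishment missiles not entering the inventory until late 2005 at the earliest, roughly a third of the implied pre-war stock would remain unreplaced in the interim.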
While we did not have sufficient data to definitively assess the wartime capability for the Tomahawk Cruise Missile, Navy officials indicated that it has successfully fulfilled its wartime missions during recent operations in Afghanistan and Iraq. Improved Tomahawks came into the inventory in 1993 and provided enhanced accuracy on targets. The newest variant, the Tactical Tomahawk (TACTOM), is scheduled to come into the inventory in 2005 and improves the missile with an upgraded guidance system and in-flight re-programming capability. This upgrade program is also expected to lower the missile’s production unit and life-cycle support costs. The AH-1W Super Cobra provides en route escort and protection of troop assault helicopters, landing zone preparation immediately prior to the arrival of assault helicopters, landing zone fire suppression during the assault phase, and fire support during ground escort operations. There are 193 aircraft in the inventory with an average age of 12.6 years. We assessed the condition of the AH-1W as yellow because it consistently failed to meet its mission capable goals from fiscal year 1998 to fiscal year 2002. Although Camp Pendleton and Camp Lejeune AH-1W maintainers cited insufficient spare parts and cannibalization as problems, overall, operators were always positive in their comments about the condition of the AH-1W. Condition concerns will be remedied in the near term by the AH-1W upgrade program that is proceeding as scheduled with an October 1, 2003, anticipated start date. We assessed the program strategy for the AH-1W as green because the Marine Corps has developed a long-term program strategy for upgrading the AH-1W helicopter to the AH-1Z, achieving 85 percent commonality with the UH-1Y helicopter fleet. Estimated savings of $3 billion in operation and maintenance costs over the next 30 years have been reported.
Additionally, the upgrade program will enhance the helicopter’s speed, maneuverability, fuel capacity, ammunition capacity, and targeting systems. We assessed the funding for the AH-1W as green because current and projected funding is consistent with the Marine Corps’ stated requirements to sustain and upgrade the AH-1W fleet. Although we assessed funding as green, Marine Corps officials at Camp Pendleton cited the need for additional funding for spare parts and noted that cost overruns have occurred in recent years for the AH-1W upgrade program. While we did not have sufficient data to definitively assess the wartime capability of the AH-1W, Marine Corps officials indicated that it successfully fulfilled its recent wartime missions but with some limitations. Specifically, prior to Operation Iraqi Freedom, Marine Corps operators at Camp Pendleton stated that the AH-1W’s ammunition and fuel capacity was insufficient for some operations, such as Afghanistan. The AH-1Z upgrade program, however, will address these concerns. The Sea Knight helicopter provides all-weather, day/night, night-vision capable assault transport of combat troops, supplies, and equipment during amphibious and subsequent operations ashore. There are 226 aircraft in the inventory. The CH-46E is more than 30 years old. The MV-22 Osprey is the planned replacement aircraft for the CH-46E. We assessed the condition of the CH-46E as red because it consistently failed to meet mission capable goals between fiscal year 1998 and fiscal year 2002. The operational mean time between failures decreased from 1.295 hours to 0.62 hours during the course of our review. Marine Corps officials cited concern over the aircraft’s age and the uncertainty about the fielding of the MV-22 to replace the Sea Knight. Marine Corps officials called the current maintenance programs critical to meeting condition requirements.
We assessed the program strategy for the CH-46E as yellow because the Marine Corps has developed a long-term program strategy to sustain and replace the CH-46E fleet. The sustainment strategy, dated August 19, 2003, outlines the service’s plans to sustain the CH-46E until retirement in 2015 or longer. However, according to press reports, DOD has decided to reduce the purchase of replacement systems by about 8 to 10 aircraft over the next few years. If DOD buys fewer replacement systems, the service will have to adjust the sustainment strategy to retain additional CH-46E aircraft in its inventory longer. We assessed the funding for the CH-46E as red because current and projected funding is not consistent with the Marine Corps’ stated requirements to sustain and replace the CH-46E fleet. Marine Corps officials asserted that continued funding for maintaining the CH-46E is essential. The fiscal year 2004 budget request included a request for funding of safety improvement kits, a long-range communications upgrade, aft transmission overhauls, and lightweight armor. The Navy lists CH-46E safety improvement kits as a $4 million unfunded requirement. While we did not have sufficient data to definitively assess the wartime capability of the CH-46E, Marine Corps officials raised a number of specific capability concerns. Specifically, these officials stated that the intended mission cannot be adequately accomplished due to a lack of payload. The CH-46E has lost 1,622 pounds of lift since its fielding over 35 years ago due to increased weight and can only carry a 12-troop payload on a standard day. More recently, Marine Corps officials rated the performance of the CH-46E during Operation Iraqi Freedom as satisfactory despite these lift limitations. The AAV is an armored, fully-tracked landing vehicle that carries troops in water operations from ship to shore through rough water and surf zone, or to inland objectives ashore. There are 1,057 vehicles in the inventory.
The Marine Corps plans to replace the AAV with the Expeditionary Fighting Vehicle (formerly the Advanced Amphibious Assault Vehicle, or AAAV). We assessed the condition of the AAV as yellow because of its age and the fact that the Marine Corps plans to upgrade only 680 of the 1,057 AAVs currently in the inventory. Furthermore, the planned upgrade program will only restore the vehicle to its original operating condition rather than upgrading it to perform beyond its original operating condition. We could not base our assessment of the condition on readiness rates in relation to the readiness rate goals because the Marine Corps did not provide sufficient trend data. Marine Corps officials at Pacific Command stated that the heavy usage of the AAV during Operation Iraqi Freedom and the long fielding schedule of the replacement vehicle present significant maintenance challenges. However, we assessed the condition yellow instead of red based on favorable comments about the current condition of the AAV from operators and maintainers. We assessed the program strategy for the AAV as yellow because the Marine Corps has developed a long-term program strategy for overhauling the AAV; however, the program only restores the vehicle to its original operating condition and does not upgrade the vehicles beyond original condition. The Marine Corps initiated a Reliability, Availability, and Maintainability/Rebuild to Standard (RAM/RS) upgrade program in 1998 to restore capabilities and lengthen the expected service life of the AAV to sustain the vehicles until the replacement system, the Expeditionary Fighting Vehicle (formerly the Advanced Amphibious Assault Vehicle), can be fielded. The RAM/RS is expected to extend the AAV service life an additional 10 years. These vehicles will be needed until the replacement vehicles can be fielded in 2012. However, the procurement of the replacement vehicles has reportedly already been delayed by 2 years.
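The upgrade figures above determine how much of the AAV fleet the RAM/RS program covers. The small check below is illustrative arithmetic of our own, not part of the report.

```python
inventory = 1057        # AAVs currently in the inventory
planned_upgrades = 680  # vehicles slated for the RAM/RS program

not_upgraded = inventory - planned_upgrades
coverage = planned_upgrades / inventory
print(not_upgraded)              # 377 vehicles left in original condition
print(round(100 * coverage, 1))  # about 64.3 percent of the fleet covered
```

Roughly a third of the fleet would thus remain outside the rebuild program while awaiting the replacement vehicle.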
We assessed the funding for the AAV as yellow because current and projected funding is not consistent with the Marine Corps’ requirements to upgrade the AAV inventory. Requested funding rose from $13.5 million in fiscal year 1998 to $84.5 million in fiscal year 1999 as the Marines initiated the RAM/RS program. The requested funding level declined to $66.2 million by fiscal year 2002. The Marine Corps identified a $48.9 million unfunded program in the fiscal year 2004 budget request to extend RAM/RS to more vehicles. Marine Corps officials are concerned that funding to reconstitute vehicles returning from Operation Iraqi Freedom will not cover putting those vehicles through the RAM/RS program. While we did not have sufficient data to definitively assess the wartime capability of the AAV, Marine Corps officials indicated that it has successfully fulfilled its wartime missions but with some limitations. While these officials cited the AAV as integral to ground operations during Operation Iraqi Freedom, they noted specific stresses placed on the vehicles. For example, AAVs deployed to Operation Iraqi Freedom traveled, on average, over 1,000 miles each, a majority of those miles under combat conditions. Those conditions added about 5 years’ worth of miles and wear and tear to the vehicles over a 6- to 8-week period. In addition, prior to Operation Iraqi Freedom, Marine Corps officials at Camp Lejeune highlighted problems they encountered with obtaining enhanced armor kits to protect the vehicles from the .50 caliber ammunition that was used by Iraqi forces. At the time of our review, only 26 of 213 AAVs at Camp Lejeune had been provided the enhanced armor kits. Marine Corps officials at Camp Lejeune believed the lack of kits was due to insufficient funding. The LAV-C2 variant is a mobile command station providing field commanders with the communication resources to command and control Light Armored Reconnaissance (LAR) units.
It is an all-terrain, all-weather vehicle with night capabilities and can be made fully amphibious within three minutes. There are 50 vehicles in the inventory with an average age of 14 years. We assessed the condition of the LAV-C2 as green because the Marine Corps has initiated a fleet-wide Service Life Extension Program (SLEP) to extend the service life of the vehicle from 20 years to 27 years. The LAV-C2 SLEP includes enhancements to communications capabilities. Marine Corps officials cautioned that any delays in SLEP could affect future readiness. While we assessed the condition as green, we noted the operational readiness rate for the command and control variant was 90.5 percent, below the 100 percent goal but higher than the operational readiness rate of 85 percent for the entire fleet. We assessed the program strategy for the LAV-C2 as green because the Marine Corps has developed a long-term program strategy for upgrading the LAV-C2 inventory. The program funded in the current FYDP will enhance communications capabilities and power systems and may afford commonality with Unit Operation Center and helicopter systems. The Marine Corps intends for the upgraded LAV-C2 to provide a prototype to establish baseline requirements for future capabilities and a successor acquisition strategy. Marine Corps officials stated the C2 upgrade program needs to be supported at all levels. We assessed the funding for the LAV-C2 as green because current and projected funding is consistent with Marine Corps stated requirements to upgrade the LAV-C2 inventory. Marine Corps officials have requested $72.2 million in the current FYDP to support major LAV-C2 technology upgrades. Marine Corps officials at Pacific Command recommended increased funding for procurement of additional vehicles, citing the current inventory deficiency as critical. 
While we did not have sufficient data to definitively assess the wartime capability of the LAV-C2, Marine Corps officials indicated that it has successfully fulfilled its recent wartime missions. Marine Corps reports regarding the operations in Afghanistan cited LAVs in general as the most capable and dependable mobility platform despite the fact that the number of available C-17 transport aircraft limited the deployment of the vehicles. Initial reports from Operation Iraqi Freedom also indicate that the LAV-C2 performed successfully. The Maverick missile is a precision-guided, air-to-ground missile configured primarily for the anti-tank and anti-ship roles. It is launched from a variety of fixed-wing aircraft and helicopters, and there are laser- and infrared-guided variants. The Maverick missile was first fielded in 1985. We assessed the condition of the Maverick missile as not applicable because the Marine Corps does not track readiness data such as mission capable or operational readiness rates for munitions as it does for aircraft or other equipment. We assessed the program strategy for the Maverick missile as green because the Marine Corps has developed a long-term program strategy for replacing the Maverick missile with more capable missiles. Maverick missile procurement ended in 1992, and the infrared variant will no longer be used in 2003. According to Marine Forces Pacific Command officials, a joint common missile is being developed and scheduled for initial operational capability in 2008. The new missile will be a successor to the Maverick, Hellfire, and TOW missiles. Marine Corps officials stated that a joint reactive precision-guided munition for both fixed- and rotary-winged aircraft, a potential successor to the Maverick and Hellfire missiles, will be submitted to the Joint Requirements Oversight Council for evaluation in fiscal year 2003. 
We assessed the funding for the Maverick missile as green because current and projected funding is consistent with the Marine Corps’ stated requirements to replace the Maverick missile inventory. Since fiscal year 1998, the Marine Corps has limited funding for the Maverick to the operation and maintenance accounts. While we did not have sufficient data to definitively assess the wartime capability of the Maverick missile, Marine Corps officials indicated that it has successfully fulfilled its recent wartime missions but with some limitations. Specifically, these officials stated that the Maverick missile lacks an all-weather capability. Marine Corps officials cited increased risks due to sensor limitations of the laser variant that restrict the missile’s use to low threat environments. Although the Maverick fulfilled its wartime mission during Operation Iraqi Freedom, Marine Corps officials stressed that its success was due to the fact that this was the optimal environment for the Maverick—a desert environment with a lack of low cloud cover. In any other type of environment, however, the Maverick’s use is limited. In addition to the individual named above, Richard Payne, Donna Rogers, Jim Mahaffey, Patricia Albritton, Tracy Whitaker, Leslie Harmonson, John Beauchamp, Warren Lowman, Ricardo Marquez, Jason Venner, Stanley Kostyla, Susan Woodward, and Jane Lusby made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. 
GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. | GAO was asked to assess the condition of key equipment items and to determine if the services have adequate plans for sustaining, modernizing, or replacing them. To address these questions, we selected 25 major equipment items, and determined (1) their current condition, (2) whether the services have mapped out a program strategy for these items, (3) whether current and projected funding is consistent with these strategies, and (4) whether these equipment items are capable of fulfilling their wartime missions. Many of our assessments of 25 judgmentally selected critical equipment items indicated that the problems or issues we identified were not severe enough to warrant action by the Department of Defense, military services, and/or the Congress within the next 5 years. The condition of the items we reviewed varies widely from very poor for some of the older equipment items like the Marine Corps CH-46E Sea Knight Helicopter to very good for some of the newer equipment items like the Army Stryker vehicle. 
The problems we identified were largely due to (1) maintenance problems caused by equipment age and a lack of trained and experienced technicians, and (2) spare parts shortages. Although the services have mapped out program strategies for sustaining, modernizing, or replacing most of the equipment items we reviewed, some gaps exist. In some cases, such as the KC-135 Stratotanker and the Tomahawk missile, the services have not fully developed or validated their plans for the sustainment, modernization, or replacement of the items. In other cases, the services' program strategies for sustaining the equipment are hampered by problems or delays in the fielding of replacement equipment or in the vulnerability of the programs to budget cuts. For 15 of the 25 equipment items we reviewed, there appears to be a disconnect between the funding requested by the Department of Defense or projected in the Future Years Defense Program and the services' program strategies to sustain or replace the equipment items. For example, we identified fiscal year 2003 unfunded requirements, as reported by the services, totaling $372.9 million for four major aircraft--the CH-47D helicopter, F-16 fighter aircraft, C-5 transport aircraft, and CH-46E transport helicopter. The 25 equipment items we reviewed appear to be capable of fulfilling their wartime missions. While we were unable to obtain sufficient data to definitively assess wartime capability because of ongoing operations in Iraq, the services, in general, will always ensure equipment is ready to go to war, often through surging their maintenance operations and overcoming other obstacles. Some of the equipment items we reviewed, however, have capability deficiencies that could degrade their wartime performance in the near term. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Although not specifically required to do so by statute, FRB considers the fair lending compliance of the entities under the holding companies involved in the merger and any substantive public comments about such compliance. FRB must act on a merger request within 90 days of receiving a complete application or the transaction will be deemed to have been approved. FRB also seeks comments from appropriate state and federal banking regulatory agencies, which have 30 days to respond. While the application is pending, public comment on the proposed merger is to be solicited through notices in newspapers and the Federal Register. The public is allowed 30 days to provide written comments. FRB is required to consider several factors when reviewing a merger application, including (1) the financial condition and managerial resources of the applicant, (2) the competitive effects of the merger, and (3) the convenience and needs of the community to be served. Fair lending oversight and enforcement responsibilities for entities within a bank holding company vary according to entity type (see fig. 1). Federal banking regulators are responsible for performing regularly scheduled examinations of insured depository institutions and their subsidiaries to assess compliance with fair lending laws. In contrast, nonbank subsidiaries of bank holding companies are not subject to regularly scheduled compliance examinations by any agency. However, the fair lending laws provide primary enforcement authority over nonbank mortgage subsidiaries to HUD and FTC. HUD has enforcement authority with respect to FHAct violations for all institutions, and FTC has ECOA enforcement responsibility with respect to all lenders that are not under the supervision of another federal agency. For example, FTC is responsible for the enforcement of ECOA with respect to nonbank mortgage subsidiaries of bank holding companies. 
FRB has general legal authority under the Bank Holding Company Act and other statutes to examine nonbank mortgage subsidiaries of bank holding companies. Appendix III contains information regarding the extent of mortgage lending performed by banks, thrifts, and independent mortgage companies, another major component of the mortgage lending market, which are not addressed in this study. It also provides data specific to the banking sector. Federal banking regulatory agencies are authorized under ECOA to use their full range of enforcement authority to address discriminatory lending practices by financial institutions under their jurisdictions. This includes the authority to seek prospective and retrospective relief and to impose civil money penalties. HUD, on the other hand, has enforcement authority with respect to FHAct violations for all institutions and HMDA compliance responsibilities for independent mortgage companies. Both ECOA and FHAct provide for civil suits by DOJ and private parties. Whenever the banking regulatory agencies or HUD have reason to believe that an institution has engaged in a “pattern or practice” of illegal discrimination, they must refer these cases to DOJ for possible civil action. Such cases include repeated, regular, or institutionalized discriminatory practices. Other types of cases also may be referred to DOJ. From 1996 through 1998, DOJ entered into four settlements and one consent decree involving fair lending compliance. In the same period, FTC entered into three consent decrees and issued one complaint that were based at least in part on ECOA compliance issues. FRB and OCC, respectively, took two and nine enforcement actions against regulated institutions for violations of the fair lending laws and regulations in this same time period. 
During this time period FRB, OCC, and FTC also conducted various investigations of consumer complaints they received regarding alleged fair lending violations by institutions under their jurisdiction. For example, FRB conducted 32 investigations of consumer complaints it received in 1998 that alleged fair lending violations by state member banks. HUD can investigate fair lending complaints against various types of institutions, including bank holding companies, national banks, finance companies, mortgage companies, thrifts, real estate companies, and others. In processing fair lending complaints, HUD is to conduct an investigation and, if evidence suggests a violation of the law, issue a charge. HUD is required by law to attempt to conciliate such cases. From 1996 through 1998, HUD entered into 296 conciliation agreements. Of the 296, at least 108 involved banks, mortgage companies, or other entities related to bank holding companies. If conciliation is not achieved, HUD may pursue the case before an Administrative Law Judge. However, a complainant, respondent, or aggrieved person may elect to have the claims asserted in a federal district court instead of a hearing by an Administrative Law Judge. The Secretary of HUD may review any order issued by the Administrative Law Judge. Decisions of the Administrative Law Judge may be appealed to the federal court of appeals. Regulatory enforcement of ECOA and FHAct, enacted in 1974 and 1968, respectively, is supported by the HMDA. As amended in 1989, HMDA requires lenders to collect and report data annually on the race, gender, and income characteristics of mortgage applicants and borrowers. Lenders who meet minimum reporting requirements submit HMDA data to their primary banking regulator or HUD in the case of independent mortgage companies. HMDA data are then processed and made available to the public through the reporting lenders, the Federal Financial Institutions Examination Council, and other sources. 
Such information is intended to be useful for identifying possible discriminatory lending patterns. As we noted in our 1996 report on fair lending, federal agencies with fair lending enforcement responsibilities face a difficult and time-consuming task in the detection of lending discrimination. Statistical analysis of loan data used by some federal agencies can aid in the search for possible discriminatory lending patterns or practices, but these methods have various limitations. For example, these statistical models cannot be used to detect illegal prescreening or other forms of discrimination that occur prior to the submission of an application. For these forms of discrimination, consumer complaints may be the best indicator of potential problems. We noted in the report that it is critical that the agencies continue to research and develop better detection methodologies in order to increase the likelihood of detecting illegal practices. In addition, we encouraged the agencies’ efforts to broaden their knowledge and understanding of the credit search and lending processes in general because such knowledge is prerequisite to improving detection and prevention of discriminatory lending practices. Nondepository institutions must report HMDA data, regardless of asset size, if they originated 100 or more home purchase loans (including refinancings) during the calendar year. Depository institutions are exempt from reporting HMDA data if they made no first-lien home purchase loans (including refinancings of home purchase loans) on one-to-four family dwellings in the preceding calendar year. Nondepository institutions are exempt if their home purchase loan originations (including refinancing of home purchase loans) in the preceding calendar year came to less than 10 percent of all their total loan originations (measured in dollars). 
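The HMDA reporting and exemption rules just described can be sketched as a simple predicate. This is an illustrative simplification of the thresholds stated in this passage only — actual Regulation C coverage involves additional asset-size and location tests not shown here, and the function and parameter names are hypothetical.

```python
def depository_exempt(first_lien_purchase_loans: int) -> bool:
    """A depository institution is exempt if it made no first-lien home
    purchase loans (including refinancings) on one-to-four family
    dwellings in the preceding calendar year."""
    return first_lien_purchase_loans == 0


def nondepository_must_report(purchase_loan_count: int,
                              purchase_dollars: float,
                              total_dollars: float) -> bool:
    """A nondepository institution is covered regardless of asset size if
    it originated 100 or more home purchase loans (including
    refinancings); otherwise it is exempt when home purchase originations
    fall below 10 percent of total originations, measured in dollars."""
    if purchase_loan_count >= 100:
        return True
    return purchase_dollars >= 0.10 * total_dollars
```

For example, under this sketch a nondepository lender with 40 home purchase loans worth $8 million out of $200 million in total originations (4 percent) would be exempt, while one with 150 such loans would report regardless of dollar share.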
The six mergers we reviewed were: NBD’s acquisition of First Chicago in 1995, Fleet’s acquisition of Shawmut in 1995, Chemical’s acquisition of Chase in 1996, NationsBank’s acquisition of Boatmen’s in 1997, NationsBank’s acquisition of BankAmerica in 1998, and BancOne’s acquisition of First Chicago NBD in 1998. To verify the completeness of FRB’s summaries of the comment letters, we developed a data collection instrument, reviewed a sample of comment letters submitted for two of the mergers, and compared our data with the FRB summaries. From our sampling of comment letters, we determined that FRB’s internal summaries of the comment letters were accurate and that we could rely upon the other FRB summaries as accurate reflections of the public comments submitted. To assess FRB’s consideration of the types of fair lending issues raised during the merger process for large bank holding companies, we reviewed FRB’s internal memorandums and supporting documentation for the six selected mergers and FRB’s orders approving the mergers in question. We also interviewed FRB staff involved in assessing the comments made by consumer and community groups for the six selected mergers. In addition, we obtained and analyzed fair lending enforcement actions taken by FRB, OCC, DOJ, FTC, and HUD to determine if they involved institutions that were part of the six selected mergers. We also conducted interviews with representatives of these agencies to discuss coordination policies and procedures related to the merger process for these large bank holding companies. We held discussions with representatives of the four bank holding companies that resulted from the six mergers, representatives of bank industry trade groups, and various consumer and community groups that commented on the six mergers to obtain their views regarding the federal regulatory response to fair lending issues raised during the merger process. 
We conducted our review from November 1998 to July 1999, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from FRB, OCC, FTC, DOJ, and HUD. FRB, OCC, and HUD provided written comments that are included in appendixes IV through VI. A summary of the agencies’ comments and our responses are presented at the end of this letter. Consumer and community groups submitted comment letters raising fair lending issues in each of the six mergers. The number of comment letters that FRB received on the mergers—which included letters supporting or opposing the merger—ranged from 17 to approximately 1,650. Table 1 lists the primary fair lending issues raised and the number of mergers in which each issue was raised. As shown in table 1, consumer and community groups raised the issue of perceived high denial and low lending rates to minorities in all six cases. The groups typically based these concerns on their analysis of HMDA data. For example, one of the community groups commenting on a proposed merger cited denial rates for minorities that were twice the rate for Whites in a particular geographic area. In other cases, consumer and community groups cited HMDA data indicating that the number of loans made to minority groups by the institutions involved in the merger was not consistent with the demographics of a particular market. The groups claimed that the HMDA data provided evidence of a disparate impact in lending to minorities. The consumer and community groups were most often concerned about the lending record of the subsidiaries of the holding company that was the acquirer. However, a number of these groups raised issues with the lending records of both holding companies involved in the proposed merger. In a few cases, the lending record of the subsidiaries of the holding company that was to be acquired was identified as an issue. 
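The denial-rate comparisons the groups drew from HMDA data — such as a minority denial rate twice the White rate in a given area — can be illustrated with a short sketch. The records and group labels here are hypothetical; a real analysis would run over the public HMDA Loan/Application Register files for a specific market.

```python
from collections import Counter


def denial_rates(applications):
    """Compute the denial rate per group from (group, denied) records."""
    totals, denials = Counter(), Counter()
    for group, denied in applications:
        totals[group] += 1
        if denied:
            denials[group] += 1
    return {g: denials[g] / totals[g] for g in totals}


def disparity_ratio(rates, group, reference):
    """How many times the reference group's denial rate a group faces."""
    return rates[group] / rates[reference]


# Hypothetical sample reproducing the pattern the commenters described:
# 4 of 10 minority applications denied versus 2 of 10 White applications.
sample = ([("minority", True)] * 4 + [("minority", False)] * 6 +
          [("white", True)] * 2 + [("white", False)] * 8)
rates = denial_rates(sample)          # minority: 0.4, white: 0.2
ratio = disparity_ratio(rates, "minority", "white")  # 2.0
```

As the report cautions, such a ratio flags a pattern worth examining; because HMDA records omit creditworthiness and other underwriting variables, it cannot by itself demonstrate illegal discrimination.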
The consumer and community groups often did not identify the specific institution under the holding company in question but, instead, focused on the overall lending in specific geographic markets. Consumer and community groups raised fair lending concerns in five of the six mergers regarding the activities of nonbank mortgage subsidiaries. In four of the mergers, the concerns involved the nonbank mortgage subsidiaries of the holding companies. Nonbank mortgage subsidiaries of holding companies accounted for approximately one-fifth of the total mortgage lending of the bank sector, and they experienced steady growth in both the number and dollar value of mortgage loans originated from 1995 through 1997. Their growth in lending activity out-paced other bank sector entities in 1997. (See app. III, figs. III.2 to III.5.) The nonbank mortgage company in the fifth merger was a subsidiary of one of the lead banks involved in the merger. In five merger cases, consumer and community groups cited abusive or what they characterized as “predatory” sub-prime lending as a fair lending issue. Sub-prime lending itself is not illegal and is generally acknowledged as a means of widening consumer access to credit markets. However, as stated in a recent interagency document, the “higher fees and interest rates combined with compensation incentives can foster predatory pricing or discriminatory steering of borrowers to sub-prime products for reasons other than the borrower’s underlying creditworthiness.” The alleged abusive sub-prime lending activities cited by the consumer and community groups included such practices as undisclosed fees and aggressive collection practices that were more likely to affect the elderly, minorities, and low- to moderate-income individuals. Other concerns identified with sub-prime lending included the alleged targeting of minorities for the higher priced sub-prime loans even if they would qualify for loans at lower rates. 
The groups typically relied on anecdotal rather than statistical evidence to support their concerns. HMDA data cannot be used to analyze sub-prime lending because HMDA does not require lenders to identify which loans are sub-prime or report loan characteristics that can be used to identify sub-prime lending, such as the pricing and fees, and does not require the reporting of borrowers’ credit information. In three of the merger cases, consumer and community groups alleged that minorities were being directed or steered disproportionately to the holding company lender that offered the highest-priced loans or the least amount of service. In two of the mergers, the allegations focused on steering between the banks and the holding companies’ nonbank mortgage companies engaged in sub-prime lending. The steering issue raised in the third merger involved referral practices between a bank and its subsidiaries that allegedly resulted in minorities typically receiving a lower level of service. One of the consumer and community groups alleged that a holding company established the nonbank mortgage company as a bank holding company subsidiary rather than as a bank subsidiary to escape regulatory scrutiny. As noted earlier, nonbank subsidiaries of bank holding companies are not subject to regularly scheduled compliance examinations. The group stated that this created a “regulatory blindspot.” Consumer and community groups raised prescreening and marketing issues in four mergers. In two of the four, the consumer and community groups were concerned about prescreening of applicants that resulted in the referral of only those applicants deemed qualified. The groups alleged that the prescreening programs violated the ECOA provision that requires lenders to provide applicants with written notification of a loan application denial stating the reason or basis for the denial. The community groups also raised issues with bank fee or marketing practices. 
According to these groups, some practices were intended to discourage minorities from applying for credit, and other practices disproportionately targeted minorities for loans with higher interest rates. In two of the merger cases, consumer and community groups raised issues related to lending to small businesses owned by minorities or located in minority communities. The primary support for these issues appeared to be analysis of HMDA data and Community Reinvestment Act (CRA) data. The consumer and community groups alleged that the holding companies involved in the two mergers were discriminating against or providing an inadequate level of funding to minority-owned small businesses or small businesses located in minority communities. Concerns about the discriminatory treatment of minority applicants were raised in two of the mergers. In one merger, the basis for the complaint was the results of an independent testing program that used matched-pair testing. According to the complainant, Black applicants were kept waiting longer, were quoted higher closing costs and longer processing times, and were generally discouraged from applying for credit in comparison to White applicants. In another merger, FRB received several comment letters that objected to the acquiring bank holding company’s customer call center’s handling of fair lending complaints. Specifically, they asserted that the center’s staff did not inform callers of their right to file a complaint and lacked expertise in fair lending and investigative techniques. Redlining of predominantly minority neighborhoods was alleged in one merger. A consumer/community group said that the acquiring bank holding company had redlined many of the low- and moderate-income, predominantly minority communities in a particular city. The group based its allegation on the lack of bank branches and minimal marketing of credit products in those communities. 
FRB analyzed HMDA data to help assess the validity of the fair lending concerns raised by the groups. FRB also obtained and reviewed additional information from the bank holding companies involved in the proposed merger. FRB staff stated that in assessing fair lending concerns, they relied primarily on current and past fair lending compliance examinations performed by the primary banking regulator(s). In each of the six mergers, FRB staff obtained and reviewed additional information provided by the bank holding companies to assess the fair lending issues raised by consumer and community groups. According to FRB officials, they forwarded the comments received from the consumer and community groups during the public comment period to the bank holding companies involved in the mergers. They explained that the bank holding companies were encouraged, but not required, to provide information or a response to the issues raised in the comment letters. In addition, FRB sometimes requested specific information from the bank holding companies in response to issues raised by the consumer and community groups. For example, FRB staff requested and assessed information from one holding company about the settlement of lawsuits involving consumer complaints. This request was made in response to a group’s concerns about the compliance of a nonbank mortgage subsidiary with fair lending and consumer protection laws. In response to consumer and community groups’ concerns about overall lending to minorities by the entities involved in the proposed holding company mergers, FRB staff obtained and analyzed HMDA data. Using these data, FRB compared the lending performance of the bank holding company subsidiary in question to the performance of other lenders in the aggregate for a particular community or geographic area. 
In addition, they looked at the holding company’s record of lending to minorities over the last several years to determine if there were any discernible patterns that could indicate discriminatory lending. In conducting their analysis, FRB staff identified lending rate disparities in some areas/markets that indicated that the holding company subsidiary was lagging behind the aggregate or not doing as well as could be expected. However, FRB staff noted that although HMDA data may indicate a need for further analysis or targeted reviews through examinations, HMDA data alone cannot provide conclusive evidence of illegal discrimination because of known limitations in the HMDA data. Bank regulators, bank officials we contacted, and some academics and community group representatives agreed that HMDA data are limited in their potential to demonstrate discrimination. Principal among the limitations associated with HMDA data is the lack of information on important variables used in the credit underwriting process. For example, HMDA data do not include information on the creditworthiness of the applicant, the appraised value of the home, or the credit terms of the loan. This information typically is maintained only in the lender’s loan files and is accessible to regulators conducting compliance examinations or investigations. FRB staff stated that they relied heavily on the primary regulator’s compliance examinations because on-site comprehensive reviews of actual bank practices and records are the best means to assess compliance with the fair lending laws. Moreover, time, access, and authority constraints limit the analysis of fair lending issues that FRB staff can perform during the application process for bank holding company mergers. FRB officials stated that the merger application review process is not a substitute for the fair lending examination process. Therefore, FRB relied on the past and current fair lending examination results of the primary banking regulator. 
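FRB's comparison of a subsidiary's lending record against the aggregate of other lenders in a market can be sketched along these lines. The record layout and lender names are hypothetical; an actual review would use HMDA originations for a particular community or geographic area.

```python
def minority_loan_share(loans, lender=None):
    """Share of originations made to minority borrowers, for one lender or
    (with lender=None) for the market aggregate.
    `loans` is an iterable of (lender_name, borrower_is_minority) records."""
    selected = [minority for name, minority in loans
                if lender is None or name == lender]
    return sum(selected) / len(selected)


def lags_aggregate(loans, lender):
    """True if the lender's minority lending share trails the market."""
    return minority_loan_share(loans, lender) < minority_loan_share(loans)


# Hypothetical market: the subsidiary makes 10 percent of its loans to
# minority borrowers while the market aggregate is 20 percent.
market = ([("SubsidiaryBank", True)] * 1 + [("SubsidiaryBank", False)] * 9 +
          [("OtherLender", True)] * 3 + [("OtherLender", False)] * 7)
lags_aggregate(market, "SubsidiaryBank")  # True — lags the aggregate
```

Consistent with the limitations the report describes, a gap found this way only indicates a need for further analysis or targeted examination work, since the data lack the underwriting variables held in lenders' loan files.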
In response to the fair lending concerns raised by the consumer and community groups, FRB staff said they obtained information on the scope of and conclusions reached on prior and ongoing fair lending compliance examinations performed by the primary banking regulator. The examinations FRB relied on ranged from over 3 years old to recently completed or still ongoing. These examinations covered the banks’ and their subsidiaries’ compliance with the fair lending laws and regulations. The fair lending examination reports typically did not address all of the fair lending issues raised by the consumer and community groups during the merger process, such as abusive sub-prime lending, discriminatory prescreening/marketing, and steering. Moreover, nonbank mortgage subsidiaries of bank holding companies are not routinely examined for fair lending compliance by any federal regulatory or enforcement agency. On a case-by-case basis, FRB officials told us they have exercised their general authority granted under the Bank Holding Company Act and other statutes to conduct fair lending compliance investigations of a bank holding company’s nonbank mortgage subsidiaries. In two cases, FRB had conducted prior investigations of nonbank mortgage subsidiaries involved in proposed mergers we studied. According to FRB officials, a long-standing FRB policy of not routinely conducting consumer compliance examinations of nonbank subsidiaries was formally adopted in January 1998. The policy is based on three primary considerations. First, ECOA and other major laws enforced under FRB’s compliance program give primary enforcement responsibility for nonbank subsidiaries of bank holding companies to FTC. Second, routine examinations of the nonbank subsidiaries would be costly. 
Third, such examinations would, in the FRB officials’ opinion, raise questions about “evenhandedness” given that similar entities, such as independent mortgage companies, that are not part of bank holding companies would not be subjected to examinations. FRB does not have specific criteria as to when it will conduct on-site investigations of these nonbank mortgage subsidiaries. According to FRB, on-site inspections of a holding company nonbank mortgage subsidiary are conducted when factors present suggest that discriminatory practices are occurring and when it seems appropriate to do so because the matter may relate to relevant managerial factors. In contrast, FRB’s policy is to conduct full, on-site examinations of the subsidiaries of the banks it regulates. Banks still account for a greater amount of lending than the other bank sector entities—bank subsidiaries and nonbank mortgage subsidiaries of holding companies. However, lending by nonbank mortgage subsidiaries has steadily increased since 1995 and outpaced other bank sector entities in 1997 (see app. III). In discussions with FTC officials, we confirmed that they do not examine or routinely investigate nonbank mortgage subsidiaries of holding companies. They emphasized that FTC is a law enforcement agency, not a regulator. FTC, they said, does not conduct compliance examinations but conducts investigations targeted at specific entities, most of which are agency-initiated. However, investigations can result from consumer complaints that indicate a pattern or practice or a public interest problem to be explored. The officials noted that FTC’s jurisdiction is broad—generally covering any lending entity that is not a bank, thrift, or their holding companies—but FTC resources are limited. They said FTC’s current ECOA enforcement efforts have focused on independent mortgage or finance companies and discriminatory pricing issues.
During the period of the six mergers that we reviewed, 1996 through 1998, FTC achieved three settlements and issued one complaint in ECOA enforcement actions; none involved bank holding company entities. In all six mergers, FRB noted that the primary banking regulator had found no evidence of illegal credit discrimination in its most recent fair lending compliance examinations. Of the two prior FRB investigations of nonbank mortgage subsidiaries, FRB found no evidence of illegal discrimination in one case. As discussed further in the next section, FRB made a referral to DOJ on the other case on the basis of the nonbank mortgage subsidiary’s use of discretionary loan pricing practices that resulted in disparate treatment based on race. FRB approved all six of the mergers, but one was approved with a condition related to a fair lending compliance issue. At the time of the merger application in question, DOJ was pursuing an investigation—on the basis of an FRB referral—of the holding company’s nonbank mortgage subsidiary. The focus of the investigation was the nonbank mortgage subsidiary’s use of discretionary loan pricing—known as overaging—which allegedly resulted in minorities disproportionately paying higher loan prices than nonminorities. The nonbank mortgage subsidiary was under a commitment with FRB not to engage in overage practices. FRB approved the merger with the condition that the holding company not resume the overage practice without FRB’s approval. DOJ subsequently entered into a settlement agreement with the nonbank mortgage subsidiary in which it agreed to change its overage policies and pay $4 million into a settlement fund. In our review of the six merger cases, we found weaknesses in some of FRB’s practices that could limit the access of various government agencies to information about the fair lending compliance performance of bank holding company entities.
Two weaknesses could limit FRB’s access to such information during consideration of bank holding company merger applications. Specifically, FRB did not routinely contact FTC or HUD to obtain information about any fair lending complaints or concerns related to the entities involved in the mergers. Moreover, FRB did not ensure that information about the structural organization of the bank holding companies was available to the public or DOJ, which could have limited the information provided to FRB by these sources. A third weakness could limit the access of other agencies with fair lending compliance responsibilities to information FRB obtained during consideration of merger applications. Specifically, FRB did not routinely provide the primary banking regulators, FTC, and HUD with the comment letters it received during the merger applications process regarding the fair lending compliance of the banks and nonbank mortgage subsidiaries of the holding companies involved in the six mergers. As discussed previously, the enforcement of fair lending laws is shared by a number of federal agencies. For example, there are four agencies (FRB, FTC, HUD, and DOJ) that have roles in fair lending enforcement with regard to nonbank mortgage subsidiaries of bank holding companies. Federal agencies involved in fair lending oversight and enforcement— including FRB, FTC, HUD, and DOJ and other federal banking regulators— recognize the need for effective coordination in their Interagency Policy Statement on Discrimination in Lending. This policy states that they will seek to coordinate their actions to ensure that each agency’s action is consistent and complementary. In keeping with the spirit of this policy, FRB routinely solicited input from the primary federal regulator for the banking subsidiaries of the holding companies involved in the merger. 
In addition, FRB and DOJ staff told us that they coordinated informally with each other during the merger application process regarding the fair lending compliance of the holding company subsidiaries involved in the mergers. However, FRB did not typically contact FTC or HUD to determine if they had ongoing investigations involving any of the bank holding company subsidiaries or other data, including consumer complaints, that could be useful in assessing the fair lending concerns raised by consumer and community groups during the merger process. In the five merger cases in which fair lending concerns about the nonbank mortgage subsidiaries were raised, FRB contacted FTC with regard to only one of the merger applications; FRB did not contact HUD in any of the cases. Without coordination with FTC and HUD, FRB cannot ensure that it has access to all relevant information about fair lending issues that may arise in its consideration of bank holding company merger applications. In three of the six merger cases, HUD had fair lending complaint investigations in process at the same time that FRB was considering the merger applications. There was one merger in which HUD had three ongoing investigations arising out of consumer complaints (complaint investigations) at the time of the merger application. For example, one of the cases that HUD was investigating during a merger involved alleged discrimination at the preapplication interview, such as minority applicants receiving less information about the bank’s mortgage products and being quoted less favorable terms than similarly qualified White applicants. All six of the complaint investigations that were in process at the time of the mergers were the result of complaints by individuals. In five of the six cases, HUD entered into conciliation agreements that involved monetary payments to the complainants ranging from $350 to $46,000. 
In soliciting input on the proposed merger, FRB did not provide or direct federal enforcement agencies or the public to structural information about bank holding companies that would identify the affiliated bank and nonbank lenders involved in the merger. As a result, federal enforcement agencies and the public may not have been able to provide all relevant information. For this reason, FRB may not have had current and complete fair lending information on bank holding companies to properly assess the fair lending activities of these companies during the merger application process. Ensuring knowledge of and access to structural information on bank holding companies, including the names and addresses of bank and nonbank lenders under the applicant, could enable the enforcement agencies to better complement FRB’s efforts to assess the fair lending activities of bank holding company entities for the merger application process. A HUD official we interviewed stated that without information from FRB regarding the structural organization of a bank holding company, HUD may not be able to identify the entities within the holding company structure that were subject to ongoing or past complaint investigations. Officials from DOJ and FTC also indicated the need for such information. Access to information about the structural organization of the holding companies involved in proposed mergers could also help improve the quality of public comments that FRB receives during the merger process. FRB staff stated that the comments that they receive from consumer and community groups often exhibit a lack of understanding of the often complex structural organization of the holding companies involved in a proposed merger—particularly as it relates to mortgage lending activity. Outlines of the hierarchical structure of bank holding companies have been available since January 1997 through the FRB’s National Information Center (NIC) on the Internet.
However, not all the government agencies and consumer and community groups may be aware of the NIC source or have access to it. In addition, the structural information provided by NIC could be viewed as somewhat overwhelming and, in that sense, difficult to use. As noted on the NIC Web site itself, the information for large institutions “can be quite lengthy and complex.” The structural information on the NIC Web site is also limited in that geographical information is provided for some, but not all, lenders within holding companies. Although the site offers the names and addresses of banking institutions’ branch offices, it does not offer such information for nonbank lenders within a holding company. To determine the affiliation of a local lender’s branch office, consumers are likely to find names and addresses necessary—especially in light of the many consolidations that are occurring in today’s financial marketplace and the similarities that can exist in lenders’ names. Because the enforcement of fair lending laws is shared by a number of federal agencies and fair lending problems may involve the interaction of entities overseen by differing federal agencies, coordinated information-sharing among the agencies can contribute to effective federal oversight. FRB staff told us they do not typically forward the fair lending-related comment letters received during the merger process to the appropriate primary banking regulator, FTC, or HUD for consideration in subsequent fair lending oversight activities. FRB staff stated that they do refer some of the fair lending-related comment letters if they identify problems or practices that give rise to supervisory concerns. They explained that their internal policies and, in the case of HUD, a Memorandum of Agreement between HUD and the banking regulators require FRB to forward consumer complaints by individuals to the appropriate federal agency.
However, FRB staff stated that comment letters that raised general fair lending issues regarding lending patterns or policies would not have been routinely forwarded to other agencies. For example, FTC did not receive the comment letters from consumer and community groups that raised fair lending issues with the nonbank mortgage subsidiaries of the holding companies involved in four of the mergers. We believe that by forwarding the fair lending-related comment letters, FRB will provide the other agencies the opportunity to detect problems that arise from the interactions of entities under the holding company structure that may otherwise go undetected. The historical division of fair lending oversight responsibility and enforcement authority presents challenges and opportunities to agencies that have jurisdiction over the entities in large bank holding companies. Although large bank holding companies typically include entities overseen by different federal regulators, some types of fair lending abuses could involve operating relationships between such entities. An adequate federal awareness during the merger application process of fair lending compliance performance and federal response to any alleged fair lending abuses may well depend upon effective information-sharing among the various agencies and the ready availability to these agencies and the public of information identifying lenders under the holding company. Although the merger application process is not intended to substitute for fair lending examination or enforcement processes of individual agencies, it presents an opportunity to enhance the effectiveness of those processes. To take advantage of this opportunity, the FRB’s merger application process for large bank holding companies should provide that relevant information, including consumer complaints or consumer complaint data, be obtained from all agencies with responsibility for compliance with fair lending laws. 
Further, the process should ensure that this information, as well as comment letters received from consumer and community groups, is shared among those agencies to assist in their continuing efforts to identify and oversee developments in mortgage lending that can affect lender compliance with fair lending laws. FRB, as regulator of bank holding companies, is uniquely situated to monitor developments in operating relationships among holding company entities that could affect fair lending. Its role could be especially valuable in monitoring the lending activity of nonbank mortgage subsidiaries. The FRB policy of not routinely examining nonbank mortgage subsidiaries for fair lending compliance and the FTC role as an enforcement agency rather than a regulator result in a lack of regulatory oversight of the fair lending performance of nonbank mortgage subsidiaries, whose growth in lending outpaced other bank sector entities in 1997. To enhance the consideration of fair lending issues during the bank holding company merger approval process, we recommend that the Board of Governors of the Federal Reserve System develop a policy statement and procedures to help ensure that

- all parties asked to provide information or views about the fair lending performance of entities within the bank holding companies are given, or directed to sources for, structural information about the holding companies; and
- all federal agencies responsible for helping to ensure the fair lending compliance of entities involved in the proposed merger are asked for consumer complaints and any other available data bearing on the fair lending performance of those entities.

To aid in ongoing federal oversight efforts, we recommend that FRB develop a policy and procedures to ensure that it provides federal agencies relevant comment letters and any other information arising from the merger application process that pertains to lenders for which they have fair lending enforcement authority.
For example, the other agencies may be interested in receiving FRB’s HMDA analysis as well as the other data obtained and analyzed by FRB in response to the fair lending allegations raised in the comment letters. In addition, we recommend that FRB monitor the lending activity of nonbank mortgage subsidiaries and consider examining these entities if patterns in lending performance, growth, or operating relationships with other holding company entities indicate the need to do so. We requested comments on a draft of this report from the Chairman of the Federal Reserve Board, the Comptroller of the Currency, the Secretary of Housing and Urban Development, the General Counsel of the Federal Trade Commission, and the Assistant Attorney General for Administration of the Department of Justice. Each agency provided technical comments, which we incorporated into the report where appropriate. In addition, we received other written comments from FRB, OCC, and HUD; these are reprinted in appendixes IV through VI of this report. With respect to the draft report’s recommendations, FRB sought clarification regarding the first recommendation, generally agreed with the next two recommendations, and disagreed with the last recommendation. OCC and HUD did not disagree with our recommendations and expressed their support for efficient and effective enforcement of the fair lending laws. Further, HUD suggested that a more formal arrangement be created for obtaining and considering agency input during FRB’s merger approval process. FRB sought clarification of our intent in the first recommendation—that, when soliciting comments on proposed bank holding company mergers, FRB provide structural information about those holding companies. FRB said that information about holding company structure is available to the public and federal agencies on the Internet at the Federal Reserve’s National Information Center (NIC) site and, upon request, from the Board and the Reserve Banks. 
FRB also said that the information is often in the application filed by the applicant bank holding company, for those who elect to review the application in full; and the information is widely available from publications and from other federal agencies. We added information to the text to clarify our intent. Our intent in recommending that FRB provide the structural information or a source or sources of such information is to enhance consideration of fair lending issues during the merger approval process. We believe that the provision of structural information, including names and addresses of branch offices of lenders, or directions about how to obtain that information, can help ensure that FRB receives from interested parties timely and complete fair lending information on lenders involved in the merger. Without being able to identify the bank and nonbank lenders in the holding companies involved in a merger, interested parties could be unable to determine if lenders whose actions have raised fair lending concerns are affiliated with those holding companies. We do not disagree that this information is sometimes available from a variety of sources. However, ready public access to that information depends upon public awareness of the availability of the information. We note that none of the Federal Register notices requesting public comment on bank holding company mergers in our sample that occurred after 1997, when NIC was created, mentioned the NIC Internet site or any other source of information about the structure of the applicant bank holding companies. Responding to the report’s statement that information provided on NIC can be quite lengthy and complex, FRB said that it believed the complexity is largely a reflection and a function of the size and scope of these large organizations. FRB also said it was not clear just how the information could be made simpler for the public. 
We agree that the complexity of the information about the largest bank holding companies on NIC is a function of the size and scope of these organizations. However, we also believe that the information could be narrowed, and in that way simplified, by a mechanism that could help interested parties focus on the relevant details of the holding company’s structure. A variety of entities are often affiliated with large holding companies, including, for example, investment, leasing, and real estate development companies. A NIC search mechanism to narrow the structural information to bank and nonbank lenders affiliated with a holding company would aid federal agencies and consumer organizations that may need such information to collect or sort through fair lending concerns about such institutions from field offices or member organizations nationwide. More focused information, including names and addresses of branch offices, would also benefit consumers attempting to determine the affiliation of a local lender’s office. As mentioned in the report, NIC provides a mechanism for obtaining lists of the names and addresses of banking institutions’ branch offices; however, it does not provide the addresses of nonbank lenders’ branch offices or list such branch offices. We believe that this is an important weakness in NIC as a tool to be used in the merger application process by agencies, consumer groups, and individuals, considering the prevalence of concerns about nonbanks’ fair lending performance in the merger cases we analyzed. FRB said that persons generally start out with the identity of the organization about which they have concerns, and it should be relatively simple to confirm whether that organization is affiliated with an applicant bank holding company. We agree that persons would generally use NIC to determine if an identified organization is affiliated with an applicant bank holding company. 
However, the ease of determining this through NIC could vary, depending upon whether the organization of concern is a banking institution or a nonbank subsidiary of the holding company. As of October 10, 1999, NIC users could determine the holding company affiliation of a banking institution (but not a nonbank holding company subsidiary) by entering the legal name of a banking institution (or even part of that name) and the city and state in which the institution is located. NIC also offered a function enabling users to obtain a listing of addresses of all branch offices of banking institutions (but not addresses of franchises or branch offices of lenders that are nonbank holding company subsidiaries). To confirm a nonbank lender’s affiliation with an applicant bank holding company, the interested party’s only option is to search for the nonbank lender’s legal name while reading through the multipage listings of entities that describe the entire hierarchical structure, starting with the parent holding company. The only geographical information provided for a nonbank holding company subsidiary in the listing is the city and state domicile of the head office—that is, no branch offices or franchises are identified in the listing. Referring to our mention of the absence of geographic information on NIC, FRB notes that a person’s concerns about a particular entity will likely relate to the geographic area in which the person resides, or to which the person has some link. We agree with this statement. We also believe that a person concerned about a particular local lender is likely to need to see the names and addresses of lenders affiliated with holding companies involved in a proposed merger to determine if his or her concern about the local lender is relevant for FRB’s consideration.
With regard to our recommendation for greater information sharing between FRB, the other banking regulators, HUD, and FTC during the merger application process, FRB generally agreed and said it would explore ways to enhance the systematic exchange of relevant information. However, FRB did not agree that it should seek information about other agencies’ consumer complaints as part of the merger application review process, for two reasons. First, a 1992 Memorandum of Understanding between HUD and the banking regulators calls for HUD to refer allegations of fair lending violations to the appropriate banking regulator, which is to take these into account in examinations and supervisory activity. Second, HUD cases involving individual or isolated grievances—and not a finding of a pattern or practice—would not likely represent the type of information that is particularly useful in FRB’s review of managerial resources for purposes of the Bank Holding Company Act. Although the 1992 Memorandum of Understanding between HUD and the banking regulators calls for the referral of allegations of fair lending violations to the appropriate banking regulator, it does not address the referral of these fair lending allegations to FRB for consideration during the bank holding company merger application process. The fair lending allegations received by HUD, FTC, and the other banking regulators could be useful to FRB in its consideration of the managerial resources factor during the merger process. We acknowledge that not all consumer complaints received by other agencies would be relevant for FRB to consider during the bank holding company merger process. However, an otherwise unobserved pattern or practice bearing on the managerial resources of a large and complex holding company could emerge from a review of widely collected consumer complaints.
Moreover, consumer complaint letters can be a useful indicator of certain types of illegal credit discrimination, such as discriminatory treatment of applicants and illegal prescreening and marketing. FRB stated that the exchange of information between agencies should (1) ensure that the information is provided in a timely manner and (2) maximize the benefits of the exchange while minimizing the burden to all parties. We concur with FRB’s expectations regarding the exchange of information and acknowledge FRB’s initiative in planning to consult with the other federal agencies to identify possible ways to enhance the systematic exchange of relevant information. FRB stated that it planned to take action in response to our recommendation that it provide copies of relevant comment letters received during the merger application process to the other federal agencies involved with fair lending enforcement. Specifically, FRB indicated that it would consult with the other agencies and was prepared to establish whatever mechanism deemed appropriate to ensure that the agencies receive public comments that they would find helpful to ongoing supervisory oversight. FRB’s plans are a positive first step in responding to our recommendation. FRB disagreed with our recommendation as stated in the draft report that it monitor the lending activities of nonbank mortgage subsidiaries and consider reevaluating its policy of not routinely examining these entities if circumstances warranted. FRB stated that it had recently studied this issue at length and concluded that although it had the general legal authority to examine nonbank mortgage subsidiaries of bank holding companies, it lacked the clear enforcement jurisdiction and legal responsibility for engaging in routine examinations. We revised the wording of our recommendation to clarify that we were not necessarily recommending that FRB consider performing routine examinations of nonbank mortgage subsidiaries. 
We recognize that FTC has the primary fair lending enforcement authority for the fair lending compliance of nonbank mortgage subsidiaries. However, FRB is uniquely situated to monitor the activities of these nonbank mortgage subsidiaries by virtue of its role as the regulator of bank holding companies and its corresponding access to data that are not readily available to the public or other agencies, such as FTC. If patterns in growth, lending performance, or operating relationships with other holding company entities do not change dramatically, then there may be no reason to examine these entities. Monitoring the lending activities of the nonbank mortgage subsidiaries would help FRB determine when it would be beneficial to conduct targeted examinations of specific nonbank mortgage subsidiaries using size, extent of lending in predominately minority communities, involvement in sub-prime lending, or other factors as the basis for selection. In other cases, FRB may determine that the results of its monitoring efforts should be referred to those agencies responsible for enforcement of nonbank mortgage subsidiaries’ compliance with fair lending laws. OCC and HUD did not disagree with our recommendations. OCC stated that it was committed to working with all the agencies that have a role in providing efficient and effective oversight of compliance with fair lending laws. HUD stated that it stands committed to enhancing coordination among federal agencies to achieve fair lending. HUD noted its support for efforts to ensure greater compliance among nondepository lenders with the FHAct and other consumer protection laws. HUD suggested that a memorandum of understanding that would govern interagency coordination during the merger application process might be appropriate. Such a memorandum could be a useful tool to document each agency’s responsibility regarding information sharing and coordination during the merger application process for bank holding companies. 
As agreed with your offices, we are sending copies of this report to Representative Rick Lazio, Chairman, and Representative Barney Frank, Ranking Minority Member, of the House Subcommittee on Housing and Community Opportunities; Representative James Leach, Chairman, and Representative John LaFalce, Ranking Minority Member, of the House Committee on Banking and Financial Services; and Senator Phil Gramm, Chairman, and Senator Paul Sarbanes, Ranking Minority Member, of the Senate Committee on Banking, Housing, and Urban Affairs. We are also sending copies of the report to the Honorable Alan Greenspan, Chairman, Board of Governors of the Federal Reserve System; the Honorable John D. Hawke, Jr., Comptroller of the Currency; the Honorable Andrew Cuomo, Secretary, Department of Housing and Urban Development; the Honorable Stephen R. Colgate, Assistant Attorney General for Administration, Department of Justice; and the Honorable Deborah A. Valentine, General Counsel, Federal Trade Commission. Copies will also be made available to others on request. If you or your staff have any questions regarding this letter, please contact me or Kay Harris at (202) 512-8678. Key contributors to this report are acknowledged in appendix VII.

GAO recommendation: Remove the disincentives associated with self-testing.
Responsible agency(ies): Federal Reserve Board (FRB) and Department of Housing and Urban Development (HUD).
Action taken by agency(ies): Congress enacted legislation in September 1996. FRB and HUD issued implementing regulations in December 1997.

Responsible agency(ies): FRB, Office of the Comptroller of the Currency (OCC), Federal Deposit Insurance Corporation (FDIC), Office of Thrift Supervision (OTS), and National Credit Union Administration (NCUA).
Action taken by agency(ies): The Federal Financial Institutions Examination Council approved Interagency Fair Lending Examination Procedures in December 1998.

GAO recommendation: Adopt guidelines and procedures for the use of preapplication discrimination testing.
Responsible agency(ies): Department of Justice (DOJ).
Action taken by agency(ies): DOJ issued updated guidance on pattern and practice of discrimination to the banking regulators and HUD in November 1996. NCUA is in the process of developing guidance to address preapplication testing.

In addition to the issues raised by consumer and community groups in the six mergers that we looked at, representatives of the regulatory and enforcement agencies and the bank holding companies we contacted identified various emerging fair lending issues. These issues involved (1) credit scoring, (2) automated loan underwriting, and (3) mortgage brokers. The fair lending concerns associated with these three issues are discussed below. We do not attempt to address all of the various and complex enforcement, compliance, and consumer protection issues associated with each of the three topics. Instead, we highlight some of the fair lending concerns that have been associated with each topic. The Federal Reserve Board (FRB) and the Department of Justice (DOJ) raised the issue of potential discrimination in credit scoring as an emerging fair lending concern. The Office of the Comptroller of the Currency (OCC) expressed the concern that some lenders may view credit scoring as a safe harbor from fair lending issues. This would ignore the possibility that differential treatment may occur in segmenting the applicant population during the development or input of the data, or in judgmental overrides of the credit-scoring system. According to credit reporting companies (credit bureaus), credit scoring is intended to be an objective method for predicting the future credit performance of borrowers. Credit scoring has gained wide usage among lenders who use it to make lending decisions on various types of loans, such as installment; personal finance; bankcard; and, most recently, mortgages.
To develop a credit-scoring system, lenders generally use a risk-scoring process that examines consumer credit reports, assigns numerical values to specific pieces of information, puts those values through a series of mathematical calculations, and produces a single number called a risk score or credit score. Lenders generally offer credit to borrowers with higher scores. The premise is that higher scores indicate a better likelihood that the borrower will repay the loan. According to FRB, discrimination in credit scoring could be revealed in two ways, either through disparate treatment or disparate impact. Disparate treatment and disparate impact are methods of analyzing whether discrimination exists. The disparate treatment analysis determines whether a borrower is treated less favorably than his/her peers due to race, sex, or other characteristics protected by the Equal Credit Opportunity Act (ECOA) or the Fair Housing Act (FHAct). The disparate impact analysis determines whether a lender’s seemingly neutral lending policy has a disproportionately adverse impact on a protected group, whether the policy is justified by business necessity, and whether a less adverse alternative to the policy or practice exists. OCC, DOJ, and the Federal Trade Commission (FTC) agree that fair lending concerns in credit scoring most often arise when lenders ignore the credit score (i.e., override the score) and use subjective judgment to make a lending decision. Fair lending concerns associated with credit scoring were not raised as an issue in any of the six bank holding company mergers in our study. Officials from all four of the bank holding companies we interviewed stated they used credit-scoring systems. However, they indicated that their credit-scoring systems were applied with safeguards designed to ensure compliance with fair lending laws and regulations. From 1990 through 1998, the regulators and enforcement agencies had few cases of discrimination in credit scoring.
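The risk-scoring process described above (assign point values to pieces of credit-report information, sum them into a single score, and compare the score to a cutoff) can be sketched as a toy scorecard. This is an illustrative sketch only: every attribute, point value, and the cutoff below are invented and do not reflect any actual scoring model, and the override parameter simply models the judgmental-override step that the agencies identify as the most common source of fair lending concerns.

```python
# Toy credit scorecard: assign point values to credit-report attributes,
# sum them into a single risk score, and compare against a cutoff.
# All attributes, point values, and the cutoff are invented for illustration.

SCORECARD = {
    "years_of_credit_history": lambda v: 5 * min(v, 10),           # up to 50 pts
    "late_payments_last_2y":   lambda v: -20 * v,                  # penalty per late payment
    "utilization_pct":         lambda v: 30 if v < 30 else (10 if v < 70 else 0),
}
CUTOFF = 60  # hypothetical approval threshold

def risk_score(report):
    """Sum the points assigned to each attribute of a credit report."""
    return sum(rule(report[attr]) for attr, rule in SCORECARD.items())

def decision(report, override=None):
    """Score-based decision; `override` models a judgmental override of the
    score, the step regulators flag as the most common fair lending concern."""
    if override is not None:
        return override
    return "approve" if risk_score(report) >= CUTOFF else "decline"

applicant = {"years_of_credit_history": 8,
             "late_payments_last_2y": 0,
             "utilization_pct": 25}
print(risk_score(applicant), decision(applicant))  # 70 approve
```

Note that in a sketch like this the score itself is mechanical; the fair lending cases cited in the report arose either from applying a different cutoff to a protected group or from overriding the score with subjective judgment, not from the summation step.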
OCC referred a case in 1995 and another in 1998 to DOJ that dealt with alleged discrimination in credit scoring. An agreement was reached with OCC in the 1995 case, and the 1998 referral resulted in DOJ filing a lawsuit. In this particular case, DOJ alleged that the bank required a higher credit score for Hispanic applicants to be approved for loans/credit. FTC cited one case of credit discrimination in 1994, which resulted in a consent decree. In this case, the lender had used overrides of the credit-scoring system that discriminated against applicants on the basis of marital status. The fair lending issues that were raised regarding credit scoring are closely related to the issues associated with automated loan underwriting. According to the Federal National Mortgage Association (Fannie Mae), automated loan underwriting is a computer-based method that is intended to enable lenders to process loan applications in a quicker, more efficient, objective, and less costly manner. The lender enters information from the borrower’s application into its own computer system. This information is communicated to an automated loan underwriting system, such as those developed by Fannie Mae and the Federal Home Loan Mortgage Corporation (Freddie Mac). The lender then requests a credit report and credit score from a credit bureau. The automated loan underwriting system then evaluates the credit bureau data and other information to arrive at a recommendation about whether or not the loan meets the criteria for approval. “Currently, there is little known about the effects of automated underwriting systems on low- and moderate-income or minority applicants. Some informants believe these systems may prevent underwriters from taking full advantage of the increased levels of underwriting flexibility allowed by the GSEs.
Lower income applicants are more likely to be required to produce documentation supporting their loan application, such as letters explaining past credit problems or statements from employers about expected salary increases. Automated systems may not have the ability to assess all of these kinds of data, and so may place lower income borrowers at a disadvantage. Informants also raised concerns that these systems may allow lenders to reduce their underwriting staff because automated systems increase the productivity of individual underwriters. Lenders, informants pointed out, could reduce staff and only process applications identified by automated systems as requiring minimal further review. As a result, automated systems may make it harder for marginal applicants to receive personalized attention from an underwriter.” Representatives of the four holding companies that resulted from the mergers included in our study stated that they all used automated loan underwriting and credit-scoring systems to some degree. Three of the four holding companies said they have adopted a program in which loans that are not initially approved by their automated loan underwriting systems are subject to a secondary review by an experienced loan underwriter. Although the secondary review programs added costs and time to the process, the holding companies stated that the reviews were necessary to guard against potential disparate impacts in lending to minorities. Another concern raised by bank holding company officials we met with involved a lender’s liability for the fair lending activities of mortgage brokers who are affiliated in some fashion with the lender. Although no standard definition of a mortgage broker exists, mortgage brokers are generally entities that provide mortgage origination or retail services and bring a borrower and a creditor together to obtain a loan from the lender (or funded by the lender).
Typically, the lender decides whether to underwrite or fund the loan. HUD defines two categories of mortgage brokers. HUD’s narrowly defined category consists of entities that may have an agency relationship with the borrower in shopping for a loan and therefore have a responsibility to the borrower because of this agency representation. HUD’s broadly defined category consists of entities that do not represent the borrower but that may originate loans with borrowers utilizing funding sources with which the entity has a business relationship. The banking industry is concerned that lenders could be held liable for a fair lending violation resulting from the activity of a mortgage broker that provides origination or retail services for a lender. When lenders use mortgage brokers in providing mortgage credit, it is not always clear whether the lender, the mortgage broker, or both are responsible for the credit approval decision. FRB officials noted differences between the federal enforcement agencies and FRB with respect to the criteria used to determine when lenders are responsible for lending transactions involving brokers. Of the four holding companies resulting from the mergers in our study, three indicated that they use mortgage brokers. Officials of one of the holding companies we contacted said they wanted additional clarification from bank regulators regarding the bank’s liability for its lending decisions in transactions involving brokers because it used mortgage brokers extensively in making manufactured housing and automobile loans. ECOA, as implemented by FRB’s Regulation B, defines a creditor as someone who “regularly participates” in credit-making decisions.
Regulation B includes in the definition of creditor “a creditor’s assignee, transferee, or subrogee who so participates.” For purposes of determining if there is discrimination, the term creditor also includes “a person who, in the ordinary course of business, regularly refers applicants or prospective applicants to creditors, or selects or offers to select creditors to whom requests for credit may be made.” Regulation B states that “a person is not a creditor regarding any violation of ECOA or regulation B committed by another creditor unless the person knew or had reasonable notice of the act, policy, or practice that constituted the violation before becoming involved in the credit transaction.” This is referred to as the “reasonable notice” standard. On the basis of the definition of creditor contained in Regulation B and the specific facts, a mortgage broker can be considered a creditor and a lender can also be considered a creditor even if the transaction involves a mortgage broker. FRB noted that lenders have increasingly asked for guidance regarding the definition of a creditor as they expand their products and services. In March 1998, FRB issued an Advance Notice of Proposed Rulemaking that solicited comments related to the definition of “creditor” and other issues as part of its review of Regulation B. Specifically, FRB solicited comments on whether (1) it was feasible for the regulation to provide more specific guidance on the definition of a creditor; (2) the reasonable notice standard regarding a creditor’s liability should be modified; and (3) the regulation should address under what circumstances a creditor must monitor the pricing or other credit terms when another creditor (e.g., a loan broker) participates in the transactions and sets the terms.
On August 4, 1999, FRB published proposed revisions to Regulation B that expand the definition of creditor to include a person who regularly participates in making credit decisions, including setting credit terms. In the Discussion of Proposed Revisions to the Official Staff Commentary (the Discussion), FRB stated that it believes that it is not possible to specify by regulation with any particularity the circumstances under which a creditor may or may not be liable for a violation committed by another creditor. Thus, FRB decided that Regulation B would retain the “reasonable notice” standard for when a creditor may be responsible for the discriminatory acts of other creditors. In the Discussion, FRB further stated that it believes that the reasonable notice standard may carry with it the need for a creditor to exercise some degree of diligence with respect to third parties’ involvement in credit transactions, such as brokers or the originators of loans. However, FRB believes that it is not feasible to specify by regulatory interpretation the degree of care that a court may find required in specific cases. Opinions vary among regulatory agencies in terms of a lender’s liability in transactions that involve mortgage brokers. OCC and FRB share the view that a broker must be an agent of the lender, or the lender must have actual or imputed knowledge of a broker’s discriminatory actions, for a lender to share liability for discrimination by a broker. DOJ has taken the position that lenders are liable for all of their lending decisions, including those transactions involving mortgage brokers. In 1996, DOJ took one enforcement action involving a mortgage broker. The case involved mortgage company employees and brokers charging African-American, Hispanic, female, and older borrowers higher fees than were charged to younger, White males. HUD officials told us their agency has not taken a position on this issue.
FTC officials told us that FTC has not taken any action that reflects a position on this issue. From 1995 through 1997, Federal Reserve Board (FRB) data indicated that home mortgage lending activity by institution type within the financial sector generally increased as measured by the total number of loans originated. Figure III.1 provides an overview of mortgage lending activity by financial sector. It shows that the bank sector originated more loans than the thrift sector or independent finance companies over this period, during which the large bank holding company mergers we studied occurred. As discussed previously, banking regulators (FRB, Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and the Office of Thrift Supervision) have the primary oversight responsibility for the bank and thrift sectors. The Federal Trade Commission (FTC) and the Department of Housing and Urban Development (HUD) are responsible for fair lending enforcement of independent finance companies, which are not addressed in this study. Figures III.2 and III.3 provide overviews of lending by components of the bank sector: banks, bank subsidiaries, and nonbank mortgage subsidiaries of bank holding companies. The home mortgage lending activity of the three components has remained relatively stable from 1995 to 1997. Figure III.2 shows that banks originated the most home mortgage loans in this period, followed by bank subsidiaries and then nonbank mortgage subsidiaries of bank holding companies. Figure III.3 reveals the same pattern when dollar value of loans is considered. However, the data reveal that, for both bank subsidiaries and bank holding company mortgage subsidiaries, the share of home mortgage originations measured by dollar value was larger than their share measured by the number of loans originated.
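The comparisons behind figures III.2 through III.5 (each component's share of originations by loan count versus by dollar value, and year-over-year percent change) can be sketched as below. The numbers are invented placeholders, not the actual HMDA data behind the report's charts; only the arithmetic is the point.

```python
# Illustrative computation of the comparisons shown in figs. III.2-III.5:
# each sector's share of loan originations by count vs. by dollar value,
# and year-over-year percent change. Figures are invented placeholders.

originations = {                      # (number of loans, dollar value in $ billions)
    "banks":                 (2_400_000, 250.0),
    "bank subsidiaries":     (1_100_000, 140.0),
    "nonbank mortgage subs": (  600_000,  90.0),
}

total_count = sum(n for n, _ in originations.values())
total_value = sum(v for _, v in originations.values())

for sector, (n, v) in originations.items():
    # A dollar-value share larger than the count share (as the report notes
    # for the subsidiaries) indicates larger-than-average loan sizes.
    print(f"{sector:22s} share by count {n / total_count:5.1%}, "
          f"share by value {v / total_value:5.1%}")

def pct_change(prev, curr):
    """Year-over-year percent change, as plotted in fig. III.4."""
    return (curr - prev) / prev * 100

print(pct_change(500_000, 600_000))  # 20.0
```

The same two-measure comparison explains why a sector can look small by loan count yet carry a disproportionate dollar volume.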
The banking regulators are responsible for the fair lending oversight of the banks and bank subsidiaries; FTC and HUD are responsible for fair lending enforcement of the nonbank mortgage subsidiaries of bank holding companies. Because the nonbank mortgage subsidiaries of bank holding companies are not routinely examined for fair lending compliance by any federal regulatory or enforcement agencies, we analyzed their rate of growth compared to other bank sector lenders. Figure III.4 shows that in 1997, the percent change in loan originations by nonbank mortgage subsidiaries of bank holding companies was large in comparison to loan originations by banks and banking subsidiaries. Figure III.5 shows that the dollar value of mortgage loan originations has a pattern similar to the percentage change in loan originations. Figures III.4 and III.5 combined show an increasing presence in home mortgage lending by nonbank mortgage subsidiaries of bank holding companies. In addition to those named above, Harry Medina, Janet Fong, Christopher Henderson, Elizabeth Olivarez, and Desiree Whipple made key contributions to this report.
| Pursuant to a congressional request, GAO reviewed large bank holding company mergers and regulatory enforcement of the Fair Housing Act and the Equal Credit Opportunity Act, focusing on the: (1) fair lending issues raised by consumer and community groups during the application process for six large bank holding company mergers; and (2) Federal Reserve Board's (FRB) consideration of those issues. GAO noted that: (1) in each of the six mergers, consumer and community groups raised the issue of perceived high loan denial and low lending rates to minorities by banks, bank subsidiaries, and nonbank mortgage subsidiaries involved in the mergers; (2) in four merger cases, community and consumer groups were concerned about alleged potential discriminatory practices of the holding companies' nonbank mortgage subsidiaries; (3) nonbank mortgage subsidiaries are not subject to routine examinations by federal regulators for compliance with fair lending and other consumer protection laws and regulations; (4) the fair lending laws generally confer enforcement authority for nonbanking companies on the Federal Trade Commission, Department of Housing and Urban Development, or Department of Justice and do not specifically authorize any federal agency to conduct examinations of nonbanking companies for compliance with these laws; (5) the consumer and community groups were concerned that: (a) sub-prime lending activities of the nonbank mortgage subsidiaries had resulted or could result in minorities being charged disproportionately higher rates and fees; and (b) minority loan applicants were being "steered" between the affiliated banking or nonbank subsidiaries of the holding company to the lender that charged the highest rates or offered the least amount of services; (6) other fair lending issues included alleged discriminatory prescreening and marketing, low lending rates to minority-owned small businesses,
discriminatory treatment of applicants, and redlining; (7) FRB considered these fair lending issues in the six merger cases by analyzing information from various sources, including the bank holding companies involved in the mergers and other federal and state agencies; (8) FRB staff analyzed Home Mortgage Disclosure Act data provided annually by the banks and nonbank mortgage subsidiaries involved in the mergers; (9) FRB staff stated that they placed heavy emphasis on prior and on-going compliance examinations performed by the appropriate primary banking regulators for the banks involved in the merger; (10) examinations for nonbank mortgage subsidiaries were generally not available because these entities are not routinely examined by any federal agency; (11) in two of the six mergers in GAO's review, FRB had previously performed compliance investigations of nonbank mortgage subsidiaries involved in the mergers; and (12) according to FRB staff, FRB had used its general examination and supervisory authority for bank holding companies to conduct these particular investigations. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Student demographics: To be eligible for Job Corps, interested youths must be at least age 16 and not yet age 25 at the time of enrollment, and they must be considered low income and have an additional barrier to employment. These barriers include being a school dropout, a runaway, a foster child, a parent, or homeless. In program year 2007, the latest year for which data were available, approximately 60 percent of the students were male and 40 percent were female. The student population reflected diversity and approximately 75 percent of the students were nonwhite (see fig. 2). The percentages for student characteristics were calculated using the total number of students enrolled in Job Corps during program year 2007. Within the industry areas, the specific career choices at the centers we visited also varied. For example, all 6 of the centers that we visited with health care classes offered training as a certified nursing assistant, but only 1 center offered dental assistant training. In addition, 2 centers offered training as a medical assistant, and 3 offered training as a pharmacy technician. Similarly, the 6 centers with construction trades offered carpentry and facilities maintenance, and 4 offered painting and brick masonry. Only 1 center offered plumbing. The 6 centers with training in the hospitality industry offered culinary arts. (See app. III for a complete listing of career training offerings for each center that we visited.) Evaluations of Job Corps: Few evaluative studies have been conducted over the years to determine whether Job Corps is cost-effective, and, when these studies have been done, the results have been mixed. In 1982, Mathematica Policy Research, Inc., performed a return on investment analysis and concluded that Job Corps returned $1.46 to society for every $1.00 spent on the program. 
Later, Mathematica conducted another Job Corps study that was based on an experimental design where, from late 1994 to early 1996, nearly 81,000 eligible applicants nationwide were randomly assigned either to a program group, whose members were allowed to enroll in Job Corps, or to a control group, whose members were not enrolled in Job Corps. Mathematica followed its sample members for 4 years after their random assignments. In its report issued in 2001, Mathematica concluded that Job Corps was cost-effective in that the value of the benefits exceeded the costs of the program by about $17,000 per participant over his or her lifetime. Among its conclusions, Mathematica reported that Job Corps substantially increased the education and training services that youths received, improved these youths’ skills and educational attainment, generated employment and earnings gains, significantly reduced involvement with crime, was cost-effective despite its high costs, and was a good investment. Mathematica issued a follow-up report in 2006 that examined the results of the 1994- to 1996-study group over a longer period. In this report, Mathematica analyzed earnings and employment rates through 2004. While Mathematica found that some of the program results reported in 2001 persisted, such as improving educational attainments and reducing involvement in crime, overall earnings gains did not persist. Mathematica concluded that the benefits to society of Job Corps were smaller than the program costs, but acknowledged that the results reflect the program as it operated in 1994 to 1996 and not necessarily as it operates today. Currently, Labor does not have plans to conduct any further long-term evaluation of Job Corps. The Job Corps program has been operating at or near capacity for male residential students, but under capacity for female residential students during the last 3 program years.
During those years, Job Corps overall achieved between 95 and 98 percent of the planned enrollment for male residential students, but achieved about 80 percent or less of the planned enrollment for female residential students (see fig. 6). In general, operating at or near capacity for female residential students has been challenging. The majority of outreach and admissions contractors we surveyed told us that recruiting female residential students was much more difficult during the most recently completed program year than recruiting male residential students. For example, 81 percent of these outreach and admissions contractors told us that recruiting female students into Job Corps was either moderately or very difficult versus 29 percent for male students. In addition, we found that while about 62 percent of the Job Corps centers were operating at or near capacity for male residential students in program year 2007, only about 17 percent of the centers were operating at or near capacity for female residential students. (See app. IV for more information on the planned and actual enrollment for male and female residential students, by center.) Moreover, about one-half of the 117 centers that enrolled female residential students in program year 2007 were below 80 percent of their planned enrollment for female residential students. Several centers achieved one-half or less than one-half of their planned enrollment for female residential students (see table 2). Operating at less than full capacity represents not only a lost opportunity to provide services to more youths in need of educational or career training, but also represents an inefficient use of resources. Because most of Job Corps’ operating costs are fixed, such as costs for heat, electricity, and staff salaries, these costs are incurred whether a center is full or not. In program year 2007, Job Corps’ operating costs were about $1.5 billion, with a planned enrollment of about 44,000 slots. 
Thus, on average, a slot costs about $34,000. In program year 2007, Job Corps had about 3,700 unfilled residential slots, about 90 percent of which were planned for female residential students. One factor affecting centers’ ability to operate at or near capacity is how long students stay in the program once enrolled. Job Corps is a self-paced program, and, as a result, the length of stay for students varies. On average, during program year 2007, Job Corps students remained in the program for about 8 months. Students leave the program for a variety of reasons. In program year 2007, about one-half of the students who left Job Corps were dismissed for violating program policies, such as those related to violence and drug and alcohol use (discipline), or for exceeding the number of unauthorized absences and being considered absent without leave, or AWOL. About 36 percent of the students separated as orderly completions—that is, they completed program requirements and left the program as scheduled. (See fig. 7.) Nationally, there were some differences between male and female students in the reasons for leaving Job Corps. In program year 2007, a somewhat higher percentage of female students left the program as scheduled having completed program requirements (orderly completion). Furthermore, a higher percentage of female students were dismissed for violating the program’s policy for unauthorized absences, or AWOL, while a higher percentage of male students were dismissed from the program for violating program policies, such as those related to violence and drug and alcohol use (discipline) in program year 2007. (See fig. 8.) Three major factors affect the recruitment and retention of residential students, particularly female residential students, according to Job Corps officials.
These key factors include the selection and availability of career training offerings, the availability of complete and accurate preenrollment information, and the quality of center life. The selection and availability of career training offerings in occupations of interest to students play a major role in Job Corps’ ability to recruit students, particularly female residential students. In particular, a large percentage of outreach and admissions contractors (91 percent) and center directors (79 percent) we surveyed cited the availability of particular career training offerings as very important in attracting female residential students to the program. Somewhat fewer officials rated this factor as very important for male residential students. (See fig. 9.) Providing training in careers that are attractive to women may enable Job Corps to recruit more female students. Many Job Corps officials we interviewed emphasized the importance of centers offering training in a range of careers that are attractive to female students, including training in the health care, business and finance, and hospitality industries. In program year 2007, about 80 percent of the graduates in health care training programs were women. (See fig. 10.) Many female students told us in focus groups that they were attracted to Job Corps because of the training offered in specific health care occupations, such as certified nursing assistant and pharmacy technician. Figure 11 contains photographs of health care training programs at 2 Job Corps centers that we visited where students practice in classrooms that resemble real-life settings. The centers we visited that offered a variety of health care training options had relatively higher female enrollment. For example, the 4 centers we visited that were operating above 80 percent of their planned enrollment for female residential students offered a variety of health care training programs. 
However, the centers we visited that were below 60 percent of their planned enrollment for female students offered few, if any, health care training options. (See table 3.) Another major factor affecting Job Corps’ ability to both recruit and retain residential students is the availability of accurate and complete preenrollment information for prospective students. Having accurate information prior to enrolling in Job Corps helps students choose the center that they think best meets their needs and helps establish realistic expectations for what it will be like to live and train at the center, according to officials that we interviewed. While accurate and complete preenrollment information is important for all students, regardless of gender, these officials reported that it is especially important to highlight certain aspects of the program, such as the living arrangements, for female students prior to enrollment. Most of the outreach and admissions contractors that we surveyed reported that certain aspects of the living arrangements, such as the condition of the living facilities (about 91 percent) and the number of students per dormitory room (about 74 percent), were very important in recruiting female residential students. A much lower percentage of outreach and admissions contractors reported that living arrangements were very important in recruiting male residential students. Having realistic expectations helps students adjust to Job Corps. According to officials that we interviewed, such expectations are key to students’ decision to remain in the program. Several officials we interviewed said that students who lack a complete understanding of what it will be like to live and train at a center prior to enrollment will be more likely to leave the program early. 
According to these officials, complete and accurate preenrollment information on all aspects of the program helps to preclude students from forming false expectations as well as prevents any major surprises when they arrive at a center. Furthermore, we found that the nature of the preenrollment information that students received varied. For example, one official we interviewed told us that he provided potential students with a handout containing detailed information on Job Corps training programs, including employment-related age restrictions for certain careers. Alternatively, another official provided prospective students with more general information on the program and available career training opportunities. In our focus groups, we found that several students did not receive complete and accurate information prior to enrolling in the program. For example, some female focus group participants at 1 center said that they were not told they would be sharing a dormitory room with seven other students. In another focus group, participants commented that they were not provided with complete information about specific center rules, such as cell phone use and acceptable attire. While they had decided to stay in Job Corps, these students acknowledged that the transition was difficult because they lacked realistic expectations. Preenrollment tours, virtual tours, and center videos can be important tools in establishing realistic expectations of Job Corps life. About 80 percent of the outreach and admissions contractors we surveyed reported that a preenrollment tour and a center video or virtual online tour are at least moderately important in helping female students make a realistic decision about enrolling at a particular center. Some officials we interviewed also said that preenrollment tours are very important because they provide students with an opportunity to see and experience what it is like to live and train at a particular center.
Because of key center differences, such as size and appearance, several officials emphasized the importance of showing students the center where they plan to enroll to prevent false expectations. In fact, one center director did an analysis of all students who, from April 2008 through April 2009, left the center within 60 days of enrollment due to either resignation or AWOL separation, and found that about 70 percent of them had not taken a tour of the center. Some officials with whom we spoke acknowledged that center videos and virtual tours are useful recruitment strategies to provide students who are unable to participate in a preenrollment tour with an opportunity to see and experience center life. Once students enroll at a center, the quality of center life—such as a safe environment, consistent enforcement of center rules, and the availability of recreational and extracurricular activities—have a major effect on the retention of students, especially female residential students. In particular, center directors that we surveyed ranked several factors related to center life as especially important in retaining female residential students. For example, over 80 percent of the center directors we surveyed reported that safety, consistent enforcement of the center’s rules, and the condition of the living facilities are very important for retaining female residential students. (See fig. 12.) Maintaining a safe center environment and consistently enforcing center rules are both important factors in retaining residential students. Over 85 percent of center directors that we surveyed reported that safety was a major factor in the retention of female students in particular. In addition, our focus group participants commented on the importance of feeling safe while at the center. 
At 1 Job Corps center we visited, focus group participants said that center staff at all levels—including the center director, instructors, security staff, and facility maintenance personnel— work very hard to ensure a safe center environment by addressing student incidents in a timely manner. In our survey, 85 percent of center directors also reported that the consistent enforcement of center rules was very important in retaining female residential students. During our site visits, several officials said that the enforcement of center rules helped to create a center environment where female students felt safe on campus. Recreational and extracurricular activities are important for male and female residential students, but it is particularly important for centers to have specific activities for female students, according to many officials that we interviewed. To help retain female students in the program, most Job Corps centers we visited developed recreational and extracurricular activities. For example, officials at 1 center we visited said that they offer specific activities that may interest female students, such as volleyball, exercise classes, and talent shows. Female focus group participants at this center told us they appreciated the various types of available activities. Additionally, officials at another center said that they set aside specific days for female students to use the weight room to ensure that male students did not dominate the equipment. Labor has made some improvements to career training offerings, preenrollment information, and quality of center life in an effort to address issues related to the recruitment and retention of residential students. However, Labor has not reviewed nationally the training options that centers provide for female students or ensured that students receive detailed preenrollment information. 
Labor has gradually made more training opportunities available to Job Corps students that are likely to appeal to female students and lead to self- sufficiency. Job Corps began as a predominantly male program in the 1960s, and many of its training providers in the construction area have been involved with the program since the 1960s or 1970s. Over time, the program has increasingly provided training options that are often attractive to female students and result in jobs that are in demand. Many of the additions or expansions of course offerings are generated by individual centers. Centers submit a request to Labor that documents the demand and wages of the occupation and includes, among other things, statements from local employers and information on the local labor market, such as entry-level wages and job availability over the next 5 to 10 years. During program year 2007, Labor approved requests from 26 centers to add or expand their career training offerings, most commonly in the health care area. Some of the expansion of career training options has come through one of Job Corps’ regional initiatives that were begun as a result of new requirements by Labor. Under Labor’s “New Vision for Job Corps”—an effort that seeks to modernize the program, including its academics and career training options—Labor required each Job Corps regional office to submit a proposal for a regional initiative. These initiatives, or labs, form part of a broad strategy to align training content with industry standards and certifications. (See app. V for a listing of the regional initiatives approved by Labor.) One of Labor’s six Job Corps regions is implementing an initiative focused on developing training paths through additional training and forging employer partnerships, particularly in the automotive and health care industries. This initiative allows students to pursue an incremental course of study that links different centers in certain training areas. 
For example, a student may enroll in a medical assistant program, and could obtain additional training and certifications in such areas as pharmacy technician or phlebotomist, even if the additional training was offered at a different center. The region is also partnering with an ambulance company to start offering basic emergency medical technician and advanced paramedic training at a few centers. This company plans to hire students who complete the training. As a result, regional officials told us that they expect the initiative to increase both male and female student enrollment and to have a positive impact on graduates’ long-term earnings. While these initiatives show promise in expanding career training options that will both attract more female students and have better linkages to local employers, they are limited in scope. Labor officials noted that centers and regional Labor offices try to offer a mix of training, including options appealing to women. However, Labor has not been strategic in how it addresses issues related to female recruitment and retention, nor has it examined how the mix of career training offerings nationwide might be a factor. Typically, Labor waits until a center requests to add or expand a career training option before it responds. Labor has not conducted a center-by-center review of career training options at a national level to determine whether centers struggling with female recruitment and retention should modify their career training options to make them more attractive to women. Such a review could identify training gaps and could help centers in their efforts to operate at or near capacity, especially for female students. Labor has begun to take some steps to ensure that potential students receive consistent information about Job Corps prior to enrollment. 
Labor’s national office has assumed responsibility for the mass marketing of Job Corps in an effort to efficiently and economically provide a consistent general message about the program. Labor’s marketing contractor has produced print materials along with television and radio advertisements that include a national toll-free telephone number so that interested youths may obtain more information and contact a local outreach and admissions contractor. Some of these national marketing materials specifically target potential female residential students. However, these materials do not describe particular centers in detail. One of the Job Corps regions has begun to implement an initiative that, among other things, requires outreach and admissions contractors to discuss detailed information with students prior to enrollment. This detailed information covers rules about acceptable student conduct, including policies on smoking and appropriate dress, and about career training opportunities, including industry certifications or advanced training. In addition, outreach and admissions contractors are required to show potential students a video about these rules and to have students sign an agreement to adhere to them. Officials said they believe that this process helps students understand and commit to the rules. Outreach and admissions contractors in this region said that the initiative has made it easier for them to discuss the realities and benefits of Job Corps with potential students and employers. This initiative may help ensure consistent communication of the rules and benefits of Job Corps overall, but it does not provide specific information about life at a particular center, such as the number of students sharing a dormitory room or the available recreational activities. Job Corps officials generally agree that an effective way for students to have realistic expectations about life at a Job Corps center is for them to visit the center prior to enrolling. 
This is not always possible, however, and virtual tours or videos of centers of interest can be a valuable means of providing potential students with detailed preenrollment information. Many officials—including center directors, outreach and admissions contractors, and Labor officials—told us they believe having a virtual or video tour of centers would help interested students obtain a more realistic expectation of center life when they are unable to visit the center. Labor’s marketing contractor conducted several focus groups in program year 2008 and found that center-specific virtual or video tours may help reduce students’ false expectations. In addition, Labor’s Advisory Committee on Job Corps confirmed the importance of virtual or video tours, noting that such tours may help increase student retention. In 2009, Labor launched a revamped national Job Corps Web site, allowing individual centers to have links posted to their approved center Web sites. As of March 2009, 72 Job Corps centers had their Web sites approved by Labor, but none of these sites had a virtual tour. While Labor officials acknowledged the value of providing such center-specific information, Labor estimates that the costs of creating a virtual or video tour for every center would total approximately $1 million. Currently, Labor is exploring less costly alternatives. Labor has several efforts under way to improve the quality of Job Corps center life for students. Among these are efforts designed to promote a safe environment. For example, to assess student safety, Labor requires centers to administer a quarterly survey to students to gauge how safe they feel. According to Labor, the department uses the survey results as a way to monitor student safety; recommend corrective action, as needed; and evaluate center operators. 
In addition, Labor has an initiative with one outreach and admissions contractor to help enforce the program’s policy of zero tolerance of drugs by testing students prior to enrollment and delaying enrollment if they test positive. Labor officials noted that this preenrollment test may initially deter some students from entering the program, but it may also increase student retention by (1) reducing terminations from drug use or violence and (2) improving the safety and learning environment for male and female residential students. Single parents who participate in Job Corps have unique quality of life needs. Approximately 1 in 10 of the female students in Job Corps in program year 2007 were single parents, and officials noted that these students face an additional barrier to participating in the program due to their need for child care. Labor helps Job Corps centers address this need by allowing centers to establish child care facilities and single-parent dormitories. Also, Labor provides funds for the construction of approved facilities and for their ongoing maintenance and utilities. Twenty-eight centers currently provide on-site child care, most often for children of nonresidential students. Seven centers also have single-parent dormitories for parents and children. One center that we visited has a single-parent dormitory for 32 students, in which a parent in the program typically has a private room and bathroom for herself and her child, along with a kitchen shared with another parent. This center also has a child development center for children age 6 weeks to age 5, with staff to look after children while parents are in academic or training classes during weekdays. While Labor provides some funds for these programs, the department does not provide funds to support the ongoing costs, such as staff salaries or food for the children. 
Funds for these costs come from different sources, such as Temporary Assistance for Needy Families, Head Start, and child care assistance funds. Labor officials noted that providing single-parent dormitories and child care centers is expensive, but is important to the recruitment and retention of female residential students. Job Corps fills a unique role in preparing economically disadvantaged young men and women to enter the workforce. The services that the program provides to these youths are among the most comprehensive in the federal government—combining academic, vocational, and social skills training in a residential setting where staff are available 24 hours a day. Because of these services, Job Corps is the most expensive federal job training program, with the cost of each training slot averaging about $34,000. Because many of the program’s costs are fixed, program efficiency is compromised when Job Corps centers operate under capacity. Operating under capacity represents a missed opportunity to train students who might benefit from the program. Our findings suggest that, while the program nearly achieves its planned enrollment for males, it is struggling in this area with regard to female enrollment. We found clear consensus among Job Corps officials, outreach and admissions contractors, and students that having career training options attractive to women is key to being able to recruit female students into the program. However, while centers have been adding such training, particularly in the health care area, this approach has not been universal and some centers continue to have difficulty in attracting female students. Labor has not taken a strategic approach to address this problem nationwide. The department may continue to struggle with female enrollment if it does not do a thorough review of career training offerings to determine where adjustments could be made that may enhance the ability of the program to attract women.
Job Corps centers vary widely in terms of facilities; living conditions; and, to some extent, the rules that guide daily life at the center. Officials at all levels affirmed the need for students to have, prior to enrolling in the program, a clear understanding of what it would be like to live and train at a center. These officials told us that students who do not have that opportunity have a more difficult transition and are more likely to leave short of completing the program. Yet, we found that students are not always given the sort of preenrollment information they need to make a good decision. Also, although Labor has taken some steps to encourage outreach and admissions contractors to provide more complete preenrollment information, more could be done to ensure that all students receive consistent and complete information before enrolling. Absent additional steps, Job Corps will likely continue to face difficulty in recruiting and retaining students, particularly female students. To improve the recruitment and retention of residential students, we recommend that the Secretary of Labor take the following three actions: (1) review the availability and selection of career training offerings at centers—particularly those centers that are experiencing difficulty with female enrollment—and assess whether centers need to adjust their career training options to offer more career training that is both attractive to women and that could lead to careers that will enable women to become self-sufficient; (2) expand current efforts to ensure that outreach and admissions contractors across all six regions consistently provide potential students with complete and accurate information on all aspects of Job Corps, including providing specific information about the center in which the student will be enrolled; and (3) explore the feasibility or cost-effectiveness of developing video or online virtual tours for all centers.
We provided a draft of this report to the Department of Labor for review and comment. Labor did not comment on our findings and generally agreed with our recommendations. Appendix VI contains a reprint of Labor’s comments. In addition, we provided drafts to the U.S. Department of Agriculture and the Department of the Interior for technical comments, but the departments did not provide any comments. In its response, Labor concurred in part with our recommendation that it review the availability and selection of career training offerings. Labor acknowledged the need to offer “female friendly” career training programs to increase female enrollment. However, Labor noted that in selecting new offerings, it routinely looks beyond those considered traditional occupations for females as they seek to maximize opportunities that may result in long-term self-sufficiency. We concur with the need to focus training in areas that lead to self-sufficiency and acknowledge the need to offer training in nontraditional occupations for women. However, it is possible to offer training, such as in the health care industry, that is attractive to women and that leads to self-sufficiency. We continue to believe that a more systematic assessment of career training offered at the centers is needed, particularly at those centers that are struggling with female enrollment. Such an assessment would identify whether centers need to adjust their career training options to enhance female enrollment. Labor concurred with our recommendation to expand current efforts to ensure that outreach and admissions contractors consistently provide potential students with complete and accurate information on all aspects of Job Corps. Labor acknowledged the importance of providing complete and accurate information and identified several ongoing initiatives, including a new national recruitment Web site that contains links to individual centers. 
We noted these efforts in our report; however, as of March 2009, 50 of the 122 centers were not linked to this Web site. We encourage Labor to continue to expand its efforts to require that each potential applicant be provided with complete and accurate preenrollment information. While Labor concurred with our recommendation to explore the feasibility or cost-effectiveness of developing video or online virtual tours for all centers, officials did not provide information about the steps they are planning to take to address the recommendation. Labor acknowledged the importance of prospective students being able to tour centers prior to enrolling, but noted that this is not always possible. As we have previously reported, virtual tours provide an alternative to students who are unable to physically tour the center in which they plan to enroll. Labor officials estimated that it would cost approximately $1 million to produce a virtual tour of all centers. In its comments, Labor noted that it is currently exploring less costly options, including a short DVD that would combine an overview of the Job Corps program with still photography highlighting information about individual centers. We are concerned that such an approach will not be sufficient to provide a realistic preview of life at a specific center for prospective students who are unable to visit the center. We acknowledge that to produce a virtual tour for each center is not without cost, but stress the importance of assessing the feasibility and benefits, as well as the costs, of such an endeavor before moving forward. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the relevant congressional committees, the Secretary of Labor, the Secretary of Agriculture, the Secretary of the Interior, and other interested parties.
The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. To better understand the recruitment and retention of residential students, we were asked to provide information on the (1) extent to which Job Corps centers are operating at or near capacity for residential students; (2) major factors that affect centers’ ability to recruit and retain residential students, particularly female residential students; and (3) steps, if any, the Department of Labor (Labor) has taken to address the recruitment and retention of residential students. Because nearly 90 percent of Job Corps’ planned enrollment is residential, our review focused on those Job Corps centers that provide educational and career technical training to male and female residential students. To answer our objectives, we administered two Web-based surveys—one to Job Corps’ outreach and admissions contractors and one to Job Corps’ center directors. We also visited 7 Job Corps centers in six states and 4 outreach and admissions contractors responsible for recruiting residential students for these centers. In addition, we analyzed Labor data identifying planned residential capacity for male and female students and the average actual number of male and female residential students onboard for each of the 122 centers. Furthermore, we interviewed Job Corps officials at the national and regional levels to identify Labor’s current efforts under way to improve the recruitment and retention of residential students. To obtain information on the major factors that affect the recruitment and retention of residential students, we administered two Web-based surveys. 
One survey was sent to the 32 outreach and admissions contractors that had a contract with Labor to recruit male and female residential students for Job Corps during program year 2007. Typically these contractors are responsible for recruiting residential students for centers located in the same state, but several have multiple-state responsibility. We received a 100 percent response rate on this survey, with responses from all 32 outreach and admissions contractors. The second survey was distributed to the 117 Job Corps center directors who were responsible for enrolling and retaining residential students during program year 2007. Of the 117 Job Corps centers contacted, 114 responded to our survey, for a response rate of 97 percent. To field the surveys, we obtained a list and contact information for the 117 Job Corps center directors and 32 outreach and admissions contractors from Job Corps’ national and regional offices. In some cases, we contacted the Job Corps centers directly to determine the appropriate contact information. We collected the survey data from August 2008 to October 2008. Both surveys contained a section on the recruitment of male and female residential students. We obtained the perspectives of Job Corps center directors and outreach and admissions contractors on the major factors that affect the recruitment of residential students; challenges encountered in recruiting residential students, particularly female students; and successful approaches or center features that may attract residential students to Job Corps. In addition, on the Job Corps center directors’ survey, we included a section with questions related to the retention of male and female residential students. We did not include these questions on the outreach and admissions contractors’ survey, because these officials are not responsible for acclimating and retaining residential students once they are at a particular center. 
Similar to the section on recruitment, we asked Job Corps center directors about the major factors that affect the retention of residential students; challenges encountered in retaining students, particularly female students; and successful approaches or center features that may retain residential students. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, sources of information available to respondents, or data entry and analysis can introduce unwanted variability into the survey results. We took steps in developing the questionnaires, collecting data, and analyzing these data to minimize such nonsampling errors. For example, prior to administering the surveys, GAO survey specialists designed each questionnaire in collaboration with GAO subject matter experts. We also pretested the outreach and admissions survey with 2 outreach and admissions contractors and the center directors’ survey with 2 Job Corps directors. On the basis of the findings from pretests, we modified our questionnaire to ensure that the questions were relevant, clearly stated, and easy to comprehend. To ensure adequate response rates, we sent e-mail reminders and conducted follow-up telephone calls with nonrespondents. When the data were analyzed, a second independent data analyst checked all computer programs for accuracy. Since these were Web-based surveys, respondents entered their answers directly into the electronic questionnaires, eliminating the need to key data into a database, thereby minimizing errors. To further enhance our understanding of the recruitment and retention of residential students, we visited 7 Job Corps centers in six states— Connecticut, Idaho, Iowa, Kentucky, Massachusetts, and Washington State. 
We selected these centers because of their geographic variation and to provide a mix of privately and federally operated centers that have varying levels of success in maintaining male and female residential capacity. In addition, we selected the Denison Job Corps Center because it is 1 of 7 centers that has a single-parent dormitory and a day-care center for children of residential students. These living arrangements and supports allow single parents to live at the center with their children while they complete their education and career training. (See table 4 for key characteristics of the Job Corps centers that we visited.) Because a complete and current listing of career training offerings by Job Corps center was not available, we followed up with each Job Corps center director that we visited to ensure we had an accurate list of career training being offered at his or her center. During our site visits, we toured each center’s facilities and interviewed the center director using a structured interview protocol to obtain his or her views on residential student recruitment and retention. To the extent that center directors’ survey responses were available, we used this information to supplement our discussion and to gain further insight into the major factors and challenges associated with attracting and retaining residential students, particularly female students. We also conducted two focus groups with female residential students at 6 of the 7 Job Corps centers we visited. Each of our focus groups comprised 6 to 10 female residents who had been at the center for at least 60 days. In total, over 100 female residential students participated in our focus groups. For each focus group, we used a series of semistructured questions to learn about the students’ experiences when they were recruited for Job Corps and to obtain their views on the enrollment process and information provided by outreach and admissions contractors. 
We also asked the students to identify the major factors that were important in their decisions to enroll and stay at the center. Furthermore, we conducted site visits with the 4 outreach and admissions contractors that are responsible for recruiting residential students to the 7 Job Corps centers we visited. (See table 5 for a list of these outreach and admissions contractors and areas of responsibility.) We interviewed these officials using a semistructured interview protocol to obtain information on their recruitment and outreach efforts and how they balance providing students with their desired center and career training program. We also asked these officials about the major factors and challenges that affect residential student recruitment. To the extent possible, we used officials’ survey responses to supplement our discussion. We reviewed available Job Corps’ student demographic and administrative data for program years 2006 and 2007 to provide descriptive information on the characteristics of students served, student enrollment and retention, and career training slots and industry areas. Before deciding to use the data, we reviewed prior GAO assessments performed under a previous engagement to determine their reliability. These assessments were based on observing a demonstration of the Job Corps database, interviewing Labor officials to identify data checks in place to ensure the integrity of the data, and reviewing relevant internal control policies and procedures. On the basis of our review of these assessments, we determined that the data for program years 2006 and 2007 were sufficiently reliable for the purposes of our review. To determine the extent to which Job Corps centers operate at or near capacity, we analyzed Job Corps’ onboard strength reports that identified the planned enrollment for male and female residential students and the average actual number of male and female residential students onboard for each of the 122 centers. 
Our analysis covered July 1, 2005, through June 30, 2008—the 3 most recently completed program years (program years 2005 to 2007). We also reviewed student leave and separation data to describe the reasons why male and female residential students left the program. To determine the reliability of the data, we interviewed knowledgeable Labor officials and reviewed prior GAO assessments performed under a previous engagement as we have previously described. These assessments were based on observing a demonstration of the Job Corps database, interviewing Labor officials to identify data checks in place to ensure the integrity of the data, and reviewing relevant internal control policies and procedures. On the basis of this information, we determined that the data for program years 2005 to 2007 were sufficiently reliable for the purposes of our review. To obtain information on Labor’s efforts to address the recruitment and retention of residential students, we interviewed Labor officials located at the national office and six regional offices—Atlanta, Boston, Chicago, Dallas, Philadelphia, and San Francisco. Specifically, we asked officials about current efforts under way at the national or regional levels to improve centers’ ability to recruit and retain residential students, particularly female residential students. In addition, we reviewed relevant documentation provided by officials to obtain a better understanding of the purpose and status of these efforts. We also reviewed Labor’s policies governing Job Corps, national marketing materials, and reports on regional initiatives. We conducted this performance audit from May 2008 to June 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
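The capacity calculation described above (average actual number of residential students onboard as a share of planned enrollment) can be sketched as follows. This is an illustrative sketch only; the center names and figures are hypothetical and are not drawn from Labor's onboard strength reports, which cover all 122 centers.

```python
# Illustrative sketch of the capacity analysis; data are hypothetical.

def percent_of_planned(actual_onboard, planned_enrollment):
    """Average actual residential students onboard as a percentage of
    planned enrollment for one center and one gender."""
    return 100.0 * actual_onboard / planned_enrollment

# (center, planned female residential slots, avg. female students onboard)
centers = [
    ("Center A", 200, 198),
    ("Center B", 150, 110),
    ("Center C", 120, 95),
]

# Flag centers operating below 80 percent of planned female enrollment,
# the threshold discussed in this report.
under_80 = []
for name, planned, actual in centers:
    pct = percent_of_planned(actual, planned)
    print(f"{name}: {pct:.0f}% of planned female enrollment")
    if pct < 80:
        under_80.append(name)

share = len(under_80) / len(centers)
print(f"Share of centers below 80% of planned enrollment: {share:.0%}")
```

With hypothetical figures like these, Centers B and C would be flagged as under capacity for female residential students, mirroring the finding that about one-half of centers fell below 80 percent of planned female enrollment in program year 2007.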
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Notes to table 4: Civilian Conservation Center; center was closed during program year 2007; center has only nonresidential students; center has male and female nonresidential students, but only female residential students.

The regional initiatives approved by Labor (see app. V) include the following: allow students to obtain additional training and certifications at the same or another center, especially in the health care and automotive industries; establish rules that all centers in the region will implement, and require outreach and admissions contractors to provide students with detailed information on these rules prior to enrollment; offer more recreational and learning activities during evenings and weekends; prioritize tutoring for students with test scores in the lowest quartile to improve academic achievement and retention; develop and deploy professional development for staff who help students during unstructured times and influence their retention, such as residential and recreational staff, and enhance coordination between these staff and instructors; create a more positive student culture based on shared norms, rather than rules and discipline, through activities such as training of Job Corps staff, peer counseling for poorly performing students, and facilitating group discussions each day for students to address concerns; assess student interests and aptitude more thoroughly to select a career training option during the career preparation period; provide intensive drug counseling for students who test positive for drugs upon entering the program; and strengthen collaboration between academics and career training in the health care industry area, such as the vocabulary needed for training, enrolling students in this industry area as a cohort rather than the traditional open-entry, open-exit progression for each student.

Dianne Blank, Assistant Director, and Wayne Sylvia, Analyst-in-Charge, managed all aspects of this assignment.
Also, Matthew Saradjian and Ashanta Williams made significant contributions to this report in all aspects of our work. In addition, Shana Wallace provided methodological assistance; Stuart Kaufman assisted in the design of the two national surveys; Catherine Hurley analyzed responses from the national surveys; Mimi Nguyen provided graphic design assistance; Jessica Botsford provided legal support; Jessica Orr provided writing assistance; and Sara Edmondson verified our findings.

Related GAO products:
- Job Corps: Links With Labor Market Improved but Vocational Training Performance Overstated. GAO/HEHS-99-15. Washington, D.C.: November 4, 1998.
- Job Corps: Vocational Training Performance Data Overstate Program Success. GAO/T-HEHS-98-218. Washington, D.C.: July 29, 1998.
- Job Corps: Participant Selection and Performance Measurement Need to Be Improved. GAO/T-HEHS-98-37. Washington, D.C.: October 23, 1997.
- Job Corps: Need for Better Enrollment Guidance and Improved Placement Measures. GAO/HEHS-98-1. Washington, D.C.: October 21, 1997.
- Job Corps: Where Participants Are Recruited, Trained, and Placed in Jobs. GAO/HEHS-96-140. Washington, D.C.: July 17, 1996.
- Job Corps: Comparison of Federal Program With State Youth Training Initiatives. GAO/HEHS-96-92. Washington, D.C.: March 28, 1996.
- Job Corps: High Costs and Mixed Results Raise Questions About Program’s Effectiveness. GAO/HEHS-95-180. Washington, D.C.: June 30, 1995. | Established in 1964, Job Corps is the nation's largest residential, educational, and career training program for economically disadvantaged youths. Administered by the Department of Labor (Labor), Job Corps received about $1.6 billion in program year 2007 and served about 60,000 students. Some have expressed concern that Job Corps centers are not meeting planned enrollment goals, particularly for women.
To address these concerns, GAO reviewed the (1) extent to which Job Corps centers are operating at or near capacity for residential students; (2) major factors that affect the recruitment and retention of residential students, particularly females; and (3) steps, if any, Labor has taken to address the recruitment and retention of residential students. To address these objectives, GAO analyzed Labor's enrollment data, surveyed Job Corps recruiters and center directors, and visited seven Job Corps centers. Overall, the Job Corps program has been operating at or near capacity for male residential students, but under capacity for female residential students during program years 2005 through 2007. During each of those years, Job Corps achieved between 95 and 98 percent of the planned enrollment for male residential students nationwide, but about 80 percent or less for female residential students. In fact, about one-half of the centers that enrolled female residential students in program year 2007 were below 80 percent of their planned enrollment for that group. Three key factors affect Job Corps' ability to recruit and retain residential students, particularly female residential students--availability of career training options, complete and accurate preenrollment information, and quality of center life. The selection and availability of career training offerings in occupations of interest to students play a major role in Job Corps' ability to recruit students, particularly female residential students, according to officials that we surveyed. A key factor affecting both recruitment and retention is ensuring that students have accurate preenrollment information about Job Corps. Officials noted that having realistic expectations of life at a center is especially important for female students. Finally, center officials said that the quality of life at the centers, including the living conditions and the sense of safety, affects students' willingness to stay in the program. 
Labor has begun making improvements in career training offerings, preenrollment information, and quality of center life in an effort to address issues related to the recruitment and retention of residential students. While Labor has gradually made more training opportunities available that are likely to appeal to female students, these have typically been added at a center's request and are not part of an overall strategy. In addition, Labor has taken some steps to ensure that students receive detailed preenrollment information, but has not yet expanded these efforts nationally. Finally, Labor has several efforts under way to improve the quality of center life for students, including ensuring a drug-free environment and providing child care facilities for single parents. |
Prescription Drugs: Increasing Medicare Beneficiary Access and Related Implications

Medicare's benefit package, largely designed in 1965, provides virtually no outpatient prescription drug coverage. In 1996, almost one third of beneficiaries had employer-sponsored health coverage, as retirees, that included drug benefits. More than 10 percent of beneficiaries received coverage through Medicaid or other public programs. To protect against drug costs, the remainder of Medicare beneficiaries can choose to enroll in a Medicare+Choice plan with drug coverage, if one is available in their area, or purchase a Medigap policy. (As an alternative to traditional Medicare fee-for-service, beneficiaries in Medicare+Choice plans, formerly Medicare risk health maintenance organizations, obtain all their services through a managed care organization, and Medicare makes a monthly capitation payment to the plan on their behalf.) The availability, breadth, and price of such coverage are changing as the cost of expanded prescription drug use drives employers, insurers, and managed care plans to adopt new approaches to control expenditures for this benefit. These approaches, in turn, are reshaping the drug market. Over the past 5 years, prescription drug expenditures have grown substantially, both in total and as a share of all health care outlays. Prescription drug spending grew an average of 12.4 percent per year from 1993 to 1998, compared with a 5 percent average annual growth rate for health care expenditures overall. (See table 1, which shows prescription drug expenditures in billions of dollars and the annual growth rates, in percent, of prescription drug expenditures and of all health care expenditures.) As a result, prescription drugs account for a larger share of total health care spending—rising from 5.6 percent to 7.9 percent in 1998. Total drug expenditures have been driven up by both greater utilization of drugs and the substitution of higher-priced new drugs for lower-priced existing drugs.
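As a rough consistency check (this calculation is ours, not the report's), the rise in the drug share of health spending follows directly from the two growth rates quoted above:

```python
# If drug spending grows 12.4 percent per year while total health spending
# grows 5 percent per year, the drug share of total spending grows each year
# by the ratio of the two growth factors.
drug_growth = 0.124
total_growth = 0.05
years = 5  # 1993 through 1998

share_1993 = 5.6  # percent of total health care spending
share_1998 = share_1993 * ((1 + drug_growth) / (1 + total_growth)) ** years

print(round(share_1998, 1))  # prints 7.9, matching the share cited for 1998
```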
Private insurance coverage for prescription drugs has likely contributed to the rise in spending, because insured consumers are shielded from the direct costs of prescription drugs. In the decade between 1988 and 1998, the share of prescription drug expenditures paid by private health insurers rose from almost a third to more than half. (See fig. 1.) The development of new, more expensive drug therapies—including new drugs that replace old drugs and new drugs that treat disease more effectively—also contributed to the drug spending growth by boosting the volume of drugs used as well as the average price for drugs used. The average number of new drugs entering the market each year rose from 24 at the beginning of the 1990s to 33 now. Similarly, biotechnology advances and a growing knowledge of the human immune system are significantly shaping the discovery, design, and production of drugs. Advertising pitched to consumers has also likely increased the use of prescription drugs. A recent study found that the 10 drugs most heavily advertised directly to consumers in 1998 accounted for about 22 percent of the total increase in drug spending between 1993 and 1998. Between March 1998 and March 1999, industry spending on advertising grew 16 percent to $1.5 billion. All of these factors suggest the need for effective cost control mechanisms to be in place under any option to increase access to prescription drugs. Some Medicare beneficiaries spent $2,000 or more on prescription drugs. A recent report had projected that by 1999 an estimated 20 percent of Medicare beneficiaries would have total drug costs of $1,500 or more—a substantial sum for people lacking some form of insurance to subsidize their purchases or for those facing coverage limits. In 1996, almost a third of Medicare beneficiaries lacked drug coverage altogether. (See fig. 2.) The remaining two-thirds had at least some drug coverage—most commonly through employer-sponsored health plans.
The proportion of beneficiaries who had drug coverage rose between 1995 and 1996, owing to increases in those with Medicare HMOs, individually purchased supplemental coverage, and employer-sponsored coverage. However, recent evidence indicates that this trend of expanding drug coverage is unlikely to continue. Medicare+Choice plans have found drug coverage to be an attractive benefit that beneficiaries seek out when choosing to enroll in managed care organizations. Owing to rising drug expenditures and their effect on plan costs, however, the drug benefits the plans offer are becoming less generous. Many plans restructured drug benefits in 2000, increasing enrollees’ out-of-pocket costs and limiting their total drug coverage. Beneficiaries may purchase Medigap policies that provide drug coverage, although this tends to be expensive, involves significant cost-sharing, and includes annual limits. Standard Medigap drug policies include a $250 deductible, a 50 percent coinsurance requirement, and a $1,250 or $3,000 annual limit. Furthermore, Medigap premiums have been increasing in recent years. In 1999, the annual premium for one type of Medigap policy with a $1,250 annual limit on drug coverage ranged from approximately $1,000 to $6,000. All beneficiaries who have full Medicaid benefits receive drug coverage that is subject to few limits and low cost-sharing requirements. For beneficiaries whose incomes are slightly higher than Medicaid standards, 14 states currently offer pharmacy assistance programs that provided drug coverage to approximately 750,000 beneficiaries in 1997. The three largest state programs accounted for 77 percent of all state pharmacy assistance program beneficiaries. Most state pharmacy assistance programs, like Medicaid, have few coverage limitations. The burden of prescription drug costs falls most heavily on the Medicare beneficiaries who lack drug coverage or who have substantial health care needs.
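To see how the deductible, coinsurance, and annual limit interact, here is a sketch (our illustration, using the standard design figures above) of what the plan and the beneficiary would each pay:

```python
def medigap_plan_payment(drug_costs, deductible=250.0,
                         coinsurance=0.50, annual_limit=1250.0):
    """Plan's share under the standard Medigap drug design described above:
    a $250 deductible, 50 percent coinsurance, and a $1,250 annual limit."""
    covered = max(drug_costs - deductible, 0.0)
    return min(coinsurance * covered, annual_limit)

# A beneficiary with $2,000 in annual drug costs:
plan_pays = medigap_plan_payment(2000)   # 0.5 * (2000 - 250) = 875.0
out_of_pocket = 2000 - plan_pays         # 1125.0, more than half the bill
# The $1,250 annual limit binds once drug costs exceed $2,750; every dollar
# above that level is entirely out of pocket.
```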
Drug coverage is less prevalent among beneficiaries with lower incomes. In 1995, 38 percent of beneficiaries with income below $20,000 were without drug coverage, compared with 30 percent of beneficiaries with higher incomes. Additionally, the 1995 data show that drug coverage is slightly higher among those with poorer self-reported health status. At the same time, however, beneficiaries without drug coverage and in poor health had drug expenditures that were $400 lower than the expenditures of beneficiaries with drug coverage and in poor health. This might indicate access problems for this segment of the population. Most beneficiaries with drug coverage obtain it through employer-sponsored benefits, Medigap policies, and, most recently, Medicare+Choice plans. Although reasonable cost sharing serves to make the consumer a more prudent purchaser, copayments, deductibles, and annual coverage limits can reduce the value of drug coverage to the beneficiary. Harder to measure is the effect on beneficiaries of drug benefit restrictions brought about through formularies designed to limit or influence the choice of drugs. During this period of rising prescription drug expenditures, third-party payers have pursued various approaches to control spending. These efforts have initiated a transformation of the pharmaceutical market. Whereas insured individuals formerly purchased drugs at retail prices at pharmacies and then sought reimbursement, now third-party payers influence which drug is purchased, how much is paid for it, and where it is purchased. A common technique to manage pharmacy care and control costs is to use a formulary. A formulary is a list of prescription drugs, grouped by therapeutic class, that a health plan or insurer prefers and may encourage doctors to prescribe. Decisions about which drugs to include in a formulary are based on the drugs’ medical value and price.
The inclusion of a drug in a formulary and its cost can affect how frequently it is prescribed and purchased and, therefore, can affect its market share. Formularies can be open, incentive-based, or closed. Open formularies are often referred to as “voluntary” because enrollees are not penalized if their physicians prescribe nonformulary drugs. Incentive-based formularies generally offer enrollees lower copayments for the preferred formulary or generic drugs. Incentive-based or managed formularies are becoming more popular because they combine flexibility with stronger cost-control features than open formularies provide. A closed formulary limits insurance coverage to the formulary drugs and requires enrollees to pay the full cost of nonformulary drugs prescribed by their physicians. Another way in which the market has been transformed is through the use of pharmacy benefit managers (PBMs) by health plans and insurers to administer and manage prescription drug benefits. PBMs offer a range of services, including prescription claims processing, mail-service pharmacy, formulary development and management, pharmacy network development, generic substitution incentives, and drug utilization review. PBMs also negotiate discounts and rebates on prescription drugs with manufacturers. Expanding access to more affordable prescription drugs could involve either subsidizing prescription drug coverage or allowing beneficiaries access to discounted pharmaceutical prices. The design of a drug coverage option (that is, the scope of the benefit, the covered population, and the mechanisms used to contain costs), as well as its implementation, will determine the effect of the option on beneficiaries, Medicare or federal spending, and the pharmaceutical market. A new benefit would need to be crafted to balance competing concerns about the sustainability of Medicare, federal obligations, and the hardship faced by some beneficiaries.
Similarly, the effect of granting some beneficiaries access to discounted prices will hinge on details such as the price of the drugs after the discount, how discounts are determined and secured, and which beneficiaries are eligible. The relative merits of any approach should be carefully assessed. We suggest that the following five criteria be considered in evaluating any option:
(1) Affordability: an option should be evaluated in terms of its effect on public outlays for the long term.
(2) Equity: an option should provide equitable access across groups of beneficiaries and be fair to affected providers.
(3) Adequacy: an option should provide appropriate beneficiary incentives for prudent utilization, support standard treatment options for beneficiaries, and not impede effective and clinically meaningful innovations.
(4) Feasibility: an option should incorporate such administrative essentials as implementation and cost and quality monitoring techniques.
(5) Acceptance: an option should account for the need to educate the beneficiary and provider communities about its costs and the realities of trade-offs required by significant policy changes.
One proposal before the Congress includes protection against catastrophic drug costs, which is yet to be designed. Under the Breaux-Frist approach, competing health plans could design their own copayment structure, with requirements on the benefit’s actuarial value but no provision to limit beneficiary catastrophic drug costs. Benefit cost-control provisions for the traditional Medicare program may present some of the thorniest drug benefit design decisions. Recent experience provides two general approaches. One would involve the Medicare program obtaining price discounts from manufacturers. Such an arrangement could be modeled after Medicaid’s drug rebate program. While the discounts in aggregate would likely be substantial, this approach lacks the flexibility to achieve the greatest control over spending.
It could not effectively influence or steer utilization because it does not include incentives that would encourage beneficiaries to make cost-conscious decisions. The second approach would draw from private sector experience in negotiating price discounts from manufacturers in exchange for shifting market share. Some plans and insurers employ PBMs to manage their drug benefits, including claims processing, negotiating with manufacturers, establishing lists of drug products that are preferred because of efficacy or price, and developing beneficiary incentive approaches to control spending and use. Applying these techniques to the entire Medicare program, however, would be difficult because of its size, the need for transparency in its actions, and the imperative for equity for its beneficiaries. Medicaid is the largest government payer for prescription drugs; its drug expenditures account for about 17 percent of the domestic pharmaceutical market. Before the enactment of the Medicaid drug rebate program under the Omnibus Budget Reconciliation Act of 1990 (OBRA), state Medicaid programs paid close to retail prices for outpatient drugs. Other large purchasers, such as HMOs and hospitals, negotiated discounts with manufacturers and paid considerably less. The rebate program required drug manufacturers to rebate to state Medicaid programs a percentage off the average price wholesalers pay manufacturers. The rebates are based on a percentage reduction that reflects the lowest, or “best,” prices the manufacturer charges other purchasers and on the volume of purchases by Medicaid recipients. In return for the rebates, state Medicaid programs must cover all drugs manufactured by pharmaceutical companies that entered into rebate agreements with the Health Care Financing Administration (HCFA). After the rebate program’s enactment, a number of market changes affected other purchasers of prescription drugs and the amount of the rebates that Medicaid programs received.
Drug manufacturers substantially reduced the price discounts they offered to many large private purchasers, such as HMOs; the market quickly adjusted by increasing drug prices to compensate for the rebates obtained by the Medicaid program. Although the states have received billions of dollars in rebates from drug manufacturers since OBRA’s enactment, state Medicaid directors have expressed concerns about the rebate program. The principal concern involves OBRA’s requirement to provide access to all the drugs of every manufacturer that offers rebates, which limits the utilization controls Medicaid programs can use at a time when prescription drug expenditures are rapidly increasing. Although the programs can require recipients to obtain prior authorization for particular drugs and can impose monthly limits on the number of covered prescriptions, they cannot take advantage of other techniques, such as incentive-based formularies, to steer recipients to less expensive drugs. The few cost-control strategies that are available can also add to the administrative burden on state Medicaid programs. Other payers, such as private and federal employer health plans and Medicare+Choice plans, have taken a different approach to managing their prescription drug benefits. They typically use beneficiary copayments to control prescription drug use, and they use formularies to both control use and obtain better prices by concentrating purchases on selected drugs. In many cases, these plans and insurers retain a PBM’s services to manage their pharmacy benefit and control spending. Beneficiary cost-sharing plays a central role in attempting to influence drug utilization. Copayments are frequently structured to influence both the choice of drugs and the purchasing arrangements.
While formulary restrictions can channel purchases to preferred drugs, closed formularies, which provide reimbursement only for preferred drugs, have generated substantial dissatisfaction among consumers. As a result, many plans link their cost-sharing requirements and formulary lists. The fastest growing trend today is the use of a formulary that covers all drugs but that includes beneficiary cost-sharing that varies for different drugs—typically a smaller copayment for generic drugs, a larger one for preferred drugs, and an even larger one for all other drugs. Reduced copayments have also been used to encourage enrollees using maintenance drugs for chronic conditions to obtain them from particular suppliers, like a mail-order pharmacy. Plans and insurers have turned to PBMs for assistance in establishing formularies, negotiating prices with manufacturers and pharmacies, processing beneficiaries’ claims, and reviewing drug utilization. Because PBMs manage drug benefits for multiple purchasers, they often may have more leverage than individual plans in negotiating prices through their greater purchasing power. Traditional fee-for-service Medicare has generally established reimbursement rates for services like those provided by physicians and hospitals and then processed and paid claims with few utilization controls. Adopting some of the techniques used by private plans and insurers might help better control costs. However, how to adapt those techniques to the characteristics and size of the Medicare program raises questions. Negotiated or competitively determined prices would be superior to administered prices only if Medicare could employ some of the utilization controls that come from having a formulary and differential beneficiary cost-sharing. In this manner, Medicare would be able to negotiate significantly discounted prices by promising to deliver a larger market share for a manufacturer’s product. 
Manufacturers would have no incentive to offer a deep discount if all drugs in a therapeutic class were covered on the same terms. Without a promised share of the Medicare market, these manufacturers might reap greater returns from charging higher prices and concentrating marketing efforts on physicians and consumers to influence prescribing patterns. Implementing a formulary and other utilization controls could prove difficult for Medicare. Developing a formulary involves determining which drugs are therapeutically equivalent so that several from each class can be included. Plans and PBMs currently make those determinations privately—something that would not be possible for Medicare, which must have transparent policies that are determined openly. Given the stakes involved in selecting drugs, one can imagine the intensive efforts to offer input to and scrutinize the selection process. If several PBMs operated in each area, beneficiaries could choose one to administer their drug benefit. This raises questions about how to inform beneficiaries of the differences in each PBM’s policies and whether and how to risk-adjust payments to PBMs for differences in the health status of the beneficiaries using them. Another option before the Congress would allow Medicare beneficiaries to purchase prescription drugs at the lowest price paid by the federal government. Because of their large purchasing power, federal agencies, such as the Departments of Veterans Affairs (VA) and Defense (DOD), have access to prescription drug prices that often are considerably lower than retail prices. Extending these discounts to Medicare beneficiaries, or some groups of beneficiaries, could have a measurable effect on lowering their out-of-pocket spending, although whether this would adequately increase access or raise prices paid by other purchasers that negotiate drug discounts is unknown.
Typically, federal agencies obtain prescription drugs at prices listed in the federal supply schedule (FSS) for pharmaceuticals. FSS prices represent a significant discount off the prices drug manufacturers charge wholesalers. Under the Veterans Health Care Act of 1992, drug manufacturers must make their brand-name drugs available to federal agencies at the FSS price in order to participate in the Medicaid program. The act requires that the FSS price for VA, DOD, the Public Health Service, and the Coast Guard be at least 24 percent below the price that the manufacturers charge wholesalers. In addition, VA awards contracts on a competitive basis for specific drugs considered therapeutically interchangeable. These contracts enable VA to obtain larger discounts from manufacturers by channeling greater volume to certain pharmaceutical products. Providing Medicare beneficiaries access to the lowest federal prices could result in important out-of-pocket savings to those without coverage who are paying close to retail prices. However, concerns exist that extending federal discounts to Medicare beneficiaries could lead to price increases to federal agencies and other purchasers since the discount is based on prices determined by manufacturers. Federal efforts to lower Medicaid drug prices demonstrate the potential for this to occur. While it is not possible to predict how federal drug prices would change if Medicare beneficiaries are given access to them, the larger the market that seeks to take advantage of these prices, the greater the economic incentive would be for drug manufacturers to raise federal prices to limit the impact of giving lower prices to more purchasers. The current Medicare program, without improvements, is ill suited to serve future generations of seniors and eligible disabled Americans. On the one hand, the program is fiscally unsustainable in its present form, as the disparity between program expenditures and program revenues is expected to widen dramatically in the coming years.
On the other hand, Medicare’s benefit package contains gaps in desired coverage, most notably the lack of outpatient prescription drug coverage, compared with private employer coverage. Any option to modernize the benefits runs the risk of exacerbating the program’s fiscal imbalance. That is why we believe that expansions should be made in the context of overall program reforms that are designed to make the program more sustainable over the long term. Any discussions about expanding beneficiary access to prescription drugs should carefully consider targeting financial help to those most in need and minimizing the substitution of public funds for private funds. Employers that offer drug coverage through a retiree health plan may choose to adapt their health coverage if a Medicare drug benefit is available. A key characteristic of America’s voluntary, employer-based system of health insurance is an employer’s freedom to modify the conditions of coverage or to terminate benefits. The HI trust fund is, in essence, an accounting device. It allows the government to track the extent to which earmarked payroll taxes cover Medicare’s HI outlays. Consistent with that tracking purpose, the 1999 Trustees’ annual report showed that Medicare’s HI component has been, on a cash basis, in the red since 1992, and in fiscal year 1998, earmarked payroll taxes covered only 89 percent of HI spending. The Trustees’ report, issued in March 1999, projected continued cash deficits for the HI trust fund. (See fig. 3, which shows the projected HI trust fund balance.) When the program has a cash deficit, as it did from 1992 through 1998, Medicare is a net claimant on the Treasury—a threshold that Social Security is not currently expected to reach until 2014. To finance these cash deficits, Medicare drew on the special issue Treasury securities it acquired during years when the program generated a cash surplus. In essence, for Medicare to “redeem” its securities, the government must raise taxes, cut spending for other programs, or reduce the projected surplus.
Outlays for Medicare services covered under Supplementary Medical Insurance (SMI)–physician and outpatient hospital services, diagnostic tests, and certain other medical services and supplies–are already funded largely through general revenues. Although the Office of Management and Budget (OMB) has recently reported a $12 billion cash surplus for the HI program in fiscal year 1999 due to lower than expected program outlays, the long-term financial outlook for Medicare is expected to deteriorate. Medicare’s rolls are expanding and are projected to increase rapidly with the retirement of the baby boomers. Today’s elderly make up about 13 percent of the total population; by 2030, they will comprise 20 percent as the baby boom generation ages and the ratio of workers to retirees declines from 3.4 to 1 today to roughly 2 to 1. Without meaningful reform, the long-term financial outlook for Medicare is bleak. Together, Medicare’s HI and SMI expenditures are expected to increase dramatically, rising from about 12 percent in 1999 to about a quarter of all federal revenues by mid-century. Over the same time frame, Medicare’s expenditures are expected to double as a share of the economy, from 2.5 to 5.3 percent, as shown in figure 4. The progressive absorption of a greater share of the nation’s resources for health care, like Social Security, is in part a reflection of the rising share of elderly population, but Medicare growth rates also reflect the escalation of health care costs at rates well exceeding general rates of inflation. Increases in the number and quality of health care services have been fueled by the explosive growth of medical technology. Moreover, the actual costs of health care consumption are not transparent. Third-party payers generally insulate consumers from the cost of health care decisions. 
In traditional Medicare, for example, the impact of the cost-sharing provisions designed to curb the use of services is muted because about 80 percent of beneficiaries have some form of supplemental health care coverage (such as Medigap insurance) that pays these costs. For these reasons, among others, Medicare represents a much greater and more complex fiscal challenge than even Social Security over the longer term. When viewed from the perspective of the entire budget and the economy, the growth in Medicare spending will become progressively unsustainable over the longer term. Our updated budget simulations show that to move into the future without making changes in the Social Security, Medicare, and Medicaid programs is to envision a very different role for the federal government. Assuming, for example, that the Congress and the President adhere to the often-stated goal of saving the Social Security surpluses, our long-term model shows a world by 2030 in which Social Security, Medicare, and Medicaid increasingly absorb available revenues within the federal budget. Under this scenario, these programs would absorb more than three-quarters of total federal revenue. (See fig. 5. Notes to the figure: the “eliminate non-Social Security surpluses” simulation can be run only through 2066 because of the elimination of the capital stock; revenue as a share of GDP during the simulation period is lower than the 1999 level due to unspecified permanent policy actions that reduce revenue and increase spending to eliminate the non-Social Security surpluses; Medicare expenditure projections follow the Trustees’ 1999 intermediate assumptions and reflect the current benefit and financing structure.) Budgetary flexibility would be drastically constrained and little room would be left for programs for national defense, the young, infrastructure, and law enforcement.
Assuming no other changes, these programs would constitute an unimaginable drain on the earnings of our future workers. The Trustees have also estimated the substantial changes that would be needed to restore actuarial balance to the HI trust fund. That analysis, moreover, does not incorporate the financing challenges associated with the SMI and Medicaid programs. Early action to address the structural imbalances in Medicare is critical. First, ample time is required to phase in the reforms needed to put this program on a more sustainable footing before the baby boomers retire. Second, timely action to bring costs down pays large fiscal dividends for the program and the budget. The high projected growth of Medicare in the coming years means that the earlier the reform begins, the greater the savings will be as a result of the effects of compounding. The actions necessary to bring about a more sustainable program will no doubt call for some hard choices. Some suggest that the size of the imbalances between Medicare’s outlays and payroll tax revenues for the HI program may well justify the need for additional resources. One possible source could be general revenues. Although this may eventually prove necessary, such additional financing should be considered as part of a broader initiative to ensure the program’s long-range financial integrity and sustainability. What concerns us most is that general funds devoted to the HI trust fund may be used to extend HI’s solvency without addressing the hard choices needed to make the whole Medicare program more sustainable in economic or budgetary terms. Increasing the HI trust fund balance alone, without underlying program reform, does nothing to make the Medicare program more sustainable—that is, it does not reduce the program’s projected share of GDP or the federal budget. From a macroeconomic perspective, the critical question is not how much a trust fund has in assets but whether the government as a whole has the economic capacity to finance all Medicare’s promised benefits—both now and in the future.
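The compounding point about early reform can be illustrated with a simple projection. The numbers here (a $200 billion base, baseline growth of 7 percent, and a reform that trims growth to 6 percent) are our hypothetical assumptions, not figures from the testimony:

```python
def projected_spending(years, base, growth, reform_year=None, reformed_growth=None):
    """Compound annual growth, with an optional slowdown starting in reform_year."""
    level = base
    for year in range(years):
        rate = growth
        if reform_year is not None and year >= reform_year:
            rate = reformed_growth
        level *= 1 + rate
    return level

# Hypothetical: $200 billion base, 7 percent baseline growth, reform trims
# growth to 6 percent, 30-year horizon.
no_reform = projected_spending(30, 200, 0.07)
early_reform = projected_spending(30, 200, 0.07, reform_year=0, reformed_growth=0.06)
delayed_reform = projected_spending(30, 200, 0.07, reform_year=10, reformed_growth=0.06)

# Starting the same reform a decade earlier compounds into much larger
# savings by year 30.
savings_early = no_reform - early_reform
savings_delayed = no_reform - delayed_reform
```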
We must keep in mind the unprecedented challenge facing future generations in our aging society. Relieving them of some of the financial burden of today's commitments would help preserve some budgetary flexibility for future generations to make their own choices. Similar concerns apply to the SMI portion of Medicare, which is projected to grow even faster than HI in coming decades, assuming no additional SMI benefits. The issue of the extent to which general funds are an appropriate financing mechanism for the Medicare program would remain important under financing arrangements that differed from those in place in the current HI and SMI structures. For example, under approaches that would combine the two trust funds, a continued need would exist for measures of program sustainability that would signal potential future fiscal imbalance. Such measures might include the percentage of program funding provided by general revenues, the percentage of total federal revenues or gross domestic product devoted to Medicare, or program spending per enrollee. As such measures were developed, questions would need to be asked about the appropriate level of general revenue funding. Regardless of the measure chosen, the real question would be what actions should be taken when and if the chosen cap is reached. Beyond reforming the Medicare program itself, maintaining an overall sustainable fiscal policy and strong economy is vital to enhancing our nation's future capacity to afford paying benefits in the face of an aging society. Decisions on how we use today's surpluses can have wide-ranging impacts on our ability to afford tomorrow's commitments. As we know, there have been a variety of proposals to use the surpluses for purposes other than debt reduction. Although these proposals have various pros and cons, we need to be mindful of the risk associated with using projected surpluses to finance permanent future claims on the budget, whether they are on the spending or the tax side.
Commitments often prove to be permanent, while projected surpluses can be fleeting. For instance, current projections assume full compliance with tight discretionary spending caps. Moreover, relatively small changes in economic assumptions can lead to very large changes in the fiscal outlook, especially when carried out over a decade. In its January 2000 report, CBO compared the actual deficits or surpluses for 1986 through 1999 with the first projection it had produced 5 years before the start of each fiscal year. Excluding the estimated impact of legislation, CBO stated that its errors in projecting the federal surplus or deficit averaged about 2.4 percent of GDP in the fifth year beyond the current year. For example, such a shift in 2005 would mean a potential swing of about $285 billion in the projected surplus for that year. Although most would not argue for devoting 100 percent of the surplus to debt reduction over the next 10 years, saving a good portion of our surpluses would yield fiscal and economic dividends as the nation faces the challenges of financing an aging society. Our work on the long-term budget outlook illustrates the benefits of maintaining surpluses for debt reduction. Reducing the publicly held debt reduces interest costs, freeing up budgetary resources for other programmatic priorities. For the economy, running surpluses and reducing debt increase national saving and free up resources for private investment. These results, in turn, lead to stronger economic growth and higher incomes over the long term. Over the last several years, our simulations illustrate the long-term economic consequences flowing from different fiscal policy paths. Our models consistently show that saving all or a major share of projected budget surpluses ultimately leads to demonstrable gains in GDP per capita.
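As a back-of-the-envelope consistency check (ours, not CBO's), the 2.4 percent average error rate and the $285 billion potential swing cited above jointly imply a projected 2005 GDP of roughly $11.9 trillion:

```python
error_share = 0.024  # CBO's average fifth-year error as a share of GDP
swing = 285e9        # potential 2005 surplus swing cited above, in dollars

implied_gdp = swing / error_share
print(f"implied projected 2005 GDP: ${implied_gdp / 1e12:.1f} trillion")  # $11.9 trillion
```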
Over a 50-year period, GDP per capita is estimated to more than double from present levels by saving all or most of projected surpluses, while incomes would eventually fall if we failed to sustain any of the surplus. Although rising productivity and living standards are always important, they are especially critical for the 21st century, for they will increase the economic capacity of the projected smaller workforce to finance future government programs along with the obligations and commitments for the baby boomers' retirement. Updating the Medicare benefit package may be a necessary part of any realistic reform program to address the legitimate expectations of an aging society for health care, both now and in the future. Expanding access to prescription drugs could ease the significant financial burden some Medicare beneficiaries face because of outpatient drug costs. Such changes, however, need to be considered as part of a broader initiative to address Medicare's current fiscal imbalance and promote the program's longer-term sustainability. Balancing these competing concerns may require the best from government-run programs and private sector efforts to modernize Medicare for the future. Further, the Congress should consider adequate fiscal incentives to control costs and a targeting strategy in connection with any proposal to provide new benefits such as prescription drugs. Given this expectation and the future projected growth of the program, some additional revenue sources may in fact be a necessary component of Medicare reform. However, it is essential that we not take our eye off the ball. The most critical issue facing Medicare is the need to ensure the program's long-range financial integrity and sustainability. The 1999 annual reports of the Medicare Trustees project that program costs will continue to grow faster than the rest of the economy.
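For scale, "more than doubling" GDP per capita over 50 years implies a sustained real growth rate of at least roughly 1.4 percent per year, as this small check shows:

```python
# Annual growth rate implied by a doubling over 50 years: 2**(1/50) - 1
implied_rate = 2 ** (1 / 50) - 1
print(f"{100 * implied_rate:.2f}% per year")  # 1.40% per year
```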
Care must be taken to ensure that any potential expansion of the program be balanced with other programmatic reforms so that we do not worsen Medicare’s existing financial imbalances. Current budget surpluses represent both an opportunity and an obligation. We have an opportunity to use our unprecedented economic wealth and fiscal good fortune to address today’s needs but an obligation to do so in a way that improves the prospects for future generations. This generation has a stewardship responsibility to future generations to reduce the debt burden they will inherit, to provide a strong foundation for future economic growth, and to ensure that future commitments are both adequate and affordable. Prudence requires making the tough choices today while the economy is healthy and the workforce is relatively large. National saving pays future dividends over the long term, but only if meaningful reform begins soon. Entitlement reform is best done with considerable lead-time to phase in changes and before the changes that are needed become dramatic and disruptive. The prudent use of the nation’s current and projected budget surpluses combined with meaningful Medicare and Social Security program reforms can help achieve both of these goals. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other Subcommittee Members may have. For future contacts regarding this testimony, please call Paul L. Posner, Director, Budget Issues, at (202) 512-9573 or William J. Scanlon, Director, Health Financing and Public Health Issues at (202) 512-7114. Other individuals who made key contributions include Linda F. Baker, Laura A. Dummit, John C. Hansen, Tricia A. Spellman, and James R. McTigue. 
(201033/935352) | Pursuant to a congressional request, GAO discussed options for increasing Medicare beneficiaries' access to prescription drugs, focusing on the: (1) factors contributing to the growth in prescription drug spending and efforts to control that growth; and (2) design and implementation issues to be considered regarding proposals to improve seniors' access to affordable prescription drugs. GAO noted that: (1) the Medicare benefit package provides virtually no coverage of outpatient prescription drugs; (2) in 1996, almost one third of beneficiaries had employer-sponsored health coverage, as retirees, that included drug benefits; (3) more than 10 percent of beneficiaries received coverage through Medicaid or other public programs; (4) to protect themselves against drug costs, the remainder of Medicare beneficiaries can choose to enroll in a Medicare+Choice plan with drug coverage or purchase a Medigap policy; (5) however, the availability, breadth, and price of such coverage is changing as the costs of expanded prescription drug use drive employers, insurers, and managed care plans to adopt new approaches to control the expenditures for this benefit; (6) over the past 5 years, prescription drug expenditures have grown substantially, both in total and as a share of all health care outlays; (7) prescription drug spending grew an average of 12.4 percent per year from 1993 to 1998, compared with a 5 percent annual growth rate for health care expenditures overall; (8) total drug expenditures have been driven up by the following factors: (a) both greater utilization of drugs and the substitution of higher-priced new drugs for lower-priced existing drugs; (b) private insurance coverage for drugs; (c) biotechnology advances and a growing knowledge of the human immune system; and (d) advertising of drugs; (9) all of these factors suggest the need for effective cost control mechanisms; (10) a common technique to manage pharmacy care and control costs is to use a formulary, which can affect how
frequently a drug is prescribed and purchased and, therefore, can affect its market share; (11) another way in which the market has been transformed is through the use of pharmacy benefit managers by health plans and insurers to administer and manage prescription drug benefits; (12) expanding access to more affordable prescription drugs could involve either subsidizing prescription drug coverage or allowing beneficiaries access to discounted pharmaceutical prices; (13) the design of a drug coverage option, as well as its implementation, will determine the effect of the option on beneficiaries, Medicare or federal spending, and the pharmaceutical market; (14) a new benefit would need to be crafted to balance competing concerns about the sustainability of Medicare, federal obligations, and the hardship faced by some beneficiaries; and (15) the effect of granting some beneficiaries access to discounted prices will hinge on details such as the price of the drugs after the discount, how discounts are determined and secured, and which beneficiaries are eligible. |
About one-third of all land in the United States is federally owned and consists largely of forests, grasslands, and other vegetated lands. Over the years, underbrush has grown substantially on these lands and, along with recent drought conditions and disease infestation, has fueled larger and more intense wildfires. Further, there has been an increase in the number and size of communities that border these areas, in what is known as the wildland urban interface. Suppressing wildfires that threaten these areas costs significantly more because protecting homes and other structures is costly. In 2000 and 2002, wildfires burned nearly 8.5 million and 7 million acres, respectively; and in 2003, wildfires burned about another 4 million acres. In both 2000 and 2002, suppression costs were over $1.4 billion each year; in 2003, suppression costs nearly reached that amount. Because suppression costs have exceeded appropriated funds, the agencies have had to transfer funds from other programs to supplement their suppression funds. Two years in advance of when funds are appropriated, the Forest Service and Interior develop budget requests by estimating the annual costs to suppress wildfires. Estimating these costs is inherently difficult because of the unpredictable nature of wildfires, including where they will occur, how intense they will be, and how quickly they will spread. As a result, these estimates, at times, result in funding for wildfire suppression that is insufficient to cover actual suppression costs. Historically, the Forest Service and Interior have used a 10-year rolling average of suppression expenditures as the foundation for their suppression budget requests. During each year's fire season, the Forest Service and Interior also develop monthly forecasts to update the overall suppression costs estimate and determine how much additional funding, if any, will be needed.
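The 10-year rolling average that historically anchored the suppression budget request can be sketched in a few lines (the expenditure history below is hypothetical, not actual agency data):

```python
def rolling_average_request(expenditures, window=10):
    """Base the budget request on the mean of the most recent `window` years."""
    recent = expenditures[-window:]
    return sum(recent) / len(recent)

# Hypothetical annual suppression expenditures, $ millions, oldest first:
history = [400, 520, 610, 350, 700, 1410, 690, 1400, 1330, 980]
print(round(rolling_average_request(history)))  # 839
```

Note that a severe season raises the average only by its one-tenth weight, which is one reason a request built this way can lag actual costs when fire seasons are worsening.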
When it becomes apparent that annual appropriated funds are insufficient to support forecasted suppression needs, the Forest Service and Interior are authorized to use funds from other programs within their agency to pay for emergency firefighting activities. From 1999 through 2003, the Forest Service and Interior transferred over $2.7 billion from various agency programs to help fund wildfire suppression when appropriated funds were insufficient. The Forest Service transferred monies from numerous programs supporting the breadth of its activities, while Interior transferred funds primarily from two programs: construction and land acquisition. To determine the amount of funds to transfer, the agencies used similar monthly forecasting models to determine suppression funding needs during the fire seasons. Agency officials acknowledged, however, that the models produced widely varying forecasts of suppression costs that substantially underestimated actual costs. Also, in determining the programs from which to transfer funds, the agencies attempted to select programs with projects that would not be significantly impacted by transfers because a portion of their funds would not be needed until subsequent years. Between 1999 and 2003, the Congress reimbursed the agencies for, on average, about 80 percent of the funds that were transferred. However, the Congress did not always reimburse the programs in amounts proportionate to the transfers. In addition, the Forest Service and Interior had some discretion in distributing the reimbursements among various projects, depending on their priorities at the time of reimbursement. For each of the last 5 years, wildfire suppression costs have been substantially greater than the amount of funds appropriated for suppression, necessitating the Forest Service and Interior to transfer over $2.7 billion from other agency programs to help fund wildfire suppression activities.
Of this amount, the Forest Service transferred the majority—almost $2.2 billion—while Interior transferred over $500 million. Nearly half of the total amount was transferred in 1 year alone, 2002, but substantial transfers were needed for other recent severe fire seasons as well. For example, during the 2000 and 2003 fire seasons, almost $400 million and about $870 million were transferred, respectively. As illustrated in figure 1, suppression costs have exceeded suppression appropriations almost every year since 1990. To determine the amount of funds to transfer each year, the agencies used monthly forecasting models to estimate likely wildfire suppression costs during the wildfire season. Agency officials acknowledged, however, that the models produced forecasts of suppression costs that varied by hundreds of millions of dollars when compared with actual, year-end suppression costs. For example, in June 2003, Interior's forecasting model predicted that suppression costs for the year would exceed suppression appropriations by about $72 million. A month later, the model predicted costs would exceed appropriations by about $56 million; by late August, the model predicted that costs would exceed appropriations by more than $100 million. By the end of the fiscal year, Interior had transferred over $175 million to cover actual suppression costs. Forest Service forecasts also were well short of year-end suppression costs during 2003. The agency's forecasting model predicted that annual suppression costs would reach nearly $800 million, indicating that current year funds would be about $375 million less than projected needs. By the end of the fiscal year, however, the Forest Service had transferred nearly $700 million to cover suppression costs of over $1 billion.
Both Forest Service and Interior officials indicated there is a high degree of uncertainty in trying to estimate the current year's suppression costs, primarily because weather conditions are difficult to predict, even over the short term. Despite the discrepancies between agency forecasts and actual suppression costs, the agencies have performed no formal assessments of their forecasting models' accuracy. Agency officials acknowledged that such assessments would be useful for monitoring and improving the reliability of their models and enhancing their ability to predict when transfers will be needed and how much to transfer. The agencies also acknowledged that the forecasts have not been accurate and are revising the models in an effort to improve the forecasts. In deciding the programs from which to transfer funds, Interior and Forest Service officials primarily selected programs with projects that would not be significantly impacted by transfers because a portion of their funds would not be needed until subsequent years. Interior transferred funds mostly from its construction and land acquisition programs, with about two-thirds of the funds coming from construction. These two programs are used to construct and maintain facilities, roads, and trails on Interior lands, among other things, and to acquire additional public lands. In 2002 and 2003, Interior also transferred some funds from fire-related preparedness, postfire rehabilitation, and hazardous fuels reduction projects in order to support suppression activities. Within Interior, the National Park Service transferred substantially more funds than the other three agencies over the last 5 years, transferring about 60 percent of the $540 million transferred. Unlike Interior, the Forest Service transferred monies from numerous programs supporting the breadth of its activities.
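A formal accuracy assessment of the kind the agencies acknowledged would be useful could start with a simple error metric. Here is a minimal sketch using the 2003 Interior figures cited above; treating each mid-season forecast as a prediction of the $175 million year-end total is our simplifying assumption, not the agencies' method:

```python
def mean_absolute_pct_error(forecasts, actual):
    """Average absolute forecast error, as a percentage of the actual value."""
    errors = [abs(f - actual) / actual for f in forecasts]
    return 100 * sum(errors) / len(errors)

# Interior's 2003 monthly forecasts of costs above appropriations ($ millions)
forecasts_2003 = [72, 56, 100]
actual_2003 = 175  # funds actually transferred by year end
print(f"MAPE: {mean_absolute_pct_error(forecasts_2003, actual_2003):.0f}%")  # MAPE: 57%
```

Tracking a metric like this season over season would show whether the agencies' model revisions are actually improving the forecasts.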
These programs included its construction; land acquisition; national forest system, which among other things conducts postwildfire rehabilitation and restoration work; and state and private forestry programs, which support activities such as grants to states, tribes, communities, and private landowners for fire management, urban forestry, and natural resource education as well as insect suppression. Before 2001, the Forest Service had transferred funds solely from its Knutson-Vandenberg Fund (K-V Fund), because historically this restoration program had large amounts of money that could not be used by the end of the fiscal year. Since the mid-1980s, the Forest Service has transferred more than $2.3 billion from this program; however, more than $400 million has not been reimbursed. As a result, the Forest Service became concerned about the viability of the K-V Fund as a source of transfers and in 2001 began transferring funds from other major Forest Service programs. The Forest Service and Interior programs from which funds were transferred and the amount of funds transferred and reimbursed from 1999 through 2003 are outlined in table 1. Additional details on these matters are included in appendixes II and III. Over the last 5 years, the Congress reimbursed, on average, about 80 percent of the funds that the Forest Service and Interior transferred for wildfire suppression expenses. Although the agencies received nearly full reimbursement for funds transferred in 2000 and 2001, the Forest Service and Interior were reimbursed about 74 percent and about 81 percent, respectively, of the funds transferred in 2002 and 2003. For these later 2 years, individual Forest Service programs were reimbursed at varying rates. For example, the Congress reimbursed the Forest Service’s state and private forestry program for nearly 100 percent and its national forest system program for 40 percent of the funds transferred in 2002. 
In contrast, the Congress reimbursed Interior’s construction and land acquisition programs at about 81 percent each. Congressional appropriators and Office of Management and Budget (OMB) officials worked with Forest Service and Interior officials to determine the amount of funds to reimburse to the numerous agency programs impacted by funding transfers in order to help the agencies meet their current program needs. For example, according to Forest Service officials, state and private forestry projects, such as community assistance grants and forest legacy project grants, were important priorities when the Forest Service received reimbursements in 2003 for funds transferred in 2002. As a result, the state and private forestry program received full reimbursement. In contrast, the national forest system had a large amount of funds transferred in 2002 that was dedicated for the salaries of staff diverted from their normal duties to fight wildfires. OMB officials indicated that since these employees had been paid—albeit out of the wildfire suppression account—the transferred salaries required no reimbursement. Therefore, the national forest system program was reimbursed for a much smaller amount—about 40 percent. When the Forest Service and Interior received less than full reimbursement for funds transferred in 2002 and 2003, the agencies used different procedures to distribute the reimbursed funds within their programs. Forest Service officials distributed the funds to projects reflecting current priorities within individual programs, which were not necessarily the same projects from which funds were transferred. For example, Forest Service officials in California targeted the funds to projects within the San Bernardino National Forest to help address the increased wildfire risk created by insect infestation, even though no funds had been transferred from these projects. 
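The varying program-level rates above also mean the overall reimbursement percentage is a transfer-weighted average, not a simple mean of the program rates. A small sketch makes this concrete; the dollar amounts are hypothetical, chosen only to mirror the percentages discussed in the text:

```python
# Hypothetical (transferred, reimbursed) amounts in $ millions, chosen only
# to mirror the program-level percentages discussed in the text.
programs = {
    "state and private forestry": (100, 99),   # ~99% reimbursed
    "national forest system": (250, 100),      # 40% reimbursed
    "construction": (150, 122),                # ~81% reimbursed
}

total_transferred = sum(t for t, _ in programs.values())
total_reimbursed = sum(r for _, r in programs.values())

for name, (t, r) in programs.items():
    print(f"{name}: {100 * r / t:.0f}%")
# Weighted by transfer size, not a simple mean of the three rates:
print(f"overall: {100 * total_reimbursed / total_transferred:.0f}%")  # overall: 64%
```

Because the largest transfers came from the program reimbursed at the lowest rate, the weighted overall rate (64 percent here) falls well below the simple mean of the program rates (about 73 percent).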
Similarly, officials in Colorado directed the reimbursed funds to high-priority rehabilitation efforts in the aftermath of the Hayman fire that had occurred in the Pike-San Isabel National Forest in June 2002. In contrast to the Forest Service's approach, Interior reimbursed funds solely to projects from which funds were transferred; however, the four agencies within Interior did this in varying ways. For example, the Fish and Wildlife Service fully repaid its high-priority construction projects for transfers made in 2002, although it did not repay lower-priority construction projects that also transferred funds, such as the restoration of a visitors center. The Bureau of Indian Affairs, on the other hand, fully repaid all projects from which funds were transferred in 2002, except one—a school renovation project that agency officials believed could be delayed pending future additional funding. In contrast, the Bureau of Land Management repaid all construction projects at the same percentage, while the National Park Service repaid construction projects at widely varying amounts depending on their perceived priorities. Table 1 provides information on the amount of funds reimbursed to the various Forest Service and Interior programs. Additional details on reimbursements are provided in appendix II. The Forest Service and Interior canceled or delayed numerous projects, failed to fulfill certain commitments to partners, and faced difficulties in managing their programs when funds were transferred for fire suppression. The agencies canceled or delayed contracts, grants, and other activities, which in some cases increased the costs and time needed to complete projects. Further, agency relationships with state agencies, nonprofit organizations, and communities became strained when the agencies could not fulfill commitments, such as awarding grants on time.
In addition, transfers disrupted agency efforts to effectively manage programs, causing planned activities to go unfunded and, in some cases, causing program funds to be depleted or overspent. If transfers continue, the impacts on projects, relationships, and program management will likely continue and increase. Although Forest Service and Interior local units generally are aware of these impacts, the agencies have no systems in place to track the impacts at a national level. (Dollars noted in the remaining text of this section are not adjusted for inflation.) Projects in a variety of Forest Service and Interior programs were delayed or canceled as a result of funding transfers, thereby affecting agency firefighting capabilities, construction and land acquisition goals, state and community programs, and other resource management programs. Furthermore, officials often had to duplicate their efforts because of transfers, which prolonged delays and added costs. For example, officials had to revise budgets and construction plans, update cost estimates, and rewrite land acquisition documents when delays caused them to become outdated, all of which further compounded project delays. In some cases, preparation of such documents added substantial costs. For example, appraisal and legal fees for certain land acquisition efforts added thousands of dollars to project costs. In addition, when delays were prolonged, supply costs increased, land prices rose, and impacts to natural resources spread, which also increased projects’ costs. Although funding transfers were intended to aid fire suppression, in some cases, the Forest Service and Interior delayed projects that were intended to reduce fire risk or improve agency firefighting capabilities. Following are examples of such projects: Fuels reduction projects, New Mexico: In 2003, $191,000 was transferred from three fuels reduction projects covering 480 acres of Forest Service land in the wildland urban interface. 
All three thinning projects were near communities, affecting about 325 homes. The projects are scheduled to be completed in 2004. National Park Service fire facilities projects, nationwide: In 2002, about $3.4 million was transferred from 13 fire facilities projects at 10 different parks. The projects, including construction of facilities for fire equipment storage, a crew dormitory, and fire engine storage buildings, among others, were delayed for several months. Four of these projects—in Big Bend National Park (Texas); Yellowstone National Park (Idaho, Montana, and Wyoming); Sequoia and Kings Canyon National Parks (California); and Shenandoah National Park (Virginia)—were again delayed in 2003 when about $1.9 million was transferred. Forest Service fire facilities projects, California: In 2003, the Forest Service deferred construction of two engine bays, one fire station, and three fire barracks in California because of funding transfers. Consequently, fire crews at one forest must live in housing that, according to agency officials, is substandard and has required recurring maintenance to address roof leaks, plumbing malfunctions, and electrical failures caused by rodents damaging the wires. Additionally, officials told us that such conditions make it difficult to recruit and retain fire crews. Wildfire management courses, southern region: In 2003, the Forest Service canceled two required training courses for officials who approve wildfire management decisions and expenses. About 80 officials who represent national forests in at least 12 states had planned to attend. One course emphasized cost containment, and the other covered a wide range of fire management issues, including safety. Both courses were rescheduled and held in 2004.
Fire research projects, Montana: When funds were transferred in 2002, the LANDFIRE project—a multiagency effort to collect comprehensive data on fire risk—was delayed about 3 months, the collection of data critical for modeling fire behavior was delayed about 6 months, and data on smoke levels were lost because an instrument was not purchased in time to use it during the 2002 fire season. In addition, temporary staff were released early in 2002, further reducing the amount of research that could be performed. Agency officials also targeted construction and land acquisition programs for funding transfers because these projects are often funded one year, with the expectation that the project will be implemented—and the funds spent—over several years. Consequently, these programs often have large unused fund balances, and transfers can sometimes be made with minimal impact as long as the funds are reimbursed before they are needed. Accordingly, some officials, especially in the Interior agencies, told us that impacts to projects had been relatively limited. Nevertheless, many construction and land acquisition projects were delayed or canceled, particularly in the Forest Service. Some construction projects that were delayed due to funding transfers were delayed for 1 year or more because of seasonal requirements, even when funds were reimbursed after only a few months. For example, a project to replace three backcountry bridges at the Inyo National Forest in California was planned for late summer when stream flows would be low and conditions would be safe for workers. According to a Forest Service official, the project was important for public safety because one bridge was completely washed out and the other two bridges were at risk of failing while people were crossing them. Figure 2 shows one of these bridges before—when handrails were sagging or missing and support logs were decaying—and after it was replaced.
Project funds were transferred in 2002, so the project was deferred to late summer 2003; however, funds were once again transferred, and the project was not completed until 2004. In other cases, additional adverse effects resulted when projects with seasonal requirements were delayed. For example, according to a Forest Service official, a popular campground in Arizona may be closed during the 2004 operating season while improvements are made because seasonal requirements combined with fire transfers resulted in extended delays. In 2003, $450,000 was transferred from this project, delaying it 2 months into the winter. Because of the weather, construction crews could not work on the project; thus, it was delayed several additional months. Further, this campground was already closed during the 2003 operating season because funding transfers in 2002 had delayed planned improvements. In some cases, construction projects that were initially delayed were canceled when supply costs rose and the Forest Service no longer had sufficient funds to pay for the projects. For example, a 2003 project to rehabilitate a historic residence at the Sierra National Forest in California was delayed when funds were transferred for wildfire suppression. According to an agency official, the project, which would have converted the residence into a public information facility, was intended to attract tourists and help diversify the local economy in an area where a 1994 lumber mill closure contributed to a deteriorating economy. The lowest bid that the Forest Service received for the project was about $186,000. However, before the funds were reimbursed, the contractor—citing a 300 percent increase in lumber prices—rescinded the bid and estimated the new cost of the project at $280,000, an increase of nearly $100,000. Consequently, the Forest Service canceled the project and resubmitted it in its 2005 budget, with a higher cost estimate.
Land acquisition costs can also increase when projects are delayed. For example, figure 3 shows a portion of a 65-acre property in Arizona that the Forest Service intended to purchase for approximately $3.2 million in 2002, but had to defer due to funding transfers. About a year later, the Forest Service purchased the property, but its value had increased, and the purchase cost about $195,000 more than it would have a year earlier. A nonprofit organization also incurred additional costs of about $3,000 because it paid for the updated appraisal. In addition, the agencies sometimes risked losing the opportunity to purchase land when funds were transferred from land acquisition programs. For example, in 2003, the Fish and Wildlife Service planned to purchase property in Alabama that contains habitat for the gopher tortoise, which is a species of concern in Alabama. However, because of funding transfers and only partial reimbursement, the service no longer had sufficient funds. Agency officials were concerned that the property would be sold privately. To prevent a sale to private owners, a nonprofit organization agreed to buy the property and hold it until the Fish and Wildlife Service could purchase it from the organization. When funds were transferred for fire suppression, many Forest Service grants were delayed or canceled, which affected states, communities, nonprofit organizations, and others. Examples of such projects are discussed below: Urban and community forestry grants in seven states, southern region: The Forest Service did not fund eight urban and community forestry grants totaling $993,000 due to 2003 funding transfers. State forestry departments planned to "subgrant" about 80 percent of the funds to local communities for more than 75 projects, such as planting trees, developing local land use plans, and holding several workshops and conferences on topics such as urban forestry.
Community assistance grant, New Mexico: A 2002 grant to a small business owner was delayed about 6 months because of funding transfers. The business processes small-diameter wood to make signs and other marketable products, and the grant would have paid for a wood chipper essential to the process. When the grant was delayed, the business owner could not purchase the chipper and process the wood. As a result, he closed his business for a year, laid off some staff, and reported estimated revenue losses of millions of dollars. Watershed education grant, New Mexico: A 2003 economic action grant for $32,000 was canceled and will not be funded. The grant would have paid for a nonprofit organization to conduct an education project about sustainable grazing in a severely degraded watershed where the intended audience included ranchers, community members, public officials, and others. The nonprofit organization reported investing about $5,250 in preparation for the project. When resource management projects were delayed and canceled, natural resources were affected (e.g., soils eroded, insects infested forests, and encroaching plants spread and threatened newly planted trees). Further, prolonged delays sometimes compounded these effects because additional time allowed the damage to spread. For example, at the Lincoln National Forest in New Mexico, a project to repair a washout in a road was deferred when funds were transferred in 2002. During a 2-year delay that was partially caused by funding transfers, the washout grew dramatically. Consequently, a more significant structure is now needed to prevent erosion, which will result in additional costs of between $9,000 and $15,000, according to an agency official. Additionally, at the White River National Forest in Colorado in 2003, $111,000 was transferred from a project to remove about 150 acres of trees infested with spruce beetle, thereby deferring the project. 
As a result, the infestation grew to about 230 acres, killing additional trees and raising the cost of the project by about $24,000 over the previous estimate, according to an agency official. Further, there is a chance that the beetle population will spread to the point where it cannot be contained at any cost and where tree mortality will increase dramatically—affecting up to 6,000 acres. If this further infestation occurs, an agency official said the project would be canceled. According to an official at the Bitterroot National Forest in Montana, a project to stabilize 9 miles of a dirt road was delayed when about $1.2 million was transferred in 2002. As shown in figure 4, the road was collapsing. As a result, sediment was running into a creek, jeopardizing the habitat of two species of fish, one of which is a threatened species. Two years after the transfer, $430,000 was reimbursed to the project, and officials expect to stabilize about 2 of the 9 miles of road. Because of the prolonged delay, however, additional sediment has run into the stream and further compromised the fish habitat. Furthermore, agency officials do not expect to receive any additional reimbursement to complete the remaining stabilization, and they are concerned about the increasing sedimentation and continuing decline of the fish habitat. In addition, sometimes canceling one project affects the success of others. For example, at the Hiawatha National Forest in Michigan, a project intended to ensure the success of reforestation efforts—by removing encroaching plants—will be canceled in 2004. The encroaching plants are crowding newly planted trees, as shown in the photograph on the left in figure 5, and threatening their survival, according to agency officials. As a result, one official estimated that 20 to 25 percent of the newly planted trees will die, and that it will cost about $24,000 to remove the dead trees and reforest the area. 
In contrast, the photograph on the right shows a site where young trees were protected by removing encroaching plants, and, consequently, the trees survived. Examples such as these were widespread in the six regions we visited. For example, because of funding transfers in 2002 and 2003, the Forest Service’s northern region deferred reforestation on 5,900 acres, weed control on 74,000 acres, maintenance on 1,500 miles of road, replacement of 150 culverts to improve fish habitat, repair of five damaged bridges, and award of 11 stewardship contracts. When the Forest Service and Interior transferred funds for fire suppression, they sometimes failed to fulfill commitments to partners, which caused relationships to be strained. Federal agencies rely on partnerships and other forms of collaboration with each other, state and local governments, nonprofit organizations, and others to accomplish their work. For example, federal land acquisitions are often facilitated by nonprofit organizations and involve private landowners, agency recreation programs depend on volunteers, and some research projects are joint efforts between the Forest Service and Interior and may involve university participants as well. In addition, communities, state forestry programs, and others depend on federal grant programs for financial support. When funds were transferred for fire suppression, not only were federal programs impacted, but nonprofit organizations, states, and communities were also affected. In transferring funds from land acquisition programs, agency relationships with nonprofit organizations were affected. Nonprofit organizations often facilitate agency land acquisitions by negotiating with landowners and by sometimes purchasing the land, then selling it to the agency. When agencies delayed land acquisitions, nonprofit organizations sometimes incurred interest costs of thousands of dollars on loans they took out for the purchase of the land. 
These costs were generally absorbed by the nonprofit organization and not passed on to the federal agencies. For example, one organization bought a parcel of land in South Carolina with the intent of selling it to the Forest Service in 2002; however, the funds to purchase the land were transferred for wildfire suppression. The Forest Service eventually purchased the land in 2003, but in the meantime, the nonprofit organization had incurred about $300,000 in interest costs. One nonprofit organization reported that 22 land acquisition projects were delayed in 2002, and 21 projects in 2003, due to transfers. A representative from the organization said that if funds are again transferred in 2004, the organization will view this practice as a trend, rather than an anomaly, and will likely invest its funds elsewhere rather than work with the Forest Service and Interior. Agency relationships with landowners were affected as well. For example, the Forest Service has been working for several years with state officials and others to obtain a conservation easement in Hawaii. According to a Forest Service official, it “has been a major effort to build a high enough level of trust with the private landowner.” The official is concerned that transfers—which depleted the necessary funds for this project in both 2002 and 2003—may jeopardize their relationship with the landowner, who may choose to develop the property rather than wait for the Forest Service to secure the necessary funds. If the land is developed, an important habitat for two endangered bird species will be lost. Community groups and volunteer or nonprofit organizations also invest considerable time and money to prepare projects and grant proposals. When the Forest Service and Interior did not fulfill their commitments, some of these investments were lost. 
For example, a 2002 Collaborative Forest Restoration Program grant in New Mexico would have paid for thinning treatments to be conducted by a local workforce, with the resulting wood chips to be processed into marketable products. A nonprofit organization that was a partner in the project conducted a $30,000 training program to prepare the local workforce. However, the grant was delayed for about 6 months because of 2002 funding transfers, and when funds became available, the trainees were employed elsewhere and unavailable. Another example involves a nonprofit organization that works collaboratively with communities and Forest Service and Interior agencies to design and implement large-scale fire restoration projects across the country. The collaborative teams collectively review the outcomes of projects, such as controlled burns, and share their knowledge and experience with one another. Of the 30 projects that were to receive federal funding, 12 have been delayed as a result of funding transfers. According to a representative of this organization, the practice of transferring funds for wildfire suppression “hurts the credibility of agencies,” and has led two of the project teams to not apply for further funding because of the uncertainty caused by the possibility of transfers. The fire transfers also affect state forestry departments, which depend on Forest Service grants to support their programs. In recent years, state budgets have been strained, making it difficult for state governments to compensate for the loss of federal funding. When the Forest Service began transferring funds for fire suppression in 2002, some states were concerned about the viability of their forestry programs. For example, Forest Service grants supply nearly 60 percent of Nebraska’s annual State Forest Service budget, without which the state would have to significantly reduce its operation—including laying off staff. 
According to the Nebraska State Forester, when funds were transferred in 2002, the state had already spent over $1 million beyond its existing budget because it anticipated receiving a Forest Service grant. After a period of uncertainty, the grant was awarded. However, the State Forester said that, partly as a result of ongoing budget uncertainties, one staff member left the agency and two candidates declined job offers, leaving another position vacant. States were also affected when, in 2003, $50 million was transferred from the 5-year, $100 million Forest Land Enhancement Program, and only $10 million was reimbursed. This program, which is managed by states, helps private landowners improve the health of their forestlands through activities such as timber improvement, wildlife habitat management, and fuels reduction. The $100 million was intended to last for 5 years. In the first year, the Forest Service allocated $20 million to the states, leaving an $80 million balance in the program. When only $10 million of the $50 million transfer was reimbursed, the program was left with a balance of $40 million—or half of the expected budget—for the remaining 4 years. Foresters are concerned about the viability of the program, which provides an economically feasible alternative to landowners who might otherwise sell their land for development. Further, foresters believe that by preventing development of such land, the program helps avoid habitat fragmentation, which was identified by the Forest Service Chief as one of the four largest threats to the nation’s forests. Nonetheless, with so much of the program’s budget lost to funding transfers and its viability in question, agency officials did not expect to receive any funding for the program in 2004 and did not request any funding for 2005. 
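The Forest Land Enhancement Program balance described above follows directly from the reported figures; a minimal sketch of the arithmetic (all amounts in millions of dollars, taken from this report):

```python
# Forest Land Enhancement Program balance, in millions of dollars,
# using the figures reported for the 5-year, $100 million program.
authorized = 100
first_year_allocation = 20   # allocated to the states in the first year
transferred = 50             # transferred for fire suppression in 2003
reimbursed = 10              # the only portion of the transfer returned

balance = authorized - first_year_allocation - transferred + reimbursed
print(balance)  # 40: half of the $80 million expected for the remaining 4 years
```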
According to agency officials, the Forest Service will not be able to continue the program unless the Congress appropriates funds for fiscal year 2005 or subsequent years of the authorization period. When funds were transferred for fire suppression, the agencies’ efforts to manage their programs—including budgeting and planning for annual and long-term programs of work—were disrupted. Some programs, such as the Forest Service K-V and Working Capital Funds, are managed like savings accounts, accumulating funds over multiple years to be spent according to a specific schedule for activities such as forest improvement and vehicle maintenance and replacement. When transfers were made from these programs without subsequent reimbursement, agencies had to begin accumulating the funds again or cancel the planned expenses. For other programs, such as construction and land acquisition, transfers interfered with agency and congressional priorities. In some cases, Forest Service programs went into deficit because transfers disrupted planned budgets and officials overspent program funds in order to pay for essential expenses. Actions taken by the agencies may have mitigated some of these impacts, but compounded others. Funding transfers have left the Forest Service with insufficient funds to pay for all of the K-V projects it planned at the time the funds were collected. Over the past 5 years, about $640 million has been transferred from the K-V Fund for wildfire suppression, while only $540 million has been reimbursed. Moreover, transfers have been made from the K-V Fund for decades with only partial reimbursement. Since the mid-1980s, about $2.3 billion has been transferred from the K-V Fund, and only $1.9 billion has been reimbursed. According to agency officials, there have been sufficient funds to fully implement the K-V reforestation projects in any given year. 
However, there have not always been sufficient funds over the years to implement other programs that rely on the K-V Fund. For example, before reimbursements were received for 2003 transfers, Forest Service officials said they would only be able to fund about $60 million of $96 million in K-V projects for 2004 dealing with activities such as habitat improvement. Even though reimbursements for 2003 transfers were later received, Forest Service officials indicated that many of the habitat improvement projects that had been deferred to absorb the shortfall will not be accomplished in 2004 due to the shortened period of work. Faced with unpredictable information about funding transfers and reimbursements, the Forest Service has found it difficult to reliably estimate how much will be deposited into and withdrawn from the K-V Fund and, therefore, to effectively manage the fund and the programs it supports. Similarly, transferring funds from the Working Capital Fund disrupted the Forest Service’s efforts to carry out long-term expense planning, making it difficult for agency officials to effectively manage programs. For example, the Forest Service no longer has enough funds to pay for its planned vehicle and computer replacements because of funding transfers. Each program that uses vehicles or computers allocates a portion of its budget to pay monthly charges into the Working Capital Fund, which accrues these deposits over a period of years to spend on vehicle and computer purchases and maintenance. Vehicles and computers are then maintained as needed and replaced according to a schedule designed to maximize cost effectiveness. In 2002 and 2003, however, some of the funds that agency officials had been accumulating for years were transferred and no longer available for maintenance and planned replacements. As a result, maintenance and replacement schedules were disrupted, and purchases had to be delayed. 
For example, in 2002, the Forest Service postponed planned purchases of fire engines, helitack trucks, fire crew carriers, and patrol rigs when funds were transferred in California. Since more than 90 percent of these transfers were not reimbursed, agency officials had to either continue using older vehicles or reduce their fleet size and will have to make additional payments to accrue enough savings for the planned purchases. Forest Service efforts to prioritize projects were also disrupted. In an attempt to avoid project delays and cancellations after having lost funds to transfers in 2002, agency officials awarded contracts and grants earlier in the year in 2003. Although such efforts mitigated some impacts of funding transfers, they also interfered with agency attempts to implement high-priority projects. When officials expected funds to be transferred, they implemented projects that could be completed quickly and early in the year, although they were not necessarily their highest priority projects. On the other hand, the Forest Service was able to implement some of its high-priority projects later by redirecting reimbursements to them. For example, in California, agency officials targeted funds to the San Bernardino National Forest, where insect infestation had caused widespread tree mortality and elevated fire risk. In Colorado, officials directed reimbursements to high-priority rehabilitation efforts in the aftermath of the Hayman fire, shown in figure 6. The redirection of funds was authorized by the Congress and may have helped preserve agency priorities. However, under some programs, such as construction and land acquisition, appropriations committee reports direct the agencies to fund specific projects (which agency officials refer to as “congressionally directed” projects). 
In some cases, officials paid for congressionally directed projects by shifting funds from projects that the committee reports had not specifically identified, or projects that were less expensive than anticipated, and therefore had “savings.” However, one National Park Service official expressed concern about these unfunded projects, suggesting that if transfers continue without complete reimbursement, the construction program may no longer have sufficient funds to pay for all congressionally directed projects, even though funds were already appropriated for them. Funding transfers also disrupted annual budgeting efforts, contributing to numerous individual Forest Service programs going into deficit in 2003 when agency officials overspent funds internally set aside for the programs. Forest Service officials attributed the deficits in part to actions they took to execute the transfer of funds—specifically, the combination of spending early and transferring late. In 2002, the fire season began unusually early, and the Forest Service ordered an agencywide spending freeze on all nonessential expenses beginning in early July. By doing so, the Forest Service ensured that enough funds were available to pay for suppression costs. However, at the end of the fiscal year, there were substantial funds left in some programs, and officials believed that more projects could have been completed. In an effort to avoid this situation and to complete more projects while still providing for suppression costs in 2003, the Forest Service did not start transferring funds until mid-August and, even then, did not order a spending freeze. In addition, agency officials focused on spending money earlier in the year, so that they could complete more projects before funds were transferred for suppression. After funds were transferred, some programs had nearly depleted their financial resources. 
Nevertheless, agency officials said they continued spending in a number of cases because they had made commitments to contractors or others, or because expenses such as vehicle maintenance were essential. At year-end, some programs were in deficit. For example, all 11 forests in the Forest Service’s southwestern region ended 2003 with deficits in at least 30 percent of the programs from which transfers were made. Seven of the 11 forests had deficits in 50 percent or more of these programs. Another factor that contributed to 2003 program deficits was that the Forest Service used unreliable estimates to determine the amount of money available for transfers. Specifically, when the Forest Service made transfers in 2003, its headquarters officials estimated the minimum balance necessary for each program by projecting salary needs for the remainder of the fiscal year and adding a small amount for contingencies. The estimate was based on two pay periods in July, and, in most cases, headquarters transferred all of the balance above this estimated amount. However, headquarters officials made this transfer without adequately consulting the regions or local forest units to obtain information on their specific salary needs for the remainder of the fiscal year. As a result, in some cases, the salary estimates were understated because some staff were on suppression duty during the two pay periods and the suppression program was paying their salaries. Consequently, when these staff returned from suppression duty before the end of the fiscal year, the balance remaining was not always sufficient to cover their salary costs. According to headquarters officials, they used rough salary estimates because suppression program funds were nearly depleted and they needed to make transfers immediately, leaving inadequate time for forest-level officials—who have access to detailed payroll information—to estimate salary costs. 
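The understatement mechanism described above can be illustrated with a hedged sketch. The staff counts and pay figures below are hypothetical, invented solely for illustration; only the mechanism (projecting from pay periods when detailed staff were charged to the suppression program) comes from this report:

```python
# Hypothetical illustration of how the 2003 salary estimates were understated.
# All figures are invented for the sketch; only the mechanism is from the report.

PAY_PER_PERIOD = 2_000       # per staff member, per pay period (hypothetical)
REMAINING_PERIODS = 4        # pay periods left in the fiscal year (hypothetical)

total_staff = 50
on_suppression_detail = 10   # charged to the suppression program in July

# Headquarters projected remaining-year needs from two July pay periods,
# when the detailed staff were absent from the program's payroll:
observed_payroll = (total_staff - on_suppression_detail) * PAY_PER_PERIOD
estimated_need = observed_payroll * REMAINING_PERIODS

# Actual need once the detailed staff returned before year-end:
actual_need = total_staff * PAY_PER_PERIOD * REMAINING_PERIODS

shortfall = actual_need - estimated_need
print(shortfall)  # the deficit the program must absorb after the transfer
```

With these illustrative numbers, a fifth of the workforce being on detail during the sample periods understates the remaining-year salary need by the same fifth, which the program must then absorb as a deficit.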
Nevertheless, officials in the Washington Office directed regional and forest-level officials to ensure that all full-time staff continued to be paid in full. In order to do so, in some cases, staff worked in alternate programs so that they could be paid through those programs. In other cases, agency officials continued to draw salaries from depleted programs, and, as a result, the programs went into deficit. Further, to avoid this situation, some officials said their managers encouraged them to go on fire suppression detail where there was a need, so that their salaries would be paid from the suppression program. Forest Service officials indicated they used the following year’s appropriation to replenish the programs that went into deficit; however, this practice reduced the amount of funds available for that year’s program of work. Finally, if transfers to pay for wildfire suppression continue, project cancellations and delays, strained relationships, and management difficulties will likely continue and be compounded. According to agency officials, some impacts have yet to become apparent. For example, some projects are funded in one year with the expectation that the funds will be spent over several years as the project is implemented. For such projects, the impacts of transfers may only become apparent as the project nears its completion. Additionally, when projects are deferred to the next year, agency officials often must use resources originally dedicated to other projects. The result is a domino effect: deferring one year’s projects displaces the next year’s projects, which must in turn be deferred to the following year. Furthermore, because of 2003 program deficits, the impacts of funding transfers will continue into 2004. For programs that were in deficit at the end of 2003, officials had to first pay off the deficit at the beginning of 2004, effectively reducing their annual budget and the number of projects they will be able to fund. 
If funding transfers continue, the agencies and the Congress will repeatedly confront difficult decisions in determining how much funding to transfer from which programs and how much to reimburse. In making such decisions, the Forest Service and Interior have attempted to minimize impacts to programs and projects, but neither agency systematically tracks such impacts at a national level. To identify the impacts of funding transfers on its programs in 2003, Forest Service officials collected some information about impacts from regional offices. However, the information was neither consistent nor comprehensive because not every region provided it, and those that did provided it in different forms with varying degrees of detail. Enhancing their understanding of how funding transfers affect programs could improve the ability of the agencies and the Congress to minimize negative impacts to programs and projects. In 2003, the Forest Service added a feature to its accomplishment reporting system to track the impacts of funding transfers. The feature allows agency officials to identify which national performance goals are affected by transfers and to what extent. For example, officials can identify how many acres of land were not acquired because of funding transfers. However, there are several agency programs that do not use this system to track their accomplishments. If more programs used this system and tracked accomplishment shortfalls caused by funding transfers, the Forest Service and the Congress would have more comprehensive information and could make more informed decisions about wildfire suppression funding, transfers, and reimbursements. Interior similarly could refine its existing accomplishment tracking systems to collect nationwide information about the impacts of transfers on its programs. 
Because accomplishment information is compiled at the end of the fiscal year, it would be of limited value in determining potential effects of current year transfers before they are made. Nevertheless, nationwide information on impacts from prior years could help agency officials and the Congress make informed decisions about current year transfers and reimbursements. To help mitigate the negative impacts of funding transfers, the Forest Service and Interior should improve their method for estimating annual suppression costs and the Congress could consider alternative approaches for funding wildfire suppression. The agencies’ use of a 10-year average of wildfire suppression costs to estimate and budget for annual suppression costs has substantially underestimated actual costs during the last several years. While uncertainties about the number of wildfires and their location, size, and intensity make it difficult to estimate wildfire suppression costs, alternative methods that more effectively account for these uncertainties and annual changes in firefighting costs should be considered for improving the information provided to agency and congressional decision makers. Additionally, to further mitigate the impacts of funding transfers, the Congress could consider several alternative approaches to funding wildfire suppression, such as establishing a governmentwide or agency- specific reserve account dedicated to funding wildfire suppression activities. Each alternative has advantages and disadvantages with respect to, among other things, reducing the need to transfer funds, creating incentives for agencies to contain suppression costs, and allowing for congressional review. Thus, selecting any alternative would require the Congress to make difficult decisions, including taking into consideration the effect on the federal budget deficit. 
For the past several years, the Forest Service, Interior, and the Congress have made annual wildfire suppression budget and appropriations decisions based on estimates of suppression costs that frequently have substantially understated actual costs. In developing their annual suppression budgets, the Forest Service and Interior use a 10-year average of suppression costs to estimate annual suppression costs. The agencies calculate this estimate up to 2 years in advance of when suppression funds are actually needed. The Congress also uses this estimate in deciding how much to appropriate for wildfire suppression activities. However, since 1990, these annual estimates frequently have understated actual suppression costs by hundreds of millions of dollars, as illustrated in figure 7. In fact, over the last 5 years, the estimates have understated actual suppression costs by about $1.8 billion. This shortfall in funding to cover actual suppression costs has occurred, in part, because the agencies and the Congress developed annual budget requests and made appropriation decisions for suppression activities on the basis of these estimates. In funding suppression activities based on these estimates, the Congress was able to fund, and the agencies were able to address, other program priorities without negatively affecting the federal budget deficit. However, in doing so, the agencies have had insufficient funds to pay for all suppression activities in recent years because of the increase in the number and intensity of wildfires and the costs to suppress them. As a result, the agencies have had to transfer hundreds of millions of dollars from other programs. Alternative methods should be considered for improving the suppression cost estimates that are provided to agency and congressional decision makers for use in estimating and funding wildfire suppression costs. 
Agency officials acknowledged that the 10-year average has substantially understated actual suppression costs in recent years. Although agency officials indicated they have considered alternative methods for improving the forecasts, they believe that the 10-year average is a reasonable and inexpensive way to estimate wildfire suppression costs. However, the usefulness of a 10-year average is limited when actual costs change rapidly from year to year, as they have recently. Furthermore, because the average is presented as a “point estimate” of likely costs instead of in conjunction with a range of cost estimates reflecting the uncertainties of wildfires, it may convey an unwarranted sense of precision to decision makers. For example, as shown in figure 7, recent actual suppression costs have been higher than earlier levels. Agency officials believe that recent abnormal drought conditions have contributed to unusually large and catastrophic wildfires that are much more expensive to suppress than typical fires prevalent for most of the previous 10 years. In addition, over the last few years, the cost of fighting wildfires in the wildland urban interface has risen significantly due to the number of homes built in these areas and the increased resources needed to protect them from wildfires. Also, costs related to the use of aircraft to fight wildfires, especially insurance rates, have increased significantly since September 11, 2001. Alternative methods that more effectively account for annual changes in expenditures and that convey the uncertainties associated with making the forecasts should be considered for improving the information provided to agency and congressional decision makers. For example, an estimate based on a weighted 10-year average, in which more weight in the average is given to recent expenditures relative to older ones, may be more effective in accounting for annual changes in expenditures. 
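One such weighted average can be sketched as follows. The cost series below is hypothetical (rising in recent years, as the report describes), and the linearly increasing weights are one possible weighting choice for illustration, not a method the agencies use:

```python
# Simple vs. linearly weighted 10-year average of suppression costs.
# Costs are hypothetical figures in millions, oldest to newest;
# weights 1..10 give more influence to recent expenditures.

costs = [300, 320, 310, 350, 340, 360, 500, 700, 900, 1_100]

simple_avg = sum(costs) / len(costs)

weights = range(1, len(costs) + 1)  # the newest year gets weight 10
weighted_avg = sum(w * c for w, c in zip(weights, costs)) / sum(weights)

print(round(simple_avg))    # the unweighted estimate
print(round(weighted_avg))  # tracks the recent, higher costs more closely
```

Because recent costs in the series are higher than earlier ones, the weighted estimate exceeds the simple average, better reflecting the upward trend; pairing either point estimate with a range (for example, from a low-weight and a high-weight scheme) would also convey the underlying uncertainty to decision makers.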
This information could provide agency and congressional decision makers with more useful data to develop budget requests and fund suppression activities at a level that reduces the need for funding transfers and subsequent reimbursements. However, in doing so, higher estimated costs for suppression could result, at least in the near term. In this context, the Congress would have to make difficult decisions about whether to increase funding for wildfire suppression to more closely reflect estimated costs, and, if so, whether to reduce appropriations to other government programs in order to avoid adding to the federal budget deficit. In addition to the agencies refining their estimates of suppression costs, the Congress also could consider alternative funding approaches to further mitigate the effects of funding transfers on agency programs and reduce the need to provide supplemental appropriations. For example, the Congress might consider creating an emergency reserve account that is governmentwide or agency-specific, and that provides a specific amount of funds when the reserve is created or allows for as much funding as is necessary. Each alternative has advantages and disadvantages related to influencing the need for transferring funds, creating incentives for the agencies to contain suppression costs, and allowing for congressional review. We previously issued two reports, and the Congressional Budget Office issued testimony, that presented various alternatives for funding wildfire suppression and other emergency needs. Some of the alternatives presented in these reports and testimony are summarized in table 2 and described below: Reserve accounts provide early recognition that there will likely be a demand on federal resources for natural disasters, thus providing greater transparency in the budget process. The greater the amount of funds in the reserve account, the less likely agencies would need to transfer funds from other programs. 
Reducing the need to transfer funds would mitigate the need for supplemental appropriations that have added hundreds of millions of dollars to the federal budget deficit. However, the greater the amount of funds in the reserve account, the more difficult it would be for the Congress to limit total government spending. On the other hand, if the Congress limited the amount of funds appropriated for wildfire suppression, including the amount in the reserve account, there would be a greater chance that the agencies would need to transfer funds, and the Congress would need to reimburse the transfers through supplemental appropriations. The amount and accessibility of funds in the reserve account also may affect the agencies’ incentives to contain the costs of suppression activities. However, the effect of such incentives would likely be limited, given that many unpredictable and uncontrollable factors affect the costs of fire suppression activities. The Congress could create a governmentwide reserve account into which funds normally appropriated to agencies having responsibility for addressing unforeseen situations and emergencies would be appropriated. These agencies would include not only the Forest Service and Interior, but also the Federal Emergency Management Agency and the Department of Defense, among others. Combining the emergency funds of all these agencies into one account might alleviate the need for supplemental appropriations, because in any given year an increase in spending for one agency may be offset by a lower than usual spending by another agency. Without supplemental appropriations, there would be no increase in the budget deficit. A possible disadvantage of using a governmentwide reserve that is funded annually is that it could produce the expectation that the entire fund should be spent each year and, as the year progresses, claims on the fund might increase. 
Similarly, a governmentwide reserve might not provide incentives for agencies to contain the costs of wildfire suppression. A governmentwide reserve account could be created using funds designated as no-year money, so that funds not spent in a given year remain in the account for use in following years. Under such an account, there would be no incentive to spend the entire fund each year. To further control the use of the reserve account, the agencies’ access to the fund could be tied to specific criteria. Criteria could parallel those previously offered by OMB in designating funds as an emergency requirement; namely, that the emergency (1) require a necessary expenditure—an essential or vital expenditure, not one that is merely useful or beneficial; (2) occur suddenly—quickly coming into being, not building up over time; (3) be urgent—a pressing and compelling need requiring immediate action; (4) be unforeseen—not predictable or anticipated as a coming need; and (5) not be permanent—the need to fund is temporary. Nevertheless, whether the funds are designated as no-year or not, additional funding could still be needed at year-end. If so, the agencies would need to transfer funds from other program accounts, and the Congress would have to choose between providing supplemental appropriations to reimburse the funding transfers—which would add to the federal budget deficit—or providing no reimbursements. In such cases, even if the agencies did need to transfer funds, the amount transferred would be less than it would have been without the reserve. Another approach for funding wildfire suppression activities cited in one of our earlier reports is to establish agency-specific reserve accounts for those agencies that regularly respond to federal emergencies and require those agencies to satisfy criteria similar to the OMB criteria previously described, before the funds are released. 
Agency-specific reserve accounts could be funded through a permanent, indefinite appropriation, which would provide as much funding as needed for specific purposes and would always be available for those purposes without any further action by the Congress. A permanent, indefinite appropriation would eliminate the need to transfer funds from other programs and to provide supplemental appropriations to reimburse funding transfers. A disadvantage of an indefinite appropriation is that if actual expenditures exceed the estimates, the federal budget deficit will be greater than anticipated. A disadvantage of a permanent appropriation is that it would lessen the opportunity for the Congress to regularly review the efficiency and effectiveness of fire suppression activities, because such reviews are typically conducted during the annual appropriations process. Alternatively, funding for agency-specific reserve accounts could be provided through a current, indefinite appropriation, which provides as much funding as needed for the current fiscal year. Funding wildfire suppression using a current, indefinite appropriation would allow the Congress to periodically review suppression activities through the annual appropriations process since the Congress would appropriate reserve funds each year. However, an indefinite appropriation could still result in higher than estimated costs and a higher than anticipated federal budget deficit. Additionally, any indefinite appropriation would have no inherent incentives for the agencies to contain suppression costs because the funding level would be unlimited. Agency-specific reserve accounts also could be funded by a definite appropriation with a specific amount of funds, not to be exceeded in a given year. With such limits, there would be an incentive for the agencies to contain suppression costs. 
As with a current, indefinite appropriation, the Congress could review suppression activities each year during its annual appropriations process. This alternative also could avoid increasing the federal budget deficit if appropriations to other agency program accounts were reduced by an amount corresponding to the amount in the reserve. However, should suppression costs be higher than the amount provided in the reserve account for the current year, a decision would need to be made on whether to transfer funds from other agency programs and, if so, whether to reimburse the funding transfers with a supplemental appropriation that would increase the federal budget deficit. Recently, the Senate Committee on the Budget has proposed an option for funding wildfire suppression activities in its resolution on the budget for fiscal year 2005. The resolution would provide for a reserve account funded through a definite appropriation of up to $500 million in additional annual funding for fiscal years 2004 through 2006. The funds in the account would be available to the Forest Service and Interior for fire suppression activities only if (1) the agencies are initially appropriated funds equal to or greater than the 10-year average of wildfire suppression costs and (2) the initial appropriations are insufficient to cover actual costs. Such an alternative would add to the federal budget deficit, unless the $500 million was reduced from other Forest Service, Interior, or other governmentwide programs when the Congress initially develops the federal budget. Further, if the funds in this account were sufficient to pay for all wildfire suppression activities above the 10-year average of suppression costs, there would be no need for the Forest Service or Interior to transfer funds from other program accounts. 
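The two-condition trigger in the proposed reserve can be expressed as a short rule. The sketch below is hypothetical: only the $500 million cap and the two eligibility conditions come from the resolution as described above, while the function name and dollar figures are invented for illustration:

```python
# Hypothetical sketch of the two-condition trigger described in the
# text for the proposed reserve. Only the $500 million cap and the two
# eligibility conditions come from the resolution as summarized above;
# the function name and dollar figures are invented.

RESERVE_CAP = 500  # $ millions, per the proposed resolution

def reserve_draw(initial_appropriation, ten_year_average, actual_costs):
    """Return the amount the reserve would cover, or 0 if ineligible."""
    # Condition (1): initial appropriation must be at least the 10-year average.
    if initial_appropriation < ten_year_average:
        return 0
    # Condition (2): the initial appropriation must be insufficient.
    shortfall = actual_costs - initial_appropriation
    if shortfall <= 0:
        return 0
    return min(shortfall, RESERVE_CAP)

# A severe season: costs far exceed the appropriation, so the reserve
# pays out its full cap and the remaining shortfall would still have
# to be met by transfers from other programs.
print(reserve_draw(initial_appropriation=600,
                   ten_year_average=600,
                   actual_costs=1400))
```

The `min(shortfall, RESERVE_CAP)` line captures why, as the report notes for 2002 and 2003, transfers would still have been necessary in severe seasons, only to a lesser extent.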
Had there been a $500 million reserve account available for wildfire suppression over the last 5 years, transfers would still have been necessary, but to a lesser extent, because suppression costs greatly exceeded the 10-year average in the extensive fire seasons in 2002 and 2003. During our visits with agency officials, we also discussed various other ideas for acquiring additional revenues to help pay for wildfire suppression. One idea was to charge fees to visitors and to state, local, and private entities that use federal land and resources, or to people who own property adjacent to federal forest land. For example, agencies could place a surcharge on existing user fees at national forests, parks, and other federal lands and use the additional revenue to help fund wildfire suppression. Another idea was to establish a special fund, similar to the K-V Fund, whose revenues would be dedicated to wildfire suppression. Revenues accruing to such a fund could come from fees charged for state, local, or private use of federal lands and their resources. Still another option was to levy a surcharge on the federal taxes of property owners living in the wildland urban interface. Some other, more unconventional methods for mitigating the federal share of wildfire suppression costs also were discussed, such as allowing private companies to “sponsor” fire suppression efforts by providing funding as a measure of corporate goodwill to the local community. The advantage of all of these options would be to reduce the federal government’s burden to pay for fire suppression. Because the Forest Service and Interior do not have the authority to increase funding for suppression over the amount provided in appropriations, any of these options would require congressional action. Further, all of these options could strain agency relations with the public and others. 
Wildfires burn millions of acres of federal land every year, and the Forest Service and Interior spend billions of dollars suppressing them. In doing so, the agencies must balance the goal of protecting lives, property, and resources against the goal of containing costs. Transferring funds from other agency programs has helped fund needed wildfire suppression activities but not without a cost. These transfers have had widespread negative effects on Forest Service and Interior programs, projects, relationships, and management. In addition, the subsequent repayment of transfers with supplemental appropriations has added hundreds of millions of dollars to the federal budget deficit. These effects are likely to increase should funding transfers continue to be necessary in the future. Notwithstanding the uncertainties and difficulty of accurately estimating wildfire suppression costs, there are a number of factors that exacerbate the problem of transferring funds to help suppress wildfires. First, the methodology the agencies use to estimate suppression costs and determine their budgets is flawed because it does not adequately account for recent increases in the costs to suppress wildfires. Without this information, the Congress may have insufficient information to make prudent funding decisions. Second, the estimates generated by the monthly forecasting models have been inaccurate and did not provide a sound basis for deciding if, and to what extent, funding transfers were needed. Third, the agencies have inadequate information to understand the effects that transfers are having on their programs. As such, they are not well positioned to report the impacts to the Congress or make informed decisions about future transfers. Finally, the Forest Service’s method for estimating salary costs for the remainder of the fiscal year without adequately consulting with local forest units is problematic. 
Consequently, Forest Service headquarters officials do not have sufficiently accurate data to make transfer decisions and preclude agency programs from going into deficit. Because of the difficulty of accurately estimating suppression costs and the budget implications of providing additional funding for suppression, it is likely that suppression funding shortfalls will continue in the future. To minimize the budgetary implications, the intended goal should be to achieve an appropriate balance between the shortfall and the impacts that transfers will have on agency programs. Despite the best efforts to achieve this balance, there will be times when the size of the shortfall will create problems and impacts to important programs. Currently, there is no budgeting or funding mechanism that can help mitigate these impacts. Consequently, the agencies are forced to make difficult decisions to fund wildfire suppression at the expense of meeting other important programmatic goals. To help minimize the impacts of wildfire funding transfers on other agency programs and to improve the agencies’ budget estimates for wildfire suppression costs, we are recommending that the Secretaries of Agriculture and the Interior direct the Forest Service and Interior agencies to work together to improve their methods for estimating annual wildfire suppression costs by more effectively accounting for annual changes in costs and the uncertainties associated with wildfires in making these estimates, so that funding needs for wildfire suppression can be predicted with greater accuracy; annually conduct a formal assessment of how the agencies’ methods for estimating annual suppression costs and their monthly forecasting models performed in estimating wildfire suppression costs relative to actual costs, to determine if additional improvements are needed; and consistently track accomplishment shortfalls caused by funding transfers across all programs and include this information in annual 
accomplishment reports to provide agency decision makers and the Congress with better information for making wildfire suppression transfer and funding decisions. In addition, to more accurately determine the amount of funds available to transfer for wildfire suppression, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to estimate remaining salary needs for the fiscal year by consulting with local forest officials to obtain more current, specific payroll information, so that the risk of programs going into deficit can be reduced. To reduce the potential need for the Forest Service and Interior to rely on transferring funds from other programs to pay for wildfire suppression on public lands, the Congress could consider alternative funding approaches for wildfire suppression, such as, but not limited to, establishing a governmentwide or agency-specific emergency reserve account. We provided a draft of this report to the Secretaries of Agriculture and the Interior for review and comment. In responding, the Forest Service generally concurred with our findings and recommendations, and Interior concurred with our findings, but both agencies expressed concerns about our recommendation that they pursue alternative methods for estimating suppression costs. Both the Forest Service and Interior provided written comments, which are included in appendixes IV and V, respectively. Concerning our recommendation that the agencies improve their methods for estimating annual wildfire suppression costs, Interior commented that the current method—relying on the 10-year average of suppression costs— has proved to be “a reasonable and durable basis for suppression budgeting.” In support of this point, they noted that between 1995 and 1998, their actual suppression costs were below the 10-year average in three seasons. 
While we do not dispute this fact, we disagree that using the 10-year average has been “a reasonable and durable basis” for budgeting for suppression costs. As noted in our report, since 1990, the agencies’ reliance on the 10-year average has frequently resulted in annual budget estimates well below actual suppression costs. For Interior, the 10-year average was below actual costs in 8 of the 14 years since 1990; for the two agencies together, the 10-year average was below actual costs in 11 of the 14 years. Further, in the years when the average has understated actual costs, the difference has frequently been significant. Over the last 5 years, the 10-year average has understated the two agencies’ actual suppression costs by a total of about $1.8 billion. The Forest Service, in commenting on the use of the 10-year average, recognized the weaknesses associated with using the average to estimate annual wildfire suppression costs and noted the agency has looked into other methods that could more accurately predict future suppression costs. Some of the methods considered included using a 5-year average and inflating the historical costs to current dollar values. The Forest Service also noted that agency officials have discussed various modeling methods with researchers who said they could design a very expensive, complex model that would be more accurate than the 10-year average. We support the Forest Service for taking this initial step and encourage the agency to continue its efforts to identify and implement a cost-effective method for improving their estimates of annual suppression costs. As noted in our report, alternative methods that more effectively account for annual changes in expenditures and that convey the uncertainties associated with making the forecasts should be considered. The Forest Service also noted that our report does not address the potential consequences associated with not making the funding transfers. 
These negative impacts could include (1) not having adequate personnel and equipment, (2) an increase in the number of acres burned, and (3) an increase in the loss of homes and other property. While we believe that such impacts could result if funding transfers did not occur, the objective of our report is to identify the effects on Forest Service and Interior programs from which funds were actually transferred. In addition, Interior noted that shifting funds from one program to another within the wildland fire management account does not constitute a transfer, and, as such, we were incorrect in saying that Interior transferred funds from wildland fire programs. However, as noted in footnote 2, for ease of explanation throughout the report, we use the word “transfer” to refer both to the transfer of funds from one appropriation account to another and to the reprogramming of funds between programs within a single appropriation account. In either situation, the program from which the funds were taken is affected. The agencies also provided other comments and technical clarifications on the draft that we incorporated into the report where appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Chairman, Senate Committee on Energy and Natural Resources; the Chairman and Ranking Minority Member, House Committee on Resources; the Chairman and Ranking Minority Member, Subcommittee on Forests and Forest Health, House Committee on Resources; and other interested congressional committees. 
We will also send copies of this report to the Secretary of Agriculture; the Secretary of the Interior; the Chief of the Forest Service; the Directors of the Bureau of Land Management, the National Park Service, and the Fish and Wildlife Service; the Acting Director, Bureau of Indian Affairs; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix VI. To determine the amount and the programs from which the U.S. Forest Service and the Department of the Interior transferred funds from 1999 through 2003, we collected data from the agencies’ headquarters on funds transferred and reimbursed by agency, program, and year. We identified the procedures the agencies follow when transferring and reimbursing funds by obtaining and reviewing agency strategy and planning documents and discussing the procedures actually used with agency officials in headquarters, regional offices, and local units. We also interviewed agency officials about the internal controls they use to carry out these procedures. In addition, we contacted budget officers at the Forest Service’s nine regional offices and obtained information on the amounts transferred and reimbursed to their units. Where appropriate, we also met with officials from the Interior agencies that are involved with wildfire suppression activities—the Bureau of Indian Affairs, Bureau of Land Management, U.S. Fish and Wildlife Service, and National Park Service. We interviewed budget officers in Forest Service and Interior headquarters about the financial systems they use to ensure the accuracy of the amount of funds transferred and reimbursed. 
We also interviewed Office of Management and Budget (OMB) officials to obtain their views on the reliability and completeness of the data they receive from each agency, as well as the adequacy of the agencies’ internal procedures to generate and track these data. Although we relied primarily on agency data, we compared these data with budget documents that corroborated the amounts transferred and reimbursed, where possible. We took appropriate measures to ensure that the Forest Service and Interior data on the amount of funds transferred and reimbursed and on actual suppression costs were sufficiently reliable for our purposes, and that the internal procedures at the Forest Service and Interior were sufficient to generate these data. In addition, we used the Gross Domestic Product Price Index to adjust dollars for inflation. To identify the impacts on agency programs from which funds were transferred, we interviewed Forest Service and Interior headquarters officials with responsibility for the affected programs. We also visited six Forest Service regional offices and seven national forests, contacted an additional 14 national forests, and visited seven Interior field offices. Although we did not visit all Forest Service regions, we chose a nonprobability sample of regions that reflected a range of funds transferred as well as the geographic diversity of program impacts (see table 3). Where appropriate, we also met with Interior field offices, grant recipients, a state forester, and representatives of nonprofit organizations who were collocated in the Forest Service regions visited. In addition, we contacted national forest officials in each region we visited and obtained detailed information regarding the specific impacts to their programs and projects. We interviewed representatives of affected programs in both regional and national forest offices. 
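The inflation adjustment mentioned above is a simple index ratio. The sketch below is illustrative; the index values are invented placeholders, not actual GDP Price Index data:

```python
# Illustrative sketch of adjusting nominal dollars for inflation with a
# price index, as the report does using the GDP Price Index. The index
# values below are invented placeholders, not actual index data.

price_index = {1999: 92.0, 2000: 94.0, 2001: 96.2, 2002: 97.7, 2003: 100.0}

def to_constant_dollars(amount, year, base_year=2003):
    """Convert a nominal amount to base-year dollars via an index ratio."""
    return amount * price_index[base_year] / price_index[year]

# $100 spent in 1999, restated in (hypothetical) 2003 dollars.
print(round(to_constant_dollars(100.0, 1999), 1))
```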
We collected documents that listed the projects deferred or canceled due to transfers; obtained information on the cost of the impact to some affected projects; and—in some instances—conducted site visits to affected project locations. In our review of impacts, we focused on fiscal years 2002 and 2003 because in these 2 fiscal years transfers for wildfire suppression involved many more programs than they did previously. In reviewing the agencies’ methods for estimating suppression costs, we discussed the details of each method with agency officials responsible for developing the estimates. We reviewed the agencies’ current estimation methodology, compared the estimates with actual costs and discussed the reasons for differences between them with agency officials, and identified alternatives for estimating suppression costs. In reviewing alternative approaches for funding wildfire suppression, we reviewed previous GAO and Congressional Budget Office reports, as well as a Forest Service study related to budgeting for emergencies, and discussed alternative funding options with agency officials. We also obtained the views of OMB officials on other appropriation approaches for funding wildfire suppression. In addition, we analyzed Forest Service and Interior budget documents, congressional appropriations documents, and agency suppression cost forecasting models. We performed our work between July 2003 and March 2004 in accordance with generally accepted government auditing standards. These tables summarize the amount of funds transferred from and reimbursed to Forest Service and Interior programs from 1999 through 2003. Table 4 summarizes the funds transferred from major Forest Service programs and from the construction and land acquisition programs, as well as various fire programs, within Interior’s four agencies that have responsibility for wildfire suppression activities. Table 5 summarizes the amount of funds reimbursed to these programs over the 5-year period. 
The information presented in the tables was obtained from Forest Service and Interior budget documents. These tables include information on funds made available for transfers, by Forest Service region. Table 6 summarizes information on funds made available for transfers by region for each year from 1999 through 2003. Table 7 summarizes information on funds made available for transfers by major Forest Service program and by region, aggregated over the 5-year period. Table 8 summarizes information on funds made available for transfers as a percentage of overall budget authority for each Forest Service region and by major program for 2002. In addition to the individual named above, Nathan Anderson, Paul Bollea, Christine Bonham, Christine Colburn, John Delicath, Timothy Guinane, and Richard Johnson made key contributions to this report. | In 2003, wildfires burned roughly 4 million acres, destroyed over 5,000 structures, took the lives of 30 firefighters, and cost over $1 billion to suppress. The substantial expense of fighting wildfires has exceeded the funds appropriated for wildfire suppression nearly every year since 1990. To pay for wildfire suppression costs when the funds appropriated are insufficient, the U.S. Forest Service and the Department of the Interior have transferred funds from their other programs. GAO was asked to identify (1) the amount of funds transferred and reimbursed for wildfire suppression since 1999, and the programs from which agencies transferred funds; (2) the effects on agency programs from which funds were taken; and (3) alternative approaches that could be considered for estimating annual suppression costs and funding wildfire suppression. The Forest Service and Interior transferred over $2.7 billion from other agency programs to help fund wildfire suppression over the last 5 years. On average, the Congress reimbursed agencies about 80 percent of the amounts transferred. Interior primarily used funds from its construction and land acquisition accounts. In recent years, the Forest Service used funds from many different programs, while before 2001, it transferred funds from a single reforestation program/timber sale area restoration trust fund. Transferring funds for wildfire suppression resulted in canceled and delayed projects, strained relationships with state and local agency partners, and difficulties in managing programs. These impacts affected numerous activities, including fuels reduction and land acquisition. 
Although transfers were intended to aid fire suppression, some projects that could improve agency capabilities to fight fires, such as purchasing additional equipment, were canceled or delayed. Further, agencies' relationships with states, nonprofit groups, and communities were negatively impacted because agency officials could not fulfill commitments, such as awarding grants. Transfers also disrupted the agencies' ability to manage programs, including annual and long-term budgeting and planning. Although the agencies took some steps to mitigate the impacts of transfers, the effects were widespread and will likely increase if transfers continue. To better manage the wildfire suppression funding shortfall, the agencies should improve their methods for estimating suppression costs by factoring in recent changes in the costs and uncertainties of fighting wildfires. Also, the Congress could consider alternative funding approaches, such as establishing a governmentwide or agency-specific reserve account. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Funds that support terrorist activity may come from illicit activities, such as counterfeit goods, contraband cigarettes, and illicit drugs, but are also generated through means such as fundraising by legal non-profit entities. According to State, it is the terrorists’ use of social and religious organizations and, to a lesser extent, state sponsorship, which differentiates their funding sources from those of traditional transnational organized criminal groups. While actual terrorist operations require only comparatively modest funding, international terrorist groups need significant amounts of money to organize, recruit, train, and equip new adherents and to otherwise support their activities. Simply put, the financing of terrorism is the financial support, in any form, of terrorism or of those who encourage, plan, or engage in it. Some international experts on money laundering continue to find that there is little difference in the methods used by terrorist groups or criminal organizations in attempting to conceal their proceeds by moving them through national and international financial systems. These experts simply define the term “money laundering” as the processing of criminal proceeds to disguise their illegal origin in order to legitimize their ill-gotten gains. Disguising the source of terrorist financing, regardless of whether the source is of legitimate or illicit origin, is important to terrorist financiers. If the source can be concealed, it remains available for future terrorist financing activities. The President established a Policy Coordination Committee under the auspices of the National Security Council (NSC) to ensure the proper coordination of counter-terrorism financing activities and information sharing among all agencies, including the departments of Defense, Justice, Homeland Security, State, and the Treasury, as well as the intelligence and enforcement community. Treasury’s Office of Foreign Assets Control (OFAC) is the lead U.S. 
agency for administering economic sanctions, including blocking the assets of terrorists designated either by the United States unilaterally, bilaterally, or as a result of UN Security Council Resolution designations. The international community has acted on many fronts to conduct anti-money laundering and counter-terrorism financing efforts. For example, the UN has adopted treaties and conventions that, once signed, ratified, and implemented by member governments, have the effect of law and enhance their ability to combat money laundering and terrorist financing. The Financial Action Task Force (FATF), an intergovernmental body, has set internationally recognized standards for developing anti-money laundering and counter-terrorism financing regimes and conducting assessments of countries’ abilities to meet these standards. In addition, the Egmont Group serves as an international network fostering improved communication, information sharing, and training coordination for 101 Financial Intelligence Units (FIUs) worldwide. See appendix II for more information on key international entities and efforts. Countries vulnerable to terrorist financing activities generally lack key aspects of an effective counter-terrorism financing regime. According to State officials, a capable counter-terrorism financing regime consists of five basic elements: an effective legal framework, financial regulatory system, FIU, law enforcement capabilities, and judicial and prosecutorial processes. To strengthen anti-money laundering and counter-terrorism efforts worldwide, international entities such as the UN, FATF, the World Bank, and the IMF, as well as the U.S. government, agree that each country should implement practices and adopt laws that are consistent with international standards. U.S. 
government agencies participate in a number of interdependent efforts to address the transnational challenges posed by terrorist financing, including terrorist designations, intelligence and law enforcement, international standard setting, and training and technical assistance. U.S. agencies participate in global efforts to publicly designate individuals and groups as terrorists and block access to their assets. According to Treasury officials, international cooperation to designate terrorists and block their assets is important because most terrorist assets are not within U.S. jurisdiction and may cross borders. According to U.S. government officials, public designations discourage further financial support and encourage other governments to more effectively monitor the activities of the designated individual or organization. Importantly, designations may lead to the blocking of terrorist assets, thereby impeding terrorists’ ability to raise and move funds and possibly forcing terrorists to use more costly, less efficient, more transparent, and less reliable means of financing. U.S. agencies, led by State, have worked with the UN to develop and support UN Security Council resolutions to freeze the assets of designated terrorists. For example, in October 1999, the Security Council adopted UN Security Council Resolution 1267, which called on all member states to freeze the assets of the Taliban, and in December 2000, the Security Council adopted Resolution 1333, imposing targeted sanctions against Osama bin Laden and al Qaeda. Then, in response to the attacks of September 11, 2001, the UN Security Council adopted Resolution 1373, which required all UN member states to freeze funds and other financial assets or economic resources of persons who commit or attempt to commit, participate in, or facilitate terrorist acts.
Later, in January 2002, the UN Security Council adopted Resolution 1390, which consolidated the sanctions contained in Resolutions 1267 and 1333 against the Taliban, Osama bin Laden, and al Qaeda. In July 2005, the Security Council adopted Resolution 1617, which extends sanctions against al Qaeda and the Taliban and strengthens previous related resolutions. The UN has listed over 300 individuals and over 100 entities for worldwide asset blocks. Additionally, State’s Bureau of International Organization Affairs ensures that designations related to al Qaeda, the Taliban, or Osama bin Laden are made worldwide obligations through the UN Security Council Resolution 1267 Committee. The bureau also helped craft UN Security Council Resolution 1373, aided its adoption, and assisted in the creation of the UN Counterterrorism Committee to oversee its implementation. The United States has also participated in bilateral efforts to designate terrorists. For example, as of July 2005, the United States and Saudi Arabia had jointly designated over a dozen Saudi-related entities and multiple individuals as terrorists or terrorist supporters, according to State. U.S. agencies, including the Departments of Homeland Security (Homeland Security), Justice, State, and Treasury, and other law enforcement and intelligence agencies, have implemented an interagency process to coordinate designating terrorists and blocking their assets. For example, State’s Economic Bureau coordinates policy implementation at the working level, largely through the network of Terrorism Finance Coordinating Officers located at embassies worldwide. Through this interagency coordination, the agencies work together to develop adequate evidence to target individuals, groups, or other entities suspected of terrorism or terrorist financing.
As the lead agency for the blocking of assets of international terrorist organizations and terrorism-supporting countries, Treasury’s OFAC compiles the evidence needed to support terrorist designations conducted under the Secretary of the Treasury’s authority. State’s Office of the Coordinator for Counterterrorism follows the same process for terrorist designations conducted under the Secretary of State’s authority. State’s Bureau of International Organization Affairs may present this evidence to the UN for consideration by its members. According to a senior State official, the agencies work together on a regular basis to examine and evaluate new names and targets for possible designation and asset blocking and to consider other actions such as diplomatic initiatives with other governments and exchanging information on law enforcement and intelligence efforts. The U.S. strategy to combat terrorist financing abroad includes law enforcement techniques and intelligence operations aimed at identifying criminals and terrorist financiers and their networks across borders in order to disrupt and dismantle their organizations. Such efforts include intelligence gathering, investigations, diplomatic actions, sharing information and evidence, apprehending suspects, criminal prosecutions, asset forfeiture, and other actions designed to identify and disrupt the flow of terrorist financing. According to State, in order to achieve results, the intelligence community, law enforcement, and the diplomatic corps must develop and exploit investigative leads, employ advanced law enforcement techniques, and increase cooperation between domestic and foreign financial investigators and prosecutors. U.S. intelligence and law enforcement agencies work together and with foreign counterparts abroad, sometimes employing interagency or intergovernmental investigative task forces. U.S.
agencies work domestically and through their embassy attachés or officials, or send agents on temporary duty, to work with their foreign counterparts on matters of terrorist financing, including investigations. The Federal Bureau of Investigation is the lead domestic law enforcement agency on counter-terrorism financing and makes extensive contributions to law enforcement efforts abroad, including through its legal attachés. Homeland Security’s Bureau of Immigration and Customs Enforcement attachés and agents conduct work in trade-based money laundering and the transporting of cash across borders. The Internal Revenue Service’s Criminal Investigation Division has expertise in nonprofit organizations. The Drug Enforcement Administration focuses on the narcotics trafficking nexus. Moreover, Treasury’s Financial Crimes Enforcement Network (FinCEN) is the U.S. government’s FIU and, as such, serves as the U.S. government’s central point for the collection, analysis, and dissemination of financial intelligence to authorized domestic and international law enforcement and other authorities. Financial intelligence is sent through secured lines among the FIUs belonging to the Egmont Group and shared with law enforcement as part of these investigations. The U.S. government has taken an active role in the development and implementation of international standards to combat terrorist financing. The UN conventions and resolutions and FATF recommendations on money laundering and terrorist financing have set the international standards for countries to develop the legal framework, financial regulation, financial intelligence unit, law enforcement, and judicial/prosecutorial elements of an effective counter-terrorist financing regime. Importantly, international cooperation is a cornerstone of these international standards.
The United States has signed each of the relevant UN conventions and implemented its obligations pursuant to UN Security Council Resolutions related to anti-money laundering and counter-terrorism financing. According to State and Justice officials, they have provided training on implementing the conventions, and State officials have drafted UN Security Council Resolutions concerning terrorist financing. For example, according to State, officials from Treasury and State met with the UN Security Council Resolution 1267 Committee in January 2005 to detail U.S. implementation of the resolution’s asset freeze, travel ban, and arms embargo provisions and proposed several ideas aimed at reinforcing current sanctions, including enhancing the sanctions list, promoting international standards, and improving bilateral and multilateral cooperation. The U.S. government also plays a major role within FATF to draft and support international standards to combat terrorist financing. Treasury’s Office of Terrorism and Financial Intelligence chairs the U.S. delegation to the FATF and has chaired or co-chaired several FATF working groups, such as the FATF Working Group on International Financial Institution Issues and the FATF Working Group on Terrorist Financing. Treasury also develops U.S. positions, represents the United States at FATF meetings, and implements actions domestically to meet the U.S. commitment to the FATF. Other components within Treasury, such as FinCEN, and other U.S. government agencies, including Homeland Security, Justice, and State, and the federal financial regulators, are also represented in the U.S. delegation to FATF. For example, according to department officials, the Department of Justice provided the initial draft for the original eight FATF special recommendations on terrorist financing.
Additionally, Homeland Security gave significant input into Special Recommendation IX on Cash Couriers due to the department’s expertise in detecting criminals’ cross-border movements of cash. Moreover, the U.S. government supports efforts to ensure that countries take steps to meet FATF standards. As a member of FATF, the United States participates in mutual evaluations in which each member’s compliance with the FATF recommendations is examined and assessed by experts from other member countries. Treasury also leads U.S. delegations to FATF-style regional bodies to assist their efforts to support implementation of FATF recommendations and conduct mutual evaluations. The U.S. strategy to combat terrorist financing abroad includes efforts to provide training and technical assistance to countries that it deems vulnerable to terrorist financing and focuses on the five basic elements of an effective anti-money laundering/counter-terrorism financing regime (legal framework, financial regulation, FIU, law enforcement, and judicial and prosecutorial processes). According to State, its Office of the Coordinator for Counterterrorism is charged with directing, managing, and coordinating all U.S. government agencies’ efforts to develop and provide counter-terrorism financing programs. The NSC established the State-led interagency TFWG to coordinate the delivery of training and technical assistance to the countries most vulnerable to terrorist financing. These countries are known as priority countries, of which there are currently about two dozen. According to State’s Office of the Coordinator for Counterterrorism, foreign allies inundated the U.S. government with requests for assistance; therefore, TFWG developed a process to prioritize the use of limited financial and human resources. Although other vulnerable countries may be assisted through other U.S.
government programs as well as through TFWG, according to State, based on NSC guidance, overall coordination is to take place through the TFWG process. (See appendix III for TFWG membership and process.) TFWG schedules assessment trips, reviews assessment reports, evaluates training proposals, and assigns resources for training. According to State officials, the U.S. government has conducted 19 needs assessment missions and provided training and technical assistance in at least one of the five areas of an anti-money laundering/counter-terrorist financing regime to over 20 countries. U.S. offices and bureaus, primarily within the departments of the Treasury, Justice, Homeland Security, and State, and the federal financial regulators provide training and technical assistance to countries requesting assistance through various programs using a variety of methods primarily funded by State and Treasury. Methods include training courses, presentations at international conferences, the use of overseas regional U.S. law enforcement academies or U.S.-based schools, and the placement of intermittent or long-term resident advisors for a range of subject areas related to building effective counter-terrorism and anti-money laundering regimes. For example, Justice provides technical assistance on drafting legislation that criminalizes terrorist financing and money laundering. Treasury’s Office of Technical Assistance (OTA) provides assistance to strengthen the financial regulatory regimes of countries. In addition, Treasury’s FinCEN provides training and technical assistance, including assistance in the development of FIUs, information technology assessments, and specialized analytical software and analyst training for foreign FIUs. (See appendix IV for key U.S. counter-terrorism financing and anti-money laundering training and assistance for vulnerable countries.) According to State, the U.S.
government has also worked with international donors and organizations to leverage resources to build counter-terrorism financing regimes in vulnerable countries. According to State officials, they have worked with the United Kingdom, Australia, Japan, the European Union, the Organization of American States, the Asian Development Bank (ADB), IMF, and the World Bank on regional and country-specific projects. According to State, they have also funded the UN Global Program Against Money Laundering to place a mentor in one country for a year to assist with further development of its FIU. Similarly, Treasury officials said the department funded a resident advisor to the ADB as part of the Cooperation Fund for the Regional Trade and Financial Security Initiative. Treasury officials also state that they have coordinated bilateral and international technical assistance with the FATF and the international financial institutions, such as the World Bank and IMF, encompassing the drafting of legal frameworks, building necessary regulatory and institutional systems, and developing human expertise. According to State officials, efforts to share identified priorities and coordinate assistance by the major donor countries took a step forward at the June 2003 G-8 Summit with the establishment of the Counter-Terrorism Action Group, of which the United States is a member. The Counter-Terrorism Action Group has partnered with the FATF, providing that organization with a list of countries to which its members are interested in providing counter-terrorism financing assistance, so that the FATF could assess their technical assistance needs. FATF delivered those assessments to the Counter-Terrorism Action Group in 2004 and, according to State officials, the donors are now beginning to follow through with assistance programs. The U.S.
government lacks an integrated strategy to coordinate the delivery of counter-terrorism financing training and technical assistance to countries vulnerable to terrorist financing. The effort does not have key stakeholder buy-in on roles and practices, a strategic alignment of resources with needs, or a process to measure and improve performance. As a result, the effort lacks effective leadership and consistent practices, an optimal match of resources to needs, and feedback on performance into the decision-making process. U.S. interagency efforts to coordinate the delivery of counter-terrorism financing training and technical assistance lack key stakeholder involvement and acceptance of roles and procedures. As a result, the overall effort lacks effective leadership, which leads to less than optimal delivery of training and technical assistance to vulnerable countries, according to agency officials. We have previously found that building a collaborative management structure across participating organizations is an essential foundation for ensuring effective collaboration, and strong leadership is critical to the success of intergovernmental initiatives. Moreover, involvement by leaders from all levels is important for maintaining commitment. Treasury, a key stakeholder, does not accept State’s position that State leads all U.S. counter-terrorism financing training and technical assistance efforts, and disagreements continue between some Treasury and State officials concerning current TFWG coordination efforts. According to State officials, State leads the U.S. effort to provide counter-terrorism financing training and technical assistance to all countries the U.S. government deems vulnerable to terrorist financing. State bases its position on classified NSC documents focused primarily on TFWG, State documents, and authorizing legislation.
Treasury, an agency that also funds as well as provides training and technical assistance, asserts that State overstates its role; according to Treasury, State’s role is limited to coordinating other U.S. agencies’ provision of counter-terrorist financing training and technical assistance in commonly agreed upon TFWG priority countries, and there are numerous other efforts outside of State’s purview. Justice, an agency that provides training and technical assistance and receives funding from State, says that it respects the role State plays as the TFWG chairman and coordinator and that all counter-terrorism financing training and technical assistance efforts should be brought under the TFWG decision-making process. While supportive, Justice’s statement demonstrates that the span of State’s role lacks clarity and recognition in practice. Two senior Treasury OTA officials said they strongly disagree with the degree of control State asserts over decisions at the State-led TFWG regarding the delivery of training and technical assistance. According to a Treasury Terrorist Financing and Financial Crimes (TFFC) Senior Policy Advisor who attends TFWG, in practice the TFWG process is broken and State creates obstacles rather than coordinates efforts. According to officials from State’s Office of the Coordinator for Counterterrorism, who chair TFWG, the only problems are the lack of Treasury’s TFFC and OTA officials’ acceptance of State’s leadership over counter-terrorism financing efforts and separate OTA funding. Legislation authorizing the Departments of State and Treasury to conduct counter-terrorism financing training and technical assistance activities does not explicitly designate a lead agency. State derives its authority for these activities from the International Security and Development Cooperation Act of 1985, which mandates that the Secretary of State “coordinate” all international counter-terrorism assistance.
Treasury’s primary authority for its assistance programs derives from a 1998 amendment to the Foreign Assistance Act of 1961, which authorized the Secretary of the Treasury, after consultation with the Secretary of State and the Administrator of the U.S. Agency for International Development, to establish a program to provide economic and financial technical assistance to foreign governments and foreign central banks. This provision further mandates that State provide foreign policy guidance to the Secretary of the Treasury to ensure that the program is effectively integrated into the foreign policy of the United States. State and Treasury officials also disagree on procedures and practices for the delivery of counter-terrorism financing training and technical assistance. State cited NSC guidance and an unclassified State document focusing on TFWG as providing procedures and practices for delivering training and technical assistance to all countries. Treasury officials told us that the procedures and practices were pertinent only to the TFWG priority countries and that there is no formal mandate or process to provide technical assistance to countries outside the priority list. Moreover, Justice officials told us that having procedures and practices for TFWG priority countries that differ from those for other vulnerable countries creates problems. This issue is further complicated by the lack of consistent and clear delineation between the countries covered by TFWG and other vulnerable countries also receiving counter-terrorism financing and anti-money laundering assistance funded through State and Treasury. Treasury officials told us that TFWG procedures and practices are overly structured and impractical and have not been updated to incorporate stakeholder concerns, and that the overall process does not function as it should. State and Treasury officials cited numerous examples of disagreements on procedures and practices.
For example: State and Treasury officials disagree on the use of OTA funding and contractors. According to Treasury officials, OTA funding should primarily be used to support intermittent and long-term resident advisors, who are U.S. contractors, to provide technical assistance. According to State officials, OTA should supplement State’s program, which primarily funds current employees of other U.S. agencies. State, Justice, and Treasury officials disagree on whether it is appropriate for U.S. contractors to provide assistance in legislative drafting efforts on anti-money laundering and counter-terrorism financing laws. State officials cited NSC guidance that current Justice employees should be primarily responsible for working with foreign countries to assist in drafting such laws and voiced strong resistance to the use of contractors. Justice officials strongly stated that contractors should not assist in drafting laws and gave several examples of past instances in which USAID and OTA contractor assistance led to problems with the development of foreign laws. In two examples, Justice officials stated that USAID and OTA contractor work did not result in laws meeting FATF standards. In another example, Justice officials reported that a USAID contractor assisted in drafting an anti-money laundering law that had substantial deficiencies, and as a result Justice officials had to take over the drafting process. According to OTA officials, their contractors provide assistance in drafting laws in non-priority countries, OTA makes drafts available to Justice and other U.S. agencies for review and comment, and ultimately the host country itself is responsible for final passage of a law that meets international standards. Treasury and State officials disagree on the use of confidentiality agreements between contractors and the foreign officials they advise. State officials said OTA’s use of confidentiality agreements impedes U.S. interagency coordination.
State officials said the issue created a coordination problem in one country because a poorly written draft law could not be shared with other U.S. agencies for review and resulted in the development of an ineffective anti-money laundering law. Moreover, State officials said the continued practice could present future challenges. However, according to Treasury officials, this was an isolated case involving a problem with the contract, and they said they have taken procedural steps to ensure the error is not repeated. State and Treasury officials disagree on the procedures for conducting assessments of countries’ needs for training and technical assistance. Moreover, Treasury stated that its major concern is with State’s coordination process for the delivery and timing of assistance. According to TFWG procedures for priority countries, if an assessment trip is determined to be necessary, State is to lead and determine the composition of the teams and set the travel dates. This is complicated when a vulnerable country becomes a priority country. For example, in November 2004 Treasury conducted an OTA financial assessment in a nonpriority frontline country and subsequently reached agreement with that country’s central bank minister to put a resident advisor in place to set up a FIU. However, in May 2005, State officials denied clearance for a Treasury official’s visit to the country, which created a delay of 2.5 months (as of the end of July 2005). Treasury officials provided documentation to show that State was aware of their intention to visit the country in November 2004 to determine counter-terrorism and financial intelligence technical assistance needs, that the official leading the segment of work was part of a larger ongoing OTA effort in the country, and that Treasury kept TFWG informed of the results of OTA’s work and continuing efforts. State officials expressed concern that the country had recently become a priority country.
According to State TFWG officials, Treasury work needed to be delayed until a TFWG assessment could be completed. However, the U.S. embassy requested that Treasury proceed with its placement of a resident advisor and that the TFWG assessment be delayed. The U.S. government does not strategically align its resources with its mission to deliver counter-terrorism financing training and technical assistance. For strategic planning to be a dynamic and inclusive process, alignment of resources is a critical element. However, the U.S. government has no clear presentation of its available resources. Further, neither the U.S. government nor TFWG has made a systematic and objective assessment of the full range of available U.S. and potential international resources. As a result, decision-makers do not know the full range of resources available to match to the needs they have identified in priority countries and to determine the best match of remaining resources to needs for other vulnerable countries. Because funding is embedded within anti-money laundering and other programs, the U.S. government does not have a clear presentation of the budget resources that the departments of State and the Treasury allocate for training and technical assistance to counter terrorist financing. State and Treasury receive separate appropriations that can be used for training and technical assistance either by the agencies themselves, by funding other agencies, or by funding contractors. State primarily transmits its training and technical assistance funds to other agencies, while Treasury primarily employs short- and long-term advisors through personal service contracts. Although various officials told us that funding for counter-terrorism financing training and technical assistance is insufficient, the lack of a clear presentation of available budget resources makes it difficult for decision-makers to determine the actual amount allocated to these efforts.
State officials told us that they have two primary funding sources for State counter-terrorism financing training and technical assistance programs: Non-Proliferation, Anti-Terrorism, Demining, and Related Programs funding, which State’s Office of the Coordinator for Counterterrorism uses to provide counter-terrorism financing training and technical assistance to TFWG countries. Based on our analysis of State records, budget authority for this account included $17.5 million for counter-terrorism financing training and technical assistance for fiscal years 2002 to 2005. International Narcotics Control and Law Enforcement funding, which State’s Bureau of International Narcotics Control and Law Enforcement uses to provide counter-terrorism financing and anti-money laundering training and technical assistance to a wide range of countries, including seven priority countries between fiscal years 2002 and 2005, as well as to provide general support to multilateral and regional programs. Based on our analysis of State records, budget authority for this account included about $9.3 million for anti-money laundering, counter-terrorism financing, and related multilateral and regional activities for fiscal years 2002-2005. State officials also told us that other State bureaus and offices provide counter-terrorism financing and anti-money laundering training and technical assistance (e.g., single-course offerings or small-dollar programs) as part of regional, country-specific, or broad-based programs. Treasury officials told us that OTA’s counter-terrorism financing technical assistance is funded through its Financial Enforcement program. Based on our analysis of Treasury records, Treasury OTA received budget authority totaling about $30.3 million for all financial enforcement programs for fiscal years 2002 to 2005.
Counter-terrorism financing technical assistance and training funding is embedded within this program and cannot be segregated from anti-money laundering and other anti-financial crime technical assistance. One OTA official told us that as much as one-third of the funds may be spent on programs countering financial crimes other than terrorist financing in any given year. The U.S. government, including the TFWG, has not made a systematic and objective assessment of the suitability of available resources. According to State and Treasury officials, no systematic analysis has been done to evaluate the effectiveness of contractors and current employees in delivering various types of counter-terrorism training and technical assistance. Decisions at TFWG appear to be made based on anecdotal information rather than transparent and systematic assessments of resources. According to the State Performance and Accountability Report for fiscal year 2004, a shortage of anti-money laundering experts continues to create bottlenecks in meeting assistance needs of requesting nations, including priority countries. State co-chairs of TFWG repeated this concern to us. According to State officials, U.S. technical experts are particularly stretched because of their frequent need to split their time between assessment, training, and investigative missions. Moreover, officials from State’s Office of the Coordinator for Counterterrorism cited the lack of available staff as a reason for their slow start in disbursing funding at TFWG’s inception. Treasury agrees with State that there may be a shortage of anti-money laundering experts in the U.S. government agencies who are available to provide technical assistance in foreign countries; however, according to Treasury, there is not a shortage of U.S. experts who are recent retirees from the same U.S. government agencies. According to OTA officials, OTA can provide contractors, who are primarily recently retired U.S.
government employees with years of experience from the same agencies that provide training to priority countries through State funding. However, State officials stated strong opinions that current U.S. government employees are better qualified to provide counter-terrorism financing training and assistance than contractors. State added that it is TFWG’s policy that current U.S. government experts should be used whenever possible and that, when they are not available, the use of contractors in those instances should be coordinated with the expert agency or office. State officials cited several examples of priority and non-priority countries in which they felt that the work of OTA’s resident advisors did not result in improvements. However, State officials praised the work of one OTA resident advisor in a priority country as a best practice, and other agency and foreign officials supported this view. Further, one State official commended the quality of OTA’s law enforcement technical assistance. Nonetheless, State officials repeatedly stated that they need OTA funding and not OTA-contracted staff to meet current and future needs. A senior OTA official said that OTA has actively sought to provide programs in more priority countries, but State, as chair of the TFWG, has not supported its efforts. Specifically, of the funds that OTA has obligated for financial enforcement-related assistance between fiscal years 2002 and 2005, approximately 11 percent went to priority countries. State officials said that they welcomed more OTA participation in priority countries as part of the mix of applicable resources; however, they questioned whether OTA consistently provides high-quality assistance. Without a systematic assessment of the suitability of resources, decision-makers do not have good information to consider when determining the best mix of government employees and contractors to meet needs.
TFWG has a stated goal of encouraging allies and international entities to contribute resources to help build the counter-terrorism financing capabilities of vulnerable countries and of coordinating training and technical assistance activities, but it has not developed a specific strategy to do so. No single office or organization has systematically consolidated and synthesized available information on the counter-terrorism financing training and technical assistance activities of other countries and international entities and integrated this information into its decision-making process. State and Treasury officials stated that they instead take an ad hoc approach to working with allies and international entities on resource sharing for training and technical assistance. Resource sharing is not considered a priority at TFWG meetings because, according to U.S. officials, interagency issues take higher priority and little time is left to discuss international activities. At one TFWG meeting, U.S. agency officials discovered that different countries and organizations were putting resources into a priority country without any central coordination. TFWG found that Australia was already providing assistance to the financial intelligence unit (FIU) in this priority country and cancelled the assistance it was planning to provide in this area. Without a systematic way to consolidate, synthesize, and integrate information about international activities into the U.S. interagency decision-making process, the U.S. government cannot easily capitalize on opportunities for resource sharing with allies and international entities. The U.S. government, including TFWG, does not have a system in place to measure the performance results of its efforts to deliver training and technical assistance and to incorporate this information into integrated planning efforts. Without such a system, the U.S. government cannot ensure that its efforts are on track. In August 2004, we found no system in place to measure the performance of U.S.
training and technical assistance to combat terrorist financing. According to an official from Justice’s Office of Overseas Prosecutorial Development, Assistance and Training (OPDAT), an interagency committee led by OPDAT was set up to develop a system to measure results. In November 2004, OPDAT had an intern set up a database to track training and technical assistance provided through TFWG and related assistance results for priority countries. Because the database was not accessible to all TFWG members, OPDAT planned to serve as the focal point for entering the data collected by TFWG members. OPDAT asked agencies to provide statistics on programs, funding, and other information, including responses to questions concerning results by function, corresponding to the five elements of an effective counter-terrorism financing regime. OPDAT also planned to track key recommendations for training and technical assistance and progress made in priority countries as provided in FATF and TFWG assessments. However, little progress has been made in further developing the performance measures; the responsible OPDAT official told us they were waiting to hire the next intern to input the data. As of July 2005, a year later, at our exit meetings with OPDAT and the State TFWG chairs, OPDAT was still waiting for an intern to be hired to complete the project. Further, OPDAT and State officials confirmed that the system had not yet been approved or implemented by TFWG; therefore, TFWG did not have a system in place to measure the performance results of its training and technical assistance efforts and incorporate this information into its planning. Treasury faces two accountability issues related to its terrorist asset blocking efforts. First, Treasury’s OFAC reports on the nature and extent of terrorists’ U.S. assets do not provide Congress the ability to assess OFAC’s achievements.
Second, Treasury lacks meaningful performance measures to assess its terrorist designation and asset blocking efforts. While Treasury has developed some limited performance measures, OFAC officials acknowledged that the measures could be improved and are in the process of developing more meaningful performance measures, aided by the development of an OFAC-specific strategic plan. Treasury’s annual reports to Congress on terrorists’ assets do not provide a clear description of the nature and extent of terrorists’ assets held in the United States. Federal law requires the Secretary of the Treasury, in consultation with the Attorney General and appropriate investigative agencies, to provide an annual report to Congress “describing the nature and extent of assets held in the United States by terrorist countries and organizations engaged in international terrorism.” Each year Treasury’s OFAC provides Congress with a Terrorist Assets Report that offers a year-end snapshot of dollar amounts held in U.S. jurisdiction for two types of entities: international terrorists and terrorist organizations, and terrorism-supporting governments and regimes. In 2004, OFAC reported that the United States blocked almost $10 million in assets belonging to seven international terrorist organizations and related designees. The 2004 report also noted that the United States held more than $1.6 billion in assets belonging to six designated state sponsors of terrorism. While each annual report provides year-end statistics for each of the different entities, the reports do not provide a clear description of the nature and extent of assets held in the United States. The reports do not compare blocked assets over the years or offer explanations for many of the significant shifts between years.
For example, the 2004 report stated that the United States held $3.9 million in al Qaeda assets, but it did not state that this represented a 400 percent increase over the value of al Qaeda assets held by the United States in 2003 or offer an explanation for this increase. In addition, the reports for years 2000 to 2004 offer no explanation for the decline in the value of U.S.-held Iranian government assets, which decreased from $347.5 million in 2000 to $82 million in 2004. While the 2000 report showed that the United States blocked $283,000 of Hizballah assets, subsequent reports did not name Hizballah again or explain the status of these blocked assets. Senior OFAC officials acknowledge that the Terrorist Assets Reports do not provide a clear description of the nature and extent of assets blocked and are not useful for assessing progress on asset blocking. Treasury lacks effective performance measures to assess its terrorist designation and asset blocking efforts and demonstrate how these efforts contribute to Treasury’s goals of disrupting and dismantling terrorist financial infrastructures and executing the nation’s financial sanctions policies. Among the performance measures in Treasury’s 2004 Performance and Accountability Report that are related to designations and asset blocking are: an increase in the number of terrorist finance designations for which other countries join the United States; an increase in the number of drug trafficking and terrorist-related financial sanctions targets identified and made public; and an estimated number of sanctioned entities no longer receiving funds from the United States. Treasury officials recognize that these measures do not adequately assess progress made in designating terrorists and blocking their assets.
In addition, they note that these measures do not help assess how efforts to designate terrorists and block their assets contribute to Treasury’s overall goals of disrupting and dismantling terrorists’ financial infrastructure and executing the nation’s financial sanctions policies. First, these measures are not specific to terrorist financing. Two of the three measures do not separate data on terrorists from data on other entities such as drug traffickers, hostile foreign governments, corrupt regimes, and foreign drug cartels, though OFAC officials acknowledged that they could have reported the data separately. Second, Treasury officials said that progress on asset blocking cannot simply be measured by totaling the amount of blocked assets at the end of the year, as the amounts may vary over the year as assets are blocked and unblocked. Third, Treasury has not developed measures to track other activities and benefits related to terrorist designations and asset blocking. For example, according to Treasury officials, Treasury’s underlying research to identify terrorist entities and their support systems is used to aid U.S. financial regulators, law enforcement, and other officials. However, Treasury does not have measures to track the use of this research in other agency activities, such as law enforcement investigations. Treasury officials also stated that terrorist designations have a deterrent value by discouraging further financial support. Measuring effectiveness in terms of deterrence can be very difficult, in part because the direct impact on unlawful activity is unknown and in part because precise metrics are hard to develop for illegal and clandestine activities. According to Treasury officials, measuring effectiveness can also be difficult because many of these efforts run across U.S. government agencies and foreign governments and are highly sensitive.
Treasury’s annual report and strategic plan, however, do not address the deterrent value of designations or discuss the difficulties in measuring its effectiveness. According to the Government Performance and Results Act (GPRA) of 1993, when it is not feasible to develop a measure for a particular program activity, the executive agency shall state why it is infeasible or impractical to express a performance goal for the program activity. OFAC officials told us that they are in the process of developing better quantitative and qualitative measures for assessing OFAC’s designation and asset blocking efforts and the achievements made. In addition, OFAC officials are in the process of developing a strategic plan to guide OFAC’s efforts. This strategic planning effort will help OFAC develop measures to assess how its activities, including terrorist designations and asset blocking, contribute to Treasury’s goals of disrupting and dismantling the financial infrastructure of terrorists and executing the nation’s financial sanctions policies. According to GPRA, executive agency strategic plans should include a comprehensive mission statement; a set of general goals and objectives and an explanation of how they are to be achieved; and a description of how performance goals and measures are related to the general goals and objectives of the program. OFAC officials said they have initiated efforts to develop an OFAC-specific strategic plan and performance measures. In their technical comments in response to our draft report, officials stated that the new performance measures will relate to OFAC’s research, outreach, and sanctions administration. Additionally, officials stated that they expect OFAC’s new performance measures to be completed by December 1, 2005, and its new strategic plan to be completed by January 1, 2006.
However, OFAC officials did not provide us with documentation to demonstrate that they have established milestones or a completion date to accomplish these projects. Without a strategy that integrates the funding and delivery of training and technical assistance by State and Treasury’s OTA, the U.S. government will not maximize the use of its resources in the fight against terrorist financing. Meanwhile, due to disagreements over leadership and procedures, staff energy and talent are wasted trying to resolve interagency disputes. By making decisions based on anecdotal and informal information rather than transparent and systematic assessments, managers cannot effectively address problems before they grow and become crises. Moreover, given the scarce expertise available to address counter-terrorism financing, by not focusing efforts on how all available U.S. and international resources can be integrated into a U.S. strategy, the U.S. government may miss opportunities to leverage resources. Finally, without dedicating resources to complete a performance measurement system, the State-led TFWG effort does not have the information needed for optimal coordination and planning. The lack of accountability for Treasury’s designations and asset blocking program creates uncertainty about the department’s progress and achievements. U.S. officials with oversight responsibilities need meaningful and relevant information to ascertain the progress, achievements, and weaknesses of U.S. efforts to designate terrorists and dismantle their financial networks, as well as to hold managers accountable. Meaningful information may also help these officials understand the importance of asset blocking in the overall U.S. effort to combat terrorist financing and make resource allocation decisions across programs. The development of a strategic plan for OFAC could help facilitate the development of meaningful performance measures. To ensure that U.S.
government interagency efforts to provide counter-terrorism financing training and technical assistance are integrated and efficient, particularly with respect to priority countries, we recommend that the Secretary of State and the Secretary of the Treasury, in consultation with NSC and relevant government agencies, develop and implement an integrated strategic plan for the U.S. government that does the following: designates leadership and provides for stakeholder involvement; includes a systematic and transparent assessment of U.S. government resources; delineates a method for aligning the resources of relevant U.S. agencies to support the mission; and provides processes and resources for measuring and monitoring results, identifying gaps, and revising strategies accordingly. To ensure a seamless campaign in providing counter-terrorism financing training and technical assistance programs to vulnerable countries, we recommend that the Secretaries of State and the Treasury enter into a Memorandum of Agreement concerning counter-terrorism financing and anti-money laundering training and technical assistance. The agreement should specify: the roles of each department, bureau, and office with respect to conducting needs assessments and delivering training and technical assistance; methods to resolve disputes concerning OTA’s use of confidentiality agreements in its contracts when providing counter-terrorism financing and anti-money laundering assistance; and coordination of funding and resources for counter-terrorism financing and anti-money laundering training and technical assistance. To ensure that policy makers and program managers are able to examine the overall achievements of U.S. efforts to block terrorists’ assets, we also recommend that the Secretary of the Treasury provide in its annual Terrorist Assets Report to Congress more complete information on the nature and extent of asset blocking in the United States.
Specifically, the report should include such information as the differences in amounts blocked between years, when and why assets were unfrozen, the achievements and obstacles faced by the U.S. government, and a classified annex if necessary. In addition, as part of Treasury’s ongoing strategic planning efforts, we recommend that the Secretary of the Treasury complete efforts to develop an OFAC-specific strategic plan and meaningful performance measures by January 1, 2006, and December 1, 2005, respectively, to guide and assess its asset blocking efforts. In view of congressional interest in U.S. government efforts to deliver training and technical assistance abroad to combat terrorist financing and the difficulty in obtaining a systematic assessment of U.S. resources dedicated to this endeavor, Congress should consider requiring the Secretary of State and the Secretary of the Treasury to submit an annual report to Congress on the status of the development and implementation of the integrated strategic plan and Memorandum of Agreement. We provided draft copies of this report to the Departments of Defense, Homeland Security, Justice, State, and Treasury for review. We received comments from the Departments of Justice, State, and the Treasury (see apps. V, VI, and VII). We did not receive agency comments from the Departments of Defense or Homeland Security. State did not agree with our recommendation that the Secretaries of State and Treasury, in consultation with the NSC and relevant government agencies, develop and implement an integrated strategic plan to coordinate the delivery of training and technical assistance abroad. State asserted that it has an integrated strategic plan and believes that a series of NSC documents and State’s Office of the Coordinator for Counterterrorism’s Bureau Performance Plan serve this purpose.
We reviewed the NSC documentation, which included minutes, an agreement, and conclusions, all of which serve as the NSC guidance for the TFWG. We also reviewed State’s Office of the Coordinator for Counterterrorism’s Bureau Performance Plan, which we found included the Bureau’s objectives and performance measures for counterterrorist financing programs. We do not agree that this NSC guidance and Bureau performance plan constitute an integrated strategy that addresses the issues raised in this report because the effort, in practice, does not have key stakeholder buy-in on roles and practices, a strategic alignment of resources with needs, or a system to measure performance and use results; thus, an integrated strategy is still needed. It is also noteworthy that Treasury did not state in its comments that an integrated strategic plan existed or was in place, and it did not highlight these specific documents as serving this purpose. Treasury did not directly address our recommendation for an integrated strategic plan and proposed a new title, “Integrated U.S. Strategic Plan Needed to Improve the Coordination of Counterterrorism Finance Training and Technical Assistance to Certain Priority Countries,” which suggests agreement with the recommendation but limits the integrated strategic plan’s coverage to certain priority countries. Treasury also stated its agreement with the need for performance measures. It is useful to note that Treasury repeatedly placed the focus of efforts for improvement on priority countries and, as noted in its technical comments, does not recognize State’s leadership over the delivery of training and technical assistance other than to priority countries. For example, in its technical comments Treasury stated that “State’s role is coordinating each U.S.
government agency’s personnel and expertise to allow them to deliver the needed training in commonly agreed upon priority countries.” This comment further supports the need to better integrate efforts. Justice stated that, given its role and expertise in providing training and technical assistance, the fact that it was not included as an equal partner with State and Treasury in the recommendation was a critical omission. We note that Justice is one of a number of agencies referred to as relevant government agencies in the recommendation. Justice receives funding from State and, according to Justice, State has been supportive of Justice’s training and technical assistance efforts. State did not agree with our recommendation that the Secretaries of State and Treasury enter into a Memorandum of Agreement concerning counter-terrorism financing and anti-money laundering training and technical assistance. State stated that it has an interagency agreement. Based on our review, the classified document serving as an interagency agreement lacks clarity, familiarity, and buy-in from all levels of leadership within TFWG, particularly Treasury. State added that if there were to be a Memorandum of Agreement, it believes the agreement should include all agencies engaged in providing training and technical assistance, not just State and Treasury. Treasury did not address this recommendation. However, Treasury stated that it wished to improve the effectiveness of U.S. technical assistance to combat terrorist financing, particularly with respect to certain priority countries, and stated that it would welcome suggestions as to how Treasury, together with relevant U.S. government agencies, can better achieve that goal. Justice again stated that the report’s critical flaw is omitting Justice from equal standing with State and Treasury. Justice noted that it is a key player and therefore should be involved in all interagency deliberations and decisions.
We continue to believe that the Memorandum of Agreement should include the Secretaries of State and Treasury. State and Treasury both primarily fund and support U.S. government anti-money laundering and counter-terrorist financing training and technical assistance programs; Treasury also provides considerable training and technical assistance abroad through current U.S. government employees and contractors. It is important that their programs and funding are integrated to optimize results. Other agencies are important stakeholders, as they are recipients of this funding and support and should benefit from improved coordination between these two agencies. In response to our recommendation that the Secretary of the Treasury provide more complete information on the nature and extent of asset blocking in the United States in its annual Terrorist Assets Report to Congress, Treasury responded in its technical comments that we should “instead recommend that Congress consider discontinuing the requirement that Treasury produce the annual report altogether.” Treasury officials also stated that the Terrorist Assets Report, “based upon the input of numerous government agencies, provides a snapshot of the known assets held in the United States by terrorist-supporting countries and terrorist groups at a given point in time. These numbers may fluctuate during each year and between years for a number of policy-permissible reasons. The amount of assets blocked under a terrorism sanctions program is not a primary measure of a terrorism sanctions program’s effectiveness, and countries that have been declared terrorist supporting, and whose assets are not blocked by a sanctions program, are already weary of holding assets in the United States.” Moreover, in its technical comments Treasury states that Terrorist Assets Reports were “not mandated or designed as an accountability measure for OFAC’s effectiveness in assisting U.S.
persons in identifying and blocking assets of persons designated under relevant Executive orders relating to terrorism.” We acknowledge that the language in the mandate for the Terrorist Assets Reports did not explicitly designate the reports as an accountability measure; however, nothing in the statutory language or in the congressional intent underlying the mandate precludes Treasury from compiling and reporting information in the manner in which we have suggested in this report. Furthermore, we believe that inclusion of comparative information and additional explanation regarding significant shifts between years will enhance program reporting and congressional oversight. Justice did not comment on this recommendation. State commented that this recommendation was incomplete in that it makes no mention of State’s role in blocking assets and promoting international cooperation to achieve it; however, we did not include State in this recommendation because it is the Secretary of the Treasury who is responsible for producing the annual Terrorist Assets Reports. Treasury’s technical comments state that “OFAC officials have advised that OFAC’s new performance measures are expected to be completed by December 1, 2005, and its new strategic plan is expected to be completed by January 1, 2006.” We modified our recommendation to incorporate this new information. 
State suggested in its technical comments that we revise this recommendation to read, “In addition, we recommend that the Secretary of the Treasury, in consultation with the Departments of State and Justice and the other departments and agencies represented on the Terrorist Finance Policy Coordination Committee, establish milestones for developing a strategic plan and meaningful performance measures to guide and assess its asset blocking process.” We did not include the Secretary of State or the Attorney General in this recommendation because the scope of this objective focused solely on the accountability issues Treasury faces in its efforts to block terrorists’ assets. However, we recognize that State has an important role in targeting individuals, groups, or other entities suspected of terrorism or terrorist financing, and we added language to the section of the report on terrorist designations to clarify the roles of the multiple agencies involved in this effort. Treasury’s comments also suggested that we replace, in its entirety, our report’s third objective on the accountability of Treasury’s terrorist asset blocking efforts with revised text that Treasury officials had prepared. We reviewed the revised text and noted that many of Treasury’s points were already covered in our report. In some cases we added technical information to our report to help clarify the challenges that Treasury faces in assessing the impact of terrorist designation activities. None of these agencies provided comments on our matter for congressional consideration. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Attorney General, the Secretary of Defense, the Secretary of Homeland Security, the Secretary of State, the Secretary of the Treasury, and interested congressional committees.
We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Chairman of the Senate Caucus on International Narcotics Control, Senator Charles E. Grassley; Senator Richard J. Durbin; and Chairman of the Senate Committee on Homeland Security and Governmental Affairs, Senator Susan M. Collins, asked us to (1) provide an overview of U.S. government efforts to combat terrorist financing abroad and (2) examine U.S. government efforts to coordinate the delivery of training and technical assistance to vulnerable countries. In addition, they requested that we examine specific accountability issues the Department of the Treasury (Treasury) faces in its efforts to block terrorists’ assets held under U.S. jurisdiction. To provide an overview of U.S. government efforts to combat terrorist financing abroad, we reviewed documents and interviewed officials of U.S. agencies and departments and their bureaus and offices. We reviewed legislation, strategic plans, performance plans, and other agency documents, as well as relevant papers, studies, Congressional Research Service (CRS) reports, and our own work to identify specific agency responsibilities and objectives. We assessed this information to identify key efforts and obtain further details and clarification and then validated and deconflicted information across agencies and departments in the United States and overseas in Indonesia, Pakistan, and Paraguay. We based country selection on Department of State (State) reporting of a nexus of terrorist financing, State reporting of assistance to the country, and the use of alternative financing mechanisms in the country.
In each country, we discussed key challenges with responsible foreign and U.S. embassy officials, as well as with international entity officials. We grouped the different types of responsibilities into four categories (designations, intelligence and law enforcement, standards setting, or training) and validated these categories during meetings with U.S. government officials. Our scope and methodology were limited by lack of complete access to sensitive and classified information. We reviewed documents or interviewed officials from the following U.S. departments and agencies: the Central Intelligence Agency; the Department of Defense (Defense Intelligence Agency); the Department of Homeland Security (Immigration and Customs Enforcement and Customs and Border Protection); the Department of Justice (Bureau of Alcohol, Tobacco, Firearms, and Explosives; Criminal Division’s Asset Forfeiture and Money Laundering Section, Counter Terrorism Section, and Office of Overseas Prosecutorial Development, Assistance and Training; Drug Enforcement Administration; Federal Bureau of Investigation); the Department of State (Bureau of Economic and Business Affairs; Bureau for International Narcotics and Law Enforcement Affairs; Office of the Coordinator for Counterterrorism; Bureau of International Organizations; U.S. Mission to the United Nations; U.S. Agency for International Development; U.S. Missions to Indonesia, Pakistan, and Paraguay); the Department of the Treasury (Office of Technical Assistance, Office of Foreign Assets Control, Financial Crimes Enforcement Network, the Office of Terrorist Financing and Financial Crime, IRS’s Criminal Investigation Division). We also verified U.S. government efforts through documentation or interviews with officials from international entities including the Financial Action Task Force on Money Laundering, the International Monetary Fund (IMF), the World Bank, the United Nations (UN), and the Organization of American States. To examine U.S. 
government efforts to coordinate the delivery of training and technical assistance to vulnerable countries, we examined relevant laws; reports to Congress; National Security Council (NSC) guidance; strategic plans; policies and procedures; budget and expenditure information; agency and international entity training data, documents, and reports; contractor resumes; communications between embassies and agencies; interagency communications; web site information; and GAO criteria for strategic planning, collaboration, and performance results. In conjunction, we interviewed U.S. agency officials involved in the Terrorist Financing Working Group (TFWG), U.S. officials involved in the delivery of training and technical assistance abroad, and others with a stake in counter-terrorism financing training and technical assistance, including officials of international entities, foreign government officials, and experts. We also observed a TFWG meeting. We requested an interview with the NSC, but the NSC declined our request. We assessed U.S. efforts to coordinate the delivery of training and technical assistance to vulnerable countries using applicable elements of a sound strategic plan and identified those areas in which the U.S. effort is lacking. We assessed documentation and interviewed officials from: the Department of Homeland Security (Immigration and Customs Enforcement); the Department of Justice (Criminal Division’s Asset Forfeiture and Money Laundering Section, Counter Terrorism Section, and Office of Overseas Prosecutorial Development, Assistance and Training; Federal Bureau of Investigation); the Department of State (Bureau for International Narcotics and Law Enforcement Affairs, Office of the Coordinator for Counter-terrorism, Bureau of International Organizations, U.S. Mission to the United Nations, U.S. Agency for International Development; three U.S.
embassies abroad); the Department of the Treasury (Office of Technical Assistance, Office of Foreign Assets Control, Financial Crimes Enforcement Network, the Executive Office for Terrorist Financing and Financial Crime, IRS’s Criminal Investigation Division); the Financial Action Task Force on Money Laundering (FATF); international financial institutions, including the International Monetary Fund (IMF), World Bank, Asian Development Bank (ADB), and Inter-American Development Bank; the United Nations (UN), including the Counter Terrorism Committee and the sanctions committees and monitoring mechanisms established under relevant UN Security Council resolutions; and the Organization of American States. To examine specific issues the U.S. government faces in holding Treasury accountable for its efforts to block terrorists’ assets held in the United States, we interviewed officials from the Department of the Treasury’s Office of Foreign Assets Control (OFAC) in Washington, D.C. We reviewed applicable laws, regulations, and executive orders to determine reporting requirements. In addition, we examined OFAC’s annual Terrorist Assets Reports for calendar years 1999 to 2004. Our examination focused on comparing the nature and extent of blocked assets by year for OFAC’s three programs targeting international terrorists and terrorist organizations and five programs targeting terrorism-supporting governments and regimes to understand how OFAC communicated changes in an organization or country’s blocked assets over time. We also compared and contrasted the performance measures for designation and asset blocking included in Treasury’s Strategic Plan for fiscal years 2003-2008 with those indicated in its Annual Performance and Accountability Report for fiscal years 2003 and 2004. We reviewed testimony and speeches by OFAC and other Treasury officials, as well as information from OFAC’s website, to learn more about key issues and progress made on designating terrorists and blocking their assets.
We reviewed relevant information from the Congressional Research Service and our own work. To assess the extent to which Treasury’s performance measures for designating terrorists and blocking assets focused on factors critical to assessing performance, we reviewed a range of our previous reports examining factors that were necessary components for meaningful measures. We performed our work from March 2004 through July 2005 in accordance with generally accepted government auditing standards. United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances (1988) (The Vienna Convention): Defines the concept of money laundering; its definition is the most widely accepted. Calls upon countries to criminalize the activity. Limited to drug trafficking as a predicate offense and does not address the preventive aspects. International Convention Against Transnational Organized Crime (2000) (The Palermo Convention): Came into force in September 2003. Obligates ratifying countries to criminalize money laundering via domestic law, including all serious crimes as predicate offenses of money laundering, whether committed in or outside of the country, and permitting the required criminal knowledge or intent to be inferred from objective facts; establish regulatory regimes to deter and detect all forms of money laundering, including customer identification, recordkeeping, and reporting of suspicious transactions; authorize the cooperation and exchange of information among administrative, regulatory, law enforcement, and other authorities, both domestically and internationally; consider the establishment of a financial intelligence unit to collect, analyze, and disseminate information; and promote international cooperation. International Convention for the Suppression of the Financing of Terrorism (1999): Came into force in 2002. Requires ratifying countries to criminalize the financing of terrorism, terrorist organizations, and terrorist acts.
Makes it unlawful for any person to provide or collect funds with the intent that the funds be used, or the knowledge that the funds will be used, to conduct certain terrorist activity. Encourages states to implement measures that are consistent with FATF Recommendations. Security Council Resolutions 1267 and 1390: Adopted October 15, 1999, and January 16, 2002, respectively. Obligate member states to freeze assets of individuals and entities associated with Osama bin Laden or members of al Qaeda or the Taliban that are included on the consolidated list maintained and regularly updated by the UN 1267 Sanctions Committee. Security Council Resolution 1373: Adopted September 28, 2001, in direct response to the events of September 11, 2001. Obligates countries to criminalize actions to finance terrorism; deny all forms of support and freeze funds or assets of persons, organizations, or entities involved in terrorist acts; prohibit active or passive assistance to terrorists; and cooperate with other countries in criminal investigations and sharing information about planned terrorist acts. Security Council Resolution 1617: Adopted July 29, 2005. Extended sanctions against al Qaeda, Osama bin Laden, and the Taliban, and strengthened previous related resolutions. Convention Against Corruption (2003): Not yet in force. First legally binding multilateral treaty to address the problems relating to corruption on a global basis. As of July 11, 2005, 29 countries had become parties to the Convention (30 are required for the Convention to enter into force). Requires parties to institute a comprehensive domestic regulatory and supervisory regime for banks and financial institutions to deter and detect money laundering. Regime must emphasize requirements for customer identification, recordkeeping, and suspicious transaction reporting.
Global Program Against Money Laundering: A research and assistance project offering technical expertise, training, and advice to member countries on anti-money laundering and counter-terrorism financing upon request to raise awareness; help create legal frameworks with the support of model legislation; develop institutional capacity, in particular with the creation of financial intelligence units; provide training for the legal, judicial, law enforcement, regulatory, and private financial sectors, including computer-based training; promote a regional approach to addressing problems; maintain strategic relationships; and maintain a database and perform analysis of relevant information. The Counter Terrorism Committee (CTC): Established via Security Council Resolution 1373 to monitor the performance of member countries in building a global capacity against terrorism. Countries submit a report to the CTC on steps taken to implement the resolution’s measures and report regularly on progress. The CTC asked each country to perform a self-assessment of existing legislation and mechanisms to combat terrorism in relation to Resolution 1373. The CTC identifies weaknesses and facilitates assistance, but does not provide direct assistance. Financial Action Task Force on Money Laundering (FATF): Formed in 1989 by the G-7 countries, FATF is an intergovernmental body composed of 31 member jurisdictions and two regional organizations whose purpose is to develop and promote policies, both at the national and international levels, to combat money laundering and the financing of terrorism. Its mission expanded to include counter-terrorism financing in October 2001. FATF has developed multiple partnerships with international and regional organizations in order to constitute a global network of organizations against money laundering and terrorist financing. The 40 Recommendations on Money Laundering: Constitute a comprehensive framework for anti-money laundering designed for universal application.
Permit country flexibility in implementing the principles according to the country’s own particular circumstances and constitutional requirements. Although not binding as law, the recommendations have been widely endorsed by the international community and relevant organizations as the international standard for anti-money laundering. The Special Recommendations on Terrorist Financing: FATF adopted eight special recommendations and recently added a ninth. FATF members use a self-assessment questionnaire of their country’s actions to come into compliance. The nine deal with both formal banking and non-banking systems: (1) ratification and implementation of UN instruments; (2) criminalizing the financing of terrorism and associated money laundering; (3) freezing and confiscating terrorist assets; (4) reporting suspicious transactions related to terrorism; (5) international cooperation; (6) imposing anti-money laundering requirements on alternative remittance systems; (7) strengthening customer identification measures in international and domestic wire transfers; (8) ensuring that non-profit organizations are not misused; and (9) detecting and preventing cross-border transportation of cash by terrorists and other criminals. The Non-Cooperative Countries and Territories (NCCT) List: One of FATF’s objectives is to promote the adoption of international anti-money laundering/counter-terrorism financing standards by all countries. Its mission thus extends beyond its own membership, although FATF can only sanction its member countries and territories. To encourage all countries to adopt measures to prevent, detect, and prosecute money launderers (i.e., to implement the 40 Recommendations), FATF adopted a process to identify non-cooperative countries and territories that serve as obstacles to international cooperation in this area and place them on a public list. An NCCT country is encouraged to make rapid progress in remedying its deficiencies; otherwise, counter-measures may be imposed, which may include specific actions by FATF member countries.
Most countries make a concerted effort to be taken off the NCCT list because listing causes significant problems for their international business and reputation. Monitoring Members’ Progress: Facilitated by a two-stage process: self-assessments and mutual evaluations. In the self-assessment stage, each member annually responds to a standard questionnaire regarding its implementation of the recommendations. In the mutual evaluation stage, each member is examined and assessed by experts from other member countries. Ultimately, if a member country does not take steps to achieve compliance, its membership in the organization can be suspended. There is, however, a sense of peer pressure and a process of graduated steps before these sanctions are enforced. Methodology for Anti-Money Laundering/Counter-Terrorist Financing Assessments: Based on consultations with the IMF, World Bank, and other standard setters, FATF developed and adopted a comprehensive mutual assessment methodology for the 40 recommendations and the special recommendations, providing an internationally agreed basis among standard setters and others for assessing anti-money laundering/counter-terrorist financing regimes. Typologies Exercise: FATF issues annual reports on developments in money laundering through its typologies report, which keeps countries current with new techniques and trends. International Monetary Fund (IMF) and World Bank: The World Bank helps countries strengthen development efforts by providing loans and technical assistance for institutional capacity building. The IMF mission involves financial surveillance and the promotion of international monetary stability.
Research and Analysis and Awareness-Raising: Conducted work on international practices in implementing anti-money laundering and counter-terrorist financing regimes; issued Analysis of the Hawala System, discussing implications for regulatory and supervisory response; and developed a comprehensive reference guide on anti-money laundering/counter-terrorist financing presenting all relevant information in one source. Conducted the Regional Policy Global Dialogue series with country, World Bank, IMF, development bank, and FATF-style regional body participants covering challenges, lessons learned, and assistance needed; and developed Country Assistance Strategies that cover anti-money laundering and counter-terrorism in greater detail in countries that have been deficient in meeting international standards. Assessments: Worked in close collaboration with FATF and FATF-style regional bodies to produce a single comprehensive methodology for anti-money laundering/counter-terrorist financing assessments; and engaged in a successful pilot program of assessments of country compliance with FATF recommendations. In 2004, adopted the FATF 40 recommendations and 9 special recommendations as one of the 12 standards and codes for which Reports on the Observance of Standards and Codes can be prepared and made anti-money laundering/counter-terrorist financing assessments a regular part of IMF/World Bank work. World Bank and IMF staff participated in 58 of the 92 assessments conducted since 2002. Training and Technical Assistance: Organized training conferences and workshops, delivered technical assistance to individual countries, and coordinated technical assistance. Substantially increased technical assistance to member countries on strengthening legal, regulatory, and financial supervisory frameworks for anti-money laundering/counter-terrorist financing. In 2002-2003 there were 85 country-specific technical projects benefiting 63 countries and 32 projects reaching more than 130 countries.
Between January 2004 and June 2005 the World Bank and IMF delivered an additional 210 projects. In 2004, the IMF and the World Bank decided to expand the anti-money laundering/counter-terrorist financing technical assistance work to cover the full scope of the expanded FATF recommendations following the successful pilot program of assessments. Egmont Group of Financial Intelligence Units: A forum for Financial Intelligence Units (FIUs) to improve support for their respective national anti-money laundering and counter-terrorism financing programs. In June 2005 there were 101 member countries. The group fosters the development of FIUs and the exchange of critical financial data among the FIUs. The group is involved in improving interaction among FIUs in the areas of communications, information sharing, and training coordination. The Egmont Group’s Principles for Information Exchange Between Financial Intelligence Units for Money Laundering Cases include conditions for the exchange of information, limitations on permitted uses of information, and confidentiality. Members of the Egmont Group have access to a secure private website to exchange information. As of 2004, 87 of the members were connected to the secure website. The group has produced a compilation of one hundred sanitized cases about the fight against money laundering from its member FIUs. Within the group there are five working groups: Legal, Outreach, Training/Communications, Operations, and Information Technology. The Egmont Group is focusing on expanding its membership in the Africa and Asia regions. Counterterrorism Action Group (CTAG): CTAG includes the G-8 (Canada, France, Germany, Italy, Japan, Russia, the United Kingdom, and the United States) as well as other states, mainly donors, to expand counterterrorism capacity-building assistance. CTAG goals are to analyze and prioritize needs and expand training and assistance in critical areas including counter-terrorism financing and other counterterrorism areas.
CTAG also plans to work with the UN Counter-Terrorism Committee to promote implementation of Security Council Resolution 1373. In 2004, CTAG coordinated with FATF to obtain assessments of countries CTAG identified as priorities. FATF-Style Regional Bodies (FSRBs): FSRBs encourage implementation and enforcement of FATF’s 40 recommendations and special recommendations. They administer mutual evaluations of their members, which are intended to identify weaknesses so that the member may take remedial action. They provide members information about trends, techniques, and other developments for money laundering in their typology reports. The size, sophistication, and degree to which the FSRBs can carry out their missions vary greatly. The FSRBs are the Asia/Pacific Group on Money Laundering, Caribbean Financial Action Task Force, Council of Europe MONEYVAL, Eastern and Southern African Anti-Money Laundering Group, Eurasian Group on Combating Money Laundering and Financing of Terrorism, Financial Action Task Force Against Money Laundering in South America, Middle East and North Africa Financial Action Task Force, and Inter-governmental Action Group Against Money Laundering (West Africa). Organization of American States (CICAD): Regional body for security and diplomacy in the Western Hemisphere with 34 member states. In 2004, the commission amended model regulations for the hemisphere to include techniques to combat terrorist financing, developed a variety of associated training initiatives, and held a number of anti-money laundering/counter-terrorism meetings. Its Mutual Evaluation Mechanism included updating and revising some 80 questionnaire indicators through which the countries mutually evaluate regional efforts and projects. Worked with the Inter-American Development Bank and France to provide training for prosecutors and judges. Based on an agreement with the Inter-American Development Bank for nearly $2 million, conducting a two-year project to strengthen FIUs in eight countries.
Evaluating strategic plans and advising on technical design for FIUs in the region. Asian Development Bank (ADB): Established in 1966, the ADB is a multilateral development finance institution dedicated to reducing poverty in Asia and the Pacific. The bank is owned by 63 members, mostly from the region, and engages mostly in public sector lending in its developing member countries. According to the ADB, it was one of the first multilateral development banks to address the money laundering problem, directly and indirectly, through regional and country assistance programs. The ADB Policy Paper, adopted on April 1, 2003, has three key elements: (1) assisting developing member countries in establishing and implementing effective legal and institutional systems for anti-money laundering and counter-terrorism financing, (2) increasing collaboration with other international organizations and aid agencies, and (3) strengthening internal controls to safeguard ADB's funds. The bank provides loans and technical assistance for a broad range of development activities, including strengthening and developing anti-money laundering regimes. Basel Committee on Banking Supervision: Established by the central bank governors of the Group of Ten countries in 1974, the committee formulates broad supervisory standards and guidelines and recommends statements of best practice in the expectation that individual authorities will take steps to implement them through detailed arrangements (statutory or otherwise) that are best suited to their own national systems.
Three of the Basel Committee’s supervisory standards and guidelines concern money laundering issues: (1) Statement on Prevention of Criminal Use of the Banking System for the Purpose of Money Laundering (1988), which outlines basic policies and procedures that bank managers should ensure are in place; (2) Core Principles for Effective Banking Supervision (1997), which provides a comprehensive blueprint for an effective bank supervisory system and covers a wide range of topics, including money laundering; and (3) Customer Due Diligence (2001), which also strongly supports adoption and implementation of the FATF recommendations. International Association of Insurance Supervisors (IAIS): Its Anti-Money Laundering Guidance Notes for Insurance Supervisors and Insurance Entities (2002) is a comprehensive discussion of money laundering in the context of the insurance industry. The guidance is intended to be implemented by individual countries taking into account the particular insurance companies involved, the products offered within the country, and the country’s own financial system. It is consistent with the FATF 40 Recommendations and the Basel Core Principles for Effective Banking Supervision. The paper was updated as the Guidance Paper on Anti-Money Laundering and Combating the Financing of Terrorism (2004) with cases of money laundering and terrorist financing. A document based upon these cases is posted on the Web site and updated, and new cases that might result from the FATF typology project are to be added. International Organization of Securities Commissions (IOSCO): Its 105 member national securities commissions regulate and administer securities laws in their respective jurisdictions. Core objectives are to protect investors; ensure that markets are fair, efficient, and transparent; and reduce systemic risk. Passed “Resolution on Money Laundering” in 1992.
Principles on Client Identification and Beneficial Ownership for the Securities Industry (2004) is a comprehensive framework relating to customer due diligence requirements that complements the FATF 40 recommendations. IOSCO and FATF have discussed further steps to strengthen cooperation among FIUs and securities regulators in order to combat money laundering and terrorist financing. According to State, TFWG is made up of various agencies throughout the U.S. government and convened in October 2001 to develop and provide counter-terrorism finance training to countries deemed most vulnerable to terrorist financing. TFWG is co-chaired by State’s Office of the Coordinator for Counterterrorism and the Bureau for International Narcotics and Law Enforcement Affairs and meets on a bi-weekly basis to receive intelligence briefings, schedule assessment trips, review assessment reports, and discuss the development and implementation of technical assistance and training programs. According to State, the process is as follows: 1. With input from the intelligence and law enforcement communities, State, Treasury, and the Department of Justice (Justice) identify and prioritize countries needing the most assistance to deal with terrorist financing. 2. Evaluate priority countries’ counter-terrorism finance and anti-money laundering regimes with Financial Systems Assessment Team (FSAT) onsite visits or Washington tabletop exercises. State-led FSAT teams of 6-8 members include technical experts from State, Treasury, Justice, and other regulatory and law enforcement agencies. The FSAT onsite visits take about one week and include in-depth meetings with host government financial regulatory agencies, the judiciary, law enforcement agencies, the private financial services sector, and non-governmental organizations. 3.
Prepare a formal assessment report on vulnerabilities to terrorist financing and make recommendations for training and technical assistance to address these weaknesses. The formal report is shared with the host government to gauge its receptivity and to coordinate U.S. offers of assistance. 4. Develop a counter-terrorism financing training implementation plan based on FSAT recommendations. Counter-terrorism financing assistance programs include financial investigative training to “follow the money,” financial regulatory training to detect and analyze suspicious transactions, judicial and prosecutorial training to build financial crime cases, financial intelligence unit development, and trade-based money laundering training addressing over/under-invoicing schemes used for money laundering or terrorist financing. 5. Provide sequenced training and technical assistance to priority countries in-country, regionally, or in the United States. 6. Encourage burden sharing with our allies, international financial institutions (e.g., IMF, World Bank, regional development banks), and international organizations such as the United Nations, the UN Counterterrorism Committee, FATF, or the Group of Eight (G-8) to capitalize on and maximize international efforts to strengthen counter-terrorism finance regimes around the world. International Law Enforcement Academies (ILEAs): Regional academies led by U.S. agencies partnering with foreign governments to provide law enforcement training, including anti-money laundering and counter-terrorism financing. ILEAs in Gaborone, Botswana; Bangkok, Thailand; Budapest, Hungary; and Roswell, New Mexico, train over 2,300 participants annually on topics such as criminal investigations, international banking and money laundering, drug-trafficking, human smuggling, and cyber-crime. Provides financial regulatory training and technical assistance to central banks, foreign banking supervisors, and law enforcement officials in Washington, D.C.
and abroad, and participates in U.S. interagency assessments of foreign government vulnerabilities. Provides financial regulatory training through seminars and regional conference presentations in Washington, D.C. and abroad, and participates in U.S. interagency assessments of foreign government vulnerabilities. Provides law and border enforcement training and technical assistance to foreign governments, in conjunction with other U.S. law enforcement agencies and the ILEAs. Participates in assessments of foreign countries in the law and border enforcement arena. Assists in the drafting of money laundering, terrorist financing, and asset forfeiture legislation compliant with international standards for international and regional bodies and foreign governments. Provides legal training and technical assistance to foreign prosecutors and judges, in conjunction with Justice’s Office of Overseas Prosecutorial Development, Assistance and Training. Sponsors conferences and seminars on transnational financial crimes such as forfeiting the proceeds of corruption, human trafficking, counterfeiting, and terrorism. Participates in U.S. interagency assessments of countries’ capacity to block, seize, and forfeit terrorist and other criminal assets. Provides investigative and prosecutorial training and technical assistance to foreign investigators, prosecutors, and judges in conjunction with the Office of Overseas Prosecutorial Development, Assistance and Training and other Department of Justice components. Provides law enforcement training on international asset forfeiture and anti-money laundering to foreign governments, in conjunction with other Department of Justice components and through ILEAs. Provides basic and advanced law enforcement training to foreign governments on a bilateral and regional basis and through ILEAs and the Federal Bureau of Investigation’s Academy in Quantico, Virginia.
Developed a two-week terrorist financing course that was delivered and accepted as the U.S. government’s model. Participates in U.S. interagency assessments of countries’ law enforcement and counter-terrorism capabilities. Provides law enforcement training and technical assistance to foreign counterparts abroad in conjunction with other Department of Justice components. Provides legal and prosecutorial training and technical assistance for criminal justice sector counterparts abroad and through ILEAs in drafting anti-money laundering and counter-terrorism financing statutes. Provides Resident Legal Advisors to focus on developing counter-terrorism legislation that criminalizes terrorist financing and achieves other objectives. Conducts regional conferences on terrorist financing, including a focus on charitable organizations. Participates in U.S. interagency assessments to determine countries’ criminal justice system capabilities. Coordinate and fund U.S. training and technical assistance provided by other U.S. agencies to develop or enhance the capacity of a selected group of more than two dozen countries whose financial sectors have been used to finance terrorism. Also manage or provide funding for other anti-money laundering or counter-terrorism financing programs for the Department of State, other U.S. agencies, ILEAs, international entities, and regional bodies. Leads U.S. interagency assessments of foreign government vulnerabilities. Provides law enforcement training for foreign counterparts and through ILEAs to develop the skills necessary to investigate financial crimes. Provides legal technical assistance to foreign governments by drafting legislation that criminalizes terrorist financing. Provides resident advisors who deliver technical assistance to judicial officials in their home countries.
Provides financial intelligence training and technical assistance to a broad range of government officials, financial regulators, law enforcement officers, and others abroad with a focus on the creation and improvement of financial intelligence units. FinCEN’s IT personnel provide FIU technical assistance in two primary areas: analysis and development of network infrastructures and access to a secure web network for information sharing. Conducts personnel exchanges and conferences. Partners with other governments and international entities to coordinate training. Participates in assessments of foreign governments’ financial intelligence capabilities. Provides law enforcement training and technical assistance to foreign governments and through ILEAs to develop the skills necessary to investigate financial crimes. Provides financial regulatory training in Washington, D.C., and abroad for foreign banking supervisors. Office of Technical Assistance: Provides a range of training and technical assistance, including intermittent and long-term resident advisors, to senior-level representatives in various ministries and central banks on a range of areas including financial reforms related to money laundering and terrorist financing. Conducts and participates in assessments of foreign government anti-money laundering regimes for the purpose of developing technical assistance plans. Participates in U.S. interagency assessments of countries’ counter-terrorism financing and anti-money laundering capabilities. Provides technical advice and practical guidance on how the international anti-money laundering and counter-terrorist financing standards should be adopted and implemented. The following are GAO’s comments on the Department of Justice’s letter dated September 29, 2005. 1.
Justice expressed concern that the draft report does not recognize the significant role it plays in providing international training and technical assistance in the money laundering and terrorist financing areas. The report acknowledges the roles of multiple agencies, including Justice, in delivering training and technical assistance to vulnerable countries. Under the first objective, we broadly describe U.S. efforts to provide training and technical assistance to vulnerable countries and note that U.S. offices and bureaus, primarily within the departments of the Treasury, Justice, Homeland Security, and State, and the federal financial regulators, provide training and technical assistance to countries requesting assistance through various programs using a variety of methods funded primarily by State and Treasury. Moreover, appendix IV includes Table 2, which summarizes key U.S. counter-terrorism financing and anti-money laundering training and technical assistance programs for vulnerable countries and lists contributions provided by Justice, as well as other relevant agencies. 2. Justice expressed dismay that the report focuses on the interaction of State and Treasury rather than the accomplishments of the TFWG. While a number of comments suggested including information indicative of the successes of agency efforts to address terrorist financing abroad, much of this information is outside of the scope of this report. However, we have made a number of changes in response to these comments. First, we have added information on the accomplishments of U.S. agencies to the report. Second, we have adjusted our first objective to clarify that we are providing an overview of U.S. agencies’ efforts to address terrorist financing abroad. Third, as we note in other comments, we have adjusted the title of the report to better reflect the focus of our work. 3. Justice notes that the report addresses a narrower issue than the title implies. We agree.
We have revised the title of the report to focus on our key recommendation. 4. According to Justice, our report contains a critical flaw because it does not recognize Justice as a key player nor does it place Justice on equal standing with State and Treasury in the report’s recommendation and Memorandum of Agreement concerning training and technical assistance. Justice noted that it should be involved in all interagency deliberations and decisions. The report acknowledges the roles of multiple important agencies, including Justice, in delivering training and technical assistance to vulnerable countries. The report recommends that the Secretaries of State and the Treasury develop and implement an integrated strategic plan in consultation with the NSC and relevant government agencies, of which Justice is one (see comment 1). We continue to believe that the Memorandum of Agreement should be limited to the Secretaries of State and Treasury. State and Treasury both primarily fund and support U.S. government anti-money laundering and counter-terrorist financing training and technical assistance programs, and Treasury also provides considerable training and technical assistance abroad through current U.S. government employees and contractors. It is important that their programs and funding be integrated to optimize results. Other agencies are important stakeholders, as they are recipients of this funding and support and should benefit from improved coordination between these two agencies. Justice primarily receives funding from State and, according to Justice, State has been supportive of Justice’s training and technical assistance efforts. 5. Justice states that contrary to the impression conveyed in the draft, it fully respects the “honest broker role” that State plays as the TFWG coordinator.
We have added information from Justice to more accurately portray Justice’s support of State as TFWG coordinator in the Highlights page, Results in Brief, and body of the report. Justice provided information in its technical comments that we believe is important to the key findings and recommendations in this report. While we have addressed Justice’s technical comments as appropriate, we have reprinted and addressed specific technical comments below. 1. “The draft Report reflects that “Justice officials confirmed that roles and procedures [of the TFWG] were a matter of dispute.” The context suggests that DOJ [Department of Justice] does not accept the leadership of the State Department. That is not an accurate statement. DOJ strongly agrees that there needs to be a designated coordinator in this TFWG process and supports that role being given to the State Department, which has been an honest broker in the process and DOJ has abided by its procedures. DOJ agrees with the observation that the Treasury Department does not accept the State Department’s leadership or the procedures. . . .” “Justice officials confirmed that roles and procedures were a matter of dispute.” It would be more accurate to replace dispute with disagreement.” GAO response: Justice made these two comments concerning the statement in the draft report that “Justice officials confirmed that roles and procedures were a matter of dispute.” We added language to show that Justice is supportive of State’s role as coordinator of TFWG efforts and substituted the word “disagreement” for “dispute” when stating that “Justice officials confirmed that roles and procedures were a matter of disagreement.” 2.
"The draft report references that AFMLS stated that "the Department of State's leadership role is limited to its chairmanship of TFWG…" To be clear, this statement was not made to suggest that the TFWG be limited to priority countries, but rather that differing standards on procedures (particularly with DOJ leadership role in legislative drafting) for priority countries and vulnerable countries creates problems." GAO response: In response to this point, we removed the report's reference to AFMLS and noted that Justice officials told us that having procedures and practices for TFWG priority countries that differ from those for other vulnerable countries creates problems. The following are GAO's comments on the Department of State's letter dated October 3, 2005. 1. State noted in its comments that it does not believe the report accurately portrays the overall effectiveness and success of the Administration's counter-terrorism finance programs. While a number of comments suggested including information indicative of the successes of agency efforts to address terrorist financing abroad, much of this information is outside of the scope of this report. However, we have made a number of changes in response to these comments. First, we have added information on the accomplishments of U.S. agencies to the report. For example, we added information on the number of needs assessment missions conducted and the number of countries receiving training and technical assistance. Second, we have adjusted our first objective to clarify that we are providing an overview of U.S. agencies' efforts to address terrorist financing abroad. Third, as we note in other comments, we have adjusted the title of the report to better reflect the focus of our work. 2. State commented that it has an integrated strategic plan, which is evidenced through classified NSC Deputies Committee documentation and the Department of State's Office of the Coordinator for Counterterrorism's Bureau Performance Plan.
We reviewed the NSC Deputies Committee documentation, which includes minutes, an agreement, and conclusions, all of which serve as the NSC guidance for the TFWG. We also reviewed the performance plan, which includes the Office of the Coordinator for Counterterrorism's objectives and performance measures for counter-terrorist financing programs and provides some performance indicators, such as the number of assessments and training plans completed. Although some aspects of a strategic plan for delivering training and technical assistance are included in these documents, we do not agree that this guidance and performance plan includes the elements necessary to constitute an integrated strategy for the coordination of the delivery of training and technical assistance abroad. In addition to not having a fully integrated strategy on paper, the NSC guidance lacks clarity, particularly regarding coverage of non-priority countries. The guidance also lacks familiarity and clear buy-in among the pertinent levels of agencies. As a result, the documents did not guide the actions of the agencies in actual practice. 3. State commented that "if the country team, interagency and host government agree on an implementation plan, TFWG determines the necessary funding for State to obligate to each agency with the appropriate expertise." State added that it carefully monitors and can account for all of the funding Congress has appropriated for training programs coordinated through the TFWG, as provided in a classified report. Our report did not specifically address TFWG-reported obligations and expenditures, as this information focusing on priority countries was classified. Our report focused on the lack of transparency in the overall amount of funds available for all counter-terrorism training and technical assistance programs within State and the Treasury. Because funding is embedded with anti-money laundering and other programs, the U.S.
government does not have a clear presentation of the budget resources that State and Treasury allocate for training and technical assistance to counter-terrorist financing as differentiated from other programs. Although various officials told us that funding for counter-terrorism financing training and technical assistance is insufficient, the lack of a clear presentation of available budget resources makes it difficult for decision-makers to determine the actual amount that may be allocated to these efforts. 4. We do not agree with State’s comment that TFWG has been very diligent in developing methods to measure its success. As of July 2005, the U.S. government, including TFWG, did not have a system in place to measure the results of its efforts to deliver training and technical assistance and to incorporate this information into integrated planning efforts. Our report acknowledges that an interagency committee was set up to develop a system to measure results and other efforts were undertaken to track training and technical assistance; however, according to agency officials, these efforts have not yet resulted in performance measures. 5. Based on our review of NSC and other documents provided by State, the U.S. government lacks an integrated strategy to coordinate the delivery of training and technical assistance. The classified document serving as an interagency agreement lacks clarity as well as familiarity and buy-in from all agencies and levels of leadership within TFWG, particularly Treasury. The NSC guidance was agreed to at the deputy level, and we found that many of the working level staff were not familiar with the guidance or the interpretation of the guidance and Treasury staff clearly did not have the same interpretation as State staff. 6. State noted that there are established methods to resolve disputes that arise through the interagency process and it is rare that the TFWG process cannot resolve issues. 
While there are guidelines for resolving disputes, in practice there are long-standing disagreements that have not been resolved. Based on discussions with agency officials and review of documentation, our report provides examples of long-standing disagreements that have not been resolved, such as the use of contractors and procedures for conducting assessments of countries' needs for training and technical assistance. 7. State commented that it is the primary responsibility of the TFWG to coordinate all training and technical assistance and notes the existence of formal supporting documents. State commented that while it does not believe additional formal documents are necessary, if a Memorandum of Agreement concerning counter-terrorism financing and anti-money laundering training and technical assistance were to be developed, it should include all agencies involved in providing training and technical assistance. Our review as well as Treasury's technical comments clearly shows that Treasury does not accept State's position that TFWG's primary responsibility is to coordinate all counter-terrorist financing training and technical assistance abroad. Treasury limits this role to priority countries. Based on our review of NSC and other documents provided by State, the U.S. government lacks an integrated strategy to coordinate the delivery of training and technical assistance. The classified document, which according to State serves as an interagency agreement, lacks clarity, familiarity, and buy-in from all levels of leadership within TFWG, particularly Treasury. State and Treasury both fund and support U.S. government anti-money laundering and counter-terrorist financing training and technical assistance programs, and Treasury also provides considerable training and technical assistance abroad through contractors and U.S. government employees. It is important that their programs and funding are integrated to optimize results.
Other agencies are important stakeholders as they are recipients of this funding and support and would benefit from improved coordination between these two agencies. 8. State commented that our recommendation to the Secretary of the Treasury regarding Treasury's annual Terrorist Assets Report to Congress was incomplete because it made no mention of State's role in blocking assets. Specifically, we recommend that Treasury provide more complete information on the nature and extent of asset blocking in the United States in its annual Terrorist Assets Report to Congress. We did not incorporate the Secretary of State into this recommendation because the scope of our request for our third objective focused solely on the accountability issues Treasury faces in its efforts to block terrorists' assets. State also expressed disappointment that our report did not include details on State's role in terrorist designations. While our report provides an overview of how U.S. government agencies use designations to disrupt terrorist networks, we recognize that State has an important role and added language to provide more detail on State's role in targeting individuals, groups, or other entities suspected of terrorism or terrorist financing. 9. In response to agency comments, we have revised the title of the report to focus on our key recommendation. 10. The scope of our second objective was to examine U.S. efforts to coordinate the delivery of training and technical assistance to vulnerable countries. The effort does not have key stakeholder buy-in on roles and practices, a strategic alignment of resources with needs, or a system to measure performance and incorporate this information into its planning efforts. According to agency officials, the lack of effective leadership leads to less than optimal delivery of training and technical assistance to vulnerable countries. Without a system to measure performance, the U.S.
government and TFWG cannot ensure that their efforts are on track. 11. Although this report is based on unclassified information, GAO reviewed all unclassified and classified information provided by State in support of TFWG efforts. We believe that the findings, conclusions, and recommendations accurately portray the interagency process. Moreover, we reviewed and incorporated additional information provided by State subsequent to issuing our draft to the agencies for comment to ensure that all available information was assessed. The following are GAO's comments on the Department of the Treasury's letter dated October 5, 2005. 1. Treasury notes in its comments that the report falls short in describing the comprehensive efforts of the U.S. government to combat terrorist financing abroad. While a number of comments suggested including information indicative of the successes of agency efforts to address terrorist financing abroad, much of this information is outside of the scope of this report. However, we have made a number of changes in response to these comments. First, we have added information on the accomplishments of U.S. agencies to the report. For example, we added that Treasury has coordinated bilateral and international technical assistance with the FATF and the international financial institutions, such as the World Bank and International Monetary Fund, to draft legal frameworks, build necessary regulatory and institutional systems, and develop human expertise. Second, we have adjusted our first objective to clarify that we are providing an overview of U.S. agencies' efforts to address terrorist financing abroad. Third, as we note in other comments, we have adjusted the title of the report to better reflect the focus of our work. 2. Treasury suggests that the title of the draft report be modified to be consistent with the primary focus of the report. We agree and have revised the title of the report to focus on the key recommendations. 3.
Treasury states that the report does not accurately characterize Treasury's role in managing the U.S. government's relationship with international financial institutions. We recognize that Treasury plays an important role and added more examples of Treasury's relationship with international financial institutions as provided in Treasury's technical comments. For example, we added Treasury's relationship with an intergovernmental body, the Financial Action Task Force, in setting international standards for anti-money laundering and counter-terrorism financing regimes. In addition, we added mentions of Treasury's relationship with the Asian Development Bank, the IMF, and the World Bank. 4. Treasury comments that the report focuses on the difficulties and differences arising from the interagency process to coordinate training and technical assistance to combat terrorist financing abroad and fails to give due credit for the successes that have been achieved through unprecedented interagency coordination. Our report concludes that the U.S. government lacks an integrated strategy to coordinate the delivery of training and technical assistance because key stakeholders do not agree on roles and practices, there is not a clear presentation of what funding is available for counter-terrorism financing training and technical assistance, and a system has not been established to measure performance and incorporate this information into its planning efforts. Our report notes that, according to agency officials, the lack of effective leadership leads to less than optimal delivery of training and technical assistance to vulnerable countries. However, we have included some interagency accomplishments, such as numbers of assessments, in our description of training and technical assistance efforts under objective 1. To best provide evidence of the effectiveness of U.S. government efforts, the U.S.
government should continue to develop a system to measure performance and incorporate this information into its planning efforts. 5. In its comments, Treasury states that the report’s third objective on accountability issues appears somewhat incongruous in a report dedicated to U.S. counter-terrorism training and technical assistance. Our requesters asked us to address specific issues related to U.S. efforts to combat terrorist financing abroad, including accountability issues Treasury faces in its efforts to block terrorists’ assets held under U.S. jurisdiction, particularly with regard to the Treasury’s annual Terrorist Assets Reports. 6. We reviewed the revised text provided by Treasury for our report’s third objective on accountability issues the Department faces in its efforts to block terrorists’ assets held under U.S. jurisdiction. We noted that we already cover many of Treasury’s points in our report. However, in some cases we incorporated technical information to help clarify the challenges the department faces in assessing the impact of terrorist designation activities. In addition, we updated the report to reflect the most current status of Treasury’s efforts to establish performance measures for OFAC. Additionally, we acknowledge that the language in the mandate for the Terrorist Assets Reports did not explicitly design the reports as an accountability measure of the Office of Foreign Assets Control’s effectiveness in identifying and blocking terrorist assets; however, nothing in the statutory language or in the congressional intent underlying the mandate precludes Treasury from compiling and reporting information in the manner in which we have suggested in this report. Furthermore, we believe that inclusion of comparative information and additional explanation regarding significant shifts between years will enhance program reporting and congressional oversight. 
“The second paragraph of this section states, “First, although the Department of State asserts that it leads the overall effort to deliver training and technical assistance to all vulnerable countries, the Department of Treasury does not accept State in this role.” This statement should be clarified to reflect that while Treasury does acknowledge State’s role, it believes that State’s function is necessarily one of coordination. State’s role in this process is not to actually “deliver” assistance. Rather, Treasury believes that State’s role is coordinating each USG agency’s personnel and expertise to allow them to deliver the needed training in commonly agreed upon priority countries. Treasury also acknowledges that the draft report is helpful in pointing out that this coordination can and should be improved to facilitate more effective delivery of assistance in priority countries.” “The first paragraph contains the following statement “According to the Department of State, its Office of the Coordinator for Counterterrorism is charged with directing, managing, and coordinating all U.S. government agencies’ efforts to develop and provide counter-terrorism financing programs.” This statement is inaccurately overbroad, as Treasury (and likely other government agencies) has developed numerous counterterrorist financing programs to advance the core strategic aims identified in the 2003 NMLS [National Money Laundering Strategy]. It is more accurate to say that the department of State coordinates the USG provision of CFT technical assistance and training to priority countries.” “Substitute with the following language: ‘However, the TAR was not mandated or designed as an accountability measure for OFAC’s effectiveness in assisting U.S. persons in identifying and blocking assets of persons designated under relevant Executive orders relating to terrorism. 
The report, as mandated, was intended to provide only a snapshot view in time of terrorist assets held in the United States by terrorist countries and organizations.’” “Substitute with the following language: ‘OFAC officials have advised that OFAC’s new performance measures are expected to be completed by December 1, 2005, and its new strategic plan is expected to be completed by January 1, 2006.’” “In the second paragraph, the following language: “We also recommend that the Secretary of Treasury provide more complete information on the nature and extent of asset blocking in the United States in its Terrorist Assets Report to Congress and establish milestones for developing meaningful performance measures on terrorist designations and asset blocking activities…..” Should be replaced with the following language: . . . .“We also recommend Congress consider discontinuing the requirement that Treasury produce the annual Terrorist Assets Report to Congress. The report, based upon the input of numerous government agencies, provides a snapshot of the known assets held in the United States by terrorist-supporting countries and terrorist groups at a given point in time. These numbers may fluctuate during each year and between years for a number of policy-permissible reasons. The amount of assets blocked under a terrorism sanctions program is not a primary measure of a terrorism sanctions program’s effectiveness, and countries that have been declared terrorist supporting, and whose assets are not blocked by a sanctions program, are already wary of holding assets in the United States.’” GAO response: We noted Treasury’s position on this recommendation in our report. However, we continue to believe that the annual Terrorist Assets Report, with the incorporated changes, would be useful to policymakers and program managers in examining their overall achievements of the U.S. efforts to block terrorists’ assets. 
In addition to the contact named above, Christine Broderick, Assistant Director; Tracy Guerrero; Elizabeth Guran; Janet Lewis; and Kathleen Monahan made key contributions to this report. Martin de Alteriis, Mark Dowling, Jamie McDonald, and Michael Rohrback provided technical assistance. | Terrorist groups need significant amounts of money to organize, recruit, train, and equip adherents. U.S. disruption of terrorist financing can raise the costs and risks and impede their success. This report (1) provides an overview of U.S. government efforts to combat terrorist financing abroad and (2) examines U.S. government efforts to coordinate training and technical assistance. We also examined specific accountability issues the Department of the Treasury faces in its efforts to block terrorists' assets held under U.S. jurisdiction. U.S. efforts to combat terrorist financing abroad include a number of interdependent activities--terrorist designations, intelligence and law enforcement, standard setting, and training and technical assistance. First, the U.S. government designates terrorists and blocks their assets and financial transactions and supports similar efforts of other countries. Second, intelligence and law enforcement efforts include operations, investigations, and exchanging information and evidence with foreign counterparts. Third, U.S. agencies work through the United Nations and the Financial Action Task Force on Money Laundering to help set international standards to counter terrorist financing. Fourth, the U.S. government provides training and technical assistance directly to vulnerable countries and works with its allies to leverage resources. The U.S. government lacks an integrated strategy to coordinate the delivery of counter-terrorism financing training and technical assistance to countries vulnerable to terrorist financing. 
Specifically, the effort does not have key stakeholder acceptance of roles and procedures, a strategic alignment of resources with needs, or a process to measure performance. First, the Department of the Treasury does not accept the Department of State's leadership or the State-led Terrorist Financing Working Group's (TFWG) procedures for the delivery of training and technical assistance abroad. While supportive of the Department of State's role as coordinator of TFWG efforts, Department of Justice officials confirmed that roles and procedures were a matter of disagreement. Second, the U.S. government does not have a clear presentation and objective assessment of its resources and has not strategically aligned them with its needs for counter-terrorist financing training and technical assistance. Third, the U.S. government, including TFWG, lacks a system for measuring performance and incorporating results into its planning efforts. The Treasury faces two accountability issues related to its terrorist asset blocking efforts. First, Treasury's Office of Foreign Assets Control (OFAC) reports on the nature and extent of terrorists' U.S. assets do not provide Congress the ability to assess OFAC's achievements. Second, Treasury lacks meaningful performance measures to assess its terrorist designation and asset blocking efforts. OFAC is in the process of developing more meaningful performance measures, aided by its early efforts to develop an OFAC-specific strategic plan. Officials stated that OFAC's new performance measures will be completed by December 1, 2005, and its strategic plan will be completed by January 1, 2006; however, they did not provide us with documentation of milestones or completion dates. |
FFRDCs were first established during World War II to meet specialized or unique research and development needs that could not be readily satisfied by government personnel or private contractors. Additional and expanded requirements for specialized services led to increases not only in the size of the FFRDCs but also the number of FFRDCs, which peaked at 74 in 1969. Today, 8 agencies, including DOD, fund 39 FFRDCs that are operated by universities, nonprofit organizations, or private firms under long-term contracts. Federal policy allows agencies to award these contracts noncompetitively. The Office of Federal Procurement Policy within the Office of Management and Budget (OMB) establishes governmentwide policy on the use and management of FFRDCs. Within DOD, the Director of Defense Research and Engineering is responsible for developing overall policy for DOD’s 11 FFRDCs. The Director communicates DOD policy and detailed implementing guidance to FFRDC sponsors through a periodically updated management plan, and determines the funding level for each FFRDC based on the overall congressional ceiling on FFRDC funding and FFRDC requirements. Total funding for DOD’s FFRDCs was $1.25 billion in fiscal year 1995. DOD categorizes each of its FFRDCs as a systems engineering and integration center, a studies and analyses center, or a research and development laboratory. Appendix II provides information on each FFRDC, including its parent organization, primary sponsor, DOD funding, and staffing levels for fiscal year 1995. The military services and defense agencies sponsor individual FFRDCs and award and administer the 5-year contracts, typically negotiated noncompetitively, after reviewing the continued need for the FFRDC. Unlike a private contractor, an FFRDC accepts restrictions on its ability to manufacture products and compete for other government or commercial business. 
These restrictions are intended to (1) limit the potential for conflicts of interest when FFRDC staff have access to sensitive government or contractor data and (2) allow the center to form a special or strategic relationship with its DOD sponsor. Management fees are discretionary funds provided to FFRDCs in addition to reimbursement for incurred costs, and these fees are similar to the profits private contractors earn. Two issues have remained unresolved for many years: what costs the management fee should cover and how FFRDCs should use this fee. As far back as 1969, we concluded that nonprofit organizations such as FFRDCs incur some necessary costs that may not be reimbursed under the procurement regulations, and we recommended that the Bureau of the Budget (now known as OMB) develop guidance that specified the costs contracting officers should provide fees to cover. In 1993, the Office of Federal Procurement Policy agreed that governmentwide guidance on management fees for nonprofit organizations was needed, but it has not yet issued detailed guidance. In the absence of such governmentwide guidance, recurring questions continue to be raised about how FFRDCs use their fees. In its 1994 report, for example, the DOD Inspector General concluded that FFRDCs used $43 million of the $46.9 million in fiscal year 1992 DOD fees for items that should not have been funded from fees. The bulk of this $43 million funded independent research projects that should have been charged to overhead, according to the report. The remainder funded otherwise unallowable costs and future requirements, which the report concluded were not necessary for FFRDC operations. Similarly, as we recently reported, DCAA reviewed fiscal year 1993 fee expenditures at the MITRE Corporation and concluded that just 11 percent of the expenditures reviewed were ordinary and necessary to the operation of the FFRDC.
DCAA reported that MITRE used fees to pay for items such as lavish entertainment, personal expenses for company officers, and generous employee benefits. In our recent work at The Aerospace Corporation, we found that the corporation used about $11.5 million of its $15.5 million management fee for sponsored research. Aerospace used the remainder of its fee and other corporate resources for capital equipment purchases; real and leasehold property improvements; and other unreimbursed expenditures, such as contributions, personal use of company cars, conference meals, trustee expenses, and new business development expenses. DOD's action plan recommended implementation of revised guidelines for management fee. Specifically, it recommended (1) moving allowable costs out of fee and reducing fee accordingly, and (2) establishing consistent policies on ordinary and necessary costs to be funded through fee. If effectively implemented, these actions should help to resolve many of the long-standing concerns over FFRDC use of management fee. Moving FFRDC-sponsored research out of fee would result in a substantial reduction in the fee amount and should provide for more effective DOD oversight of FFRDC expenditures. This action would also subject all research to the Federal Acquisition Regulation cost principles applicable to cost-reimbursable items. Defining the ordinary and necessary expenses that may be covered by fee is a more challenging issue, which may explain why the issue has gone unresolved for so long. However, until DOD issues specific guidance regarding ordinary and necessary expenses, debate will likely continue on whether fee can be used for such things as personal expenses for company officers, entertainment, and new business development. Although DOD's action plan identifies the need for clarifying guidance, our understanding is that such guidance has not been issued.
As a robust private-sector professional services industry grew to meet the demand for technical services, it became apparent that industry had the capability to perform some tasks assigned to FFRDCs. As early as 1962, the Bell Report noted criticism that nonprofit systems engineering contractors had undertaken work traditionally done by private firms. A 1971 DOD report stated, “It is pointless to say that the [systems engineering FFRDCs’] function could not be provided by another instrumentality....” According to this report, private contractors could also do the same type of work as the studies and analyses FFRDCs. The report pointed to the flexibility of using the centers and their broad experience with sponsors’ problems as reasons for continuing their use. More recently, the DOD Inspector General concluded that FFRDC mission statements did not identify unique capabilities or expertise, resulting in FFRDCs being assigned work without adequate justification. In a 1988 report, we pointed out that governmentwide policy did not require that FFRDCs be limited to work that industry could not do; FFRDCs could also undertake tasks they could perform more effectively than industry. FFRDCs are effective, we observed, partly because of their special relationship with their sponsoring agency. This special relationship embodies elements of access and privilege as well as constraints to limit their activities to those DOD deems appropriate. In 1995, the DSB and DOD’s Action Plan elaborated on and refined the concept of the FFRDC special relationship. According to DOD, FFRDCs perform tasks that require a special or strategic relationship to exist between the task sponsor and the organization performing the task. Table 1 shows DOD’s description of the characteristics of this special relationship. 
According to the DSB, this special relationship allows an FFRDC to perform research, development, and analytical tasks that are integral to the mission and operation of the DOD sponsor. The DSB and an internal DOD advisory group concluded that there is a continuing need for certain core work that requires the special relationship previously described. DOD concluded that giving such tasks to private contractors would raise numerous concerns, including questions about potential conflicts of interest. Accordingly, DOD has defined an FFRDC’s core work as tasks that (1) are consistent with the FFRDC’s purpose, mission, capabilities, and core competencies and (2) require the FFRDC’s special relationship with its sponsor. The DOD advisory group estimated that this core work represented about 6 percent of DOD’s research, development, and analytic effort. The DSB and the DOD advisory group also concluded that FFRDCs performed some noncore work that did not require a special relationship, and they concluded that this work should be transitioned out of the FFRDCs and acquired competitively. On the basis of these conclusions, DOD directed each sponsor to review its FFRDC’s core competencies, identify and prioritize the FFRDC’s core work, and identify the noncore work that should be transitioned out of the FFRDC. The core competencies the DOD sponsors identified appear to differ little from the scope of work descriptions that were in place previously. In several cases, sponsors seem to have simply restated the functions listed in an FFRDC’s scope of work description. In other cases, the core competencies summarized the scope of work functions into more generic categories. In February 1996, the Under Secretary for Defense (Acquisition and Technology) reported that DOD sponsors had identified $43 million, or about 4 percent of FFRDC funding, in noncore work being performed at the FFRDCs. 
According to the Under Secretary, ongoing noncore work is currently being transferred out of the FFRDCs. Even though DOD states that it is important to ensure that tasks assigned to the FFRDC meet the core work criteria, we believe it will continue to be difficult to determine whether a task meets these criteria. FFRDC mission statements remain broad, and core competencies appear to differ little from the previous scope of work descriptions. As we stated in our 1988 report, the special relationship is the key to determining whether work is appropriate for an FFRDC. However, determining whether one or more of the characteristics of the special relationship is required for a task may be difficult, since the need for an element of the special relationship is normally relative rather than absolute. For example, we believe DOD would expect objectivity in any research effort, but it may be difficult to demonstrate that a particular task requires the special degree of objectivity an FFRDC is believed to provide. Uncertainty about whether an FFRDC’s special relationship allows it to perform a task more effectively than other organizations also accompanies decisions to assign work to an FFRDC. In our 1988 report, we stated that full and open competition between all relevant organizations (FFRDCs and non-FFRDCs) could provide DOD assurance that it has selected the most effective source for the work. However, exposing FFRDCs to marketplace competition would fundamentally alter the character of the special relationship. While DOD has initiated a department-wide effort to define more clearly the work its FFRDCs will perform, the criteria DOD has developed remain somewhat general. Applying these criteria requires making judgments about the relative effectiveness of various sources for work in the absence of the full information on capabilities that open competition would provide.
It is doubtful that DOD’s criteria will be satisfactory to those critics who are seeking a simple and unambiguous definition of work appropriate for FFRDCs. The question of whether accepting work from organizations other than its sponsor impairs an FFRDC’s ability to provide objective advice has long been discussed. As early as 1962, the Bell Report raised this question but noted that no clear consensus had developed as to whether concerns about diversification were well founded. The report recognized that studies and analyses FFRDCs could effectively serve multiple clients but concluded that systems engineering organizations were primarily of value when they served a single client. During the early 1970s, DOD encouraged its FFRDCs to diversify into nonsponsor work. According to a 1976 DOD report, FFRDCs that did not diversify suffered efficiency and morale problems as their organizations shrank in the face of declining DOD research and development budgets. Nonetheless, this report recommended that the systems engineering FFRDCs limit themselves to DOD work and adjust their work forces in line with changes in the DOD budget. Regarding the MITRE Corporation, the report recommended that MITRE create a separate affiliate organization to carry out its non-DOD work. In 1994, Congress raised the issue that non-FFRDC affiliate organizations resulted in “...an ambiguous legal, regulatory, organizational, and financial situation,” and directed that DOD prepare a report on non-FFRDC activities. More recently, however, the DSB concluded that FFRDCs and their parent companies should be allowed to accept work outside the core domain only when doing so was in the best interests of the country; the DSB did not propose criteria for determining when accepting nonsponsor work was in the country’s best interests. Acceptance of nonsponsor work is now common at DOD’s FFRDCs.
Except for the Institute for Defense Analyses, each parent organization performs some non-DOD work either within the FFRDC or through an affiliate organization created to pursue non-FFRDC work. Currently, six of the eight parent organizations that operate FFRDCs also operate one or more non-FFRDC affiliates. Some of these affiliates are quite small: the Center for Naval Analyses Corporation’s Institute for Public Research accounts for about 3 percent of the center’s total effort. Other affiliates are more significant: the MITRE Corporation’s two non-FFRDC affiliates account for about 11 percent of MITRE’s total effort, and the RAND Corporation’s five non-FFRDC divisions account for about 32 percent of its total effort. The Massachusetts Institute of Technology and Carnegie-Mellon University—parent organizations of the MIT Lincoln Laboratory and the Software Engineering Institute, respectively—each pursue a diverse range of non-FFRDC activities. DOD has recently become more active in seeking to oversee work its FFRDCs perform through non-FFRDC divisions. DOD sponsors have historically had the opportunity to oversee nonsponsor work performed within the FFRDC because the work is carried out under the FFRDC contracts that sponsors administer. This contract oversight mechanism is not available for non-FFRDC divisions. During 1995, for example, the Air Force expressed great reluctance to support The Aerospace Corporation’s proposal to establish a non-FFRDC affiliate, indicating that the Air Force was concerned that it could not avoid the perception of a conflict of interest. Similarly, the MITRE Corporation sought permission to create a separate corporate division to perform non-FFRDC work. Recognizing that this arrangement could create a potential for conflicts of interest, DOD required MITRE to spin off a separate corporation to carry out its non-FFRDC activities. DOD required this new corporation to have a separate board of trustees and its own corporate officers.
Further, DOD required that no work be subcontracted between the two entities to preclude the sharing of employees involved in DOD work—and knowledge developed in the course of DOD work—with the new corporation. DOD’s recent update of its action plan stated that a new policy requires the use of stringent criteria for the acceptance of work outside the core by the FFRDC’s parent corporation. According to DOD, this new policy will ensure the parent’s focus on FFRDC operations and eliminate concerns regarding “unfair advantage” in acquiring such work. Currently, DOD plans to revise its FFRDC management plan, which would provide for greater oversight of non-FFRDC affiliates at all centers. These changes would require FFRDCs to agree to conduct non-FFRDC activities only if the activities are (1) subject to sponsor review and approval, (2) in the national interest, and (3) free of real or potential conflicts of interest. Even though it endorsed the need for organizations such as FFRDCs, a DSB study recently concluded that the public mistrusted DOD’s use and oversight of FFRDCs. A principal concern, according to the study, is that DOD assigns work to FFRDCs that can be performed as effectively by private industry and acquired using competitive procurement procedures. Further, the DSB found that the lack of opportunities for public review and comment on DOD’s process for managing and assigning work to FFRDCs—opportunities that are available in the competitive contracting process—invites mistrust. To address public skepticism about DOD’s use and management of FFRDCs, the DSB recommended the creation of an independent advisory committee of highly respected personnel from outside DOD. The committee would review the continuing need for FFRDCs, FFRDC missions, and DOD’s management and oversight mechanisms for FFRDCs. DOD’s action plan also recommended the establishment of an independent advisory committee to review and advise on FFRDC management.
In late 1995, an independent advisory committee was established. The six committee members, who are either DSB members or consultants, represent both industry and government. The committee is responsible for reviewing and advising DOD on the management of its FFRDCs by providing guidelines on the appropriate scope of work, customers, organizational structure, and size of the FFRDCs; overseeing compliance with DOD’s FFRDC Management Plan; reviewing sponsors’ management of FFRDCs; reviewing the level and appropriateness of non-DOD and nonsponsor work performed by the FFRDCs; overseeing the comprehensive review process; and performing selected FFRDC program reviews. In January 1996, the advisory committee began a series of panel discussions at several FFRDCs, which were attended by DOD sponsor personnel and FFRDC officials. Representatives of our office attended the initial fact-finding meetings and observed that the panel members appeared to approach their task with the utmost seriousness and challenged conventional wisdom by asking tough questions of both DOD and FFRDC officials. The advisory group plans to produce its first report in March 1996. Mr. Chairman, this completes my statement for the record. Defense Research and Development: Fiscal Year 1993 Trustee and Advisor Costs at Federally Funded Centers (GAO/NSIAD-96-27, Dec. 26, 1995). Federal Research: Information on Fees for Selected Federally Funded Research and Development Centers (GAO/RCED-96-31FS, Dec. 8, 1995). Federally Funded R&D Centers: Use of Fee by the MITRE Corporation (GAO/NSIAD-96-26, Nov. 27, 1995). Federally Funded R&D Centers: Use of Contract Fee by The Aerospace Corporation (GAO/NSIAD-95-174, Sept. 28, 1995). Defense Research and Development: Affiliations of Fiscal Year 1993 Trustees for Federally Funded Centers (GAO/NSIAD-95-135, July 26, 1995). Department of Defense Federally Funded Research and Development Centers, Office of Technology Assessment (OTA-BP-ISS-157, June 1995).
Compensation to Presidents, Senior Executives, and Technical Staff at Federally Funded Research and Development Centers, DOD Office of the Inspector General (95-182, May 1, 1995). Comprehensive Review of the Department of Defense’s Fee-Granting Process for Federally Funded Research and Development Centers, Director for Defense Research and Engineering, May 1, 1995. The Role of Federally Funded Research and Development Centers in the Mission of the Department of Defense, Defense Science Board Task Force, April 25, 1995. Addendum to Final Audit Report on Contracting Practices for the Use and Operations of DOD-Sponsored Federally Funded Research and Development Centers, DOD Office of the Inspector General (95-048A, Apr. 19, 1995). DOD’s Federally Funded Research and Development Centers, Congressional Research Service (95-489 SPR, Apr. 13, 1995). Report on Department of Defense Federally Funded Research and Development Centers and Affiliated Organizations, Director for Defense Research and Engineering, April 3, 1995. Federally Funded R&D Centers: Executive Compensation at The Aerospace Corporation, (GAO/NSIAD-95-75, Feb. 7, 1995). Contracting Practices for the Use and Operations of DOD-Sponsored Federally Funded Research and Development Centers, DOD Office of the Inspector General (95-048, Dec. 2, 1994). Sole Source Justifications for DOD-Sponsored Federally Funded Research and Development Centers, DOD Office of the Inspector General (94-012, Nov. 4, 1993). DOD’s Federally Funded Research and Development Centers, Congressional Research Service (93-549 SPR, June 3, 1993). Inadequate Federal Oversight of Federally Funded Research and Development Centers, Subcommittee on Oversight of Government Operations, Senate Governmental Affairs Committee (102-98, July 1992). DOD’s Federally Funded Research and Development Centers, Congressional Research Service (91-378 SPR, Apr. 29, 1991). 
Competition: Issues on Establishing and Using Federally Funded Research and Development Centers (GAO/NSIAD-88-22, Mar. 7, 1988). [Table omitted: fiscal year 1995 funding, in millions of dollars, for systems engineering and integration centers, including The Aerospace Corp.] | GAO discussed the Department of Defense's (DOD) efforts to improve the management of its federally funded research and development centers (FFRDC), focusing on the: (1) guidelines to ensure that management fees paid to FFRDC are justified; (2) core work appropriate for FFRDC; (3) criteria for the acceptance of work outside of the core by FFRDC parent corporations; and (4) establishment of an independent advisory committee to review DOD management, use, and oversight of FFRDC. 
GAO noted that: (1) the DOD action plan recommended that management fees be revised to move allowable costs out of fee, reduce fees, and establish policies on ordinary and necessary costs; (2) it is difficult to determine whether tasks assigned to FFRDC meet core work criteria because the mission statements are broad and the core competencies offer little deviation from previous work descriptions; (3) six of the eight parent organizations that operate FFRDC also operate one or more non-FFRDC affiliates; and (4) DOD established an independent advisory committee to review FFRDC work, customers, and organizational structure and size, oversee FFRDC compliance with the DOD FFRDC management plan, review sponsor's management of FFRDC, determine the level and appropriateness of non-DOD and non-sponsor work performed by FFRDC, monitor the comprehensive review process, and perform selected FFRDC program reviews. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
EPA administers and oversees grants primarily through the Office of Grants and Debarment, 10 program offices in headquarters, and program offices and grants management offices in EPA’s 10 regional offices. Figure 1 shows EPA’s key offices involved in grants activities for headquarters and the regions. The management of EPA’s grants program is a cooperative effort involving the Office of Administration and Resources Management’s Office of Grants and Debarment, program offices in headquarters, and grants management and program offices in the regions. The Office of Grants and Debarment develops grant policy and guidance. It also carries out certain types of administrative and financial functions for the grants approved by the headquarters program offices, such as awarding grants and overseeing the financial management of these grants. On the programmatic side, headquarters program offices establish and implement national policies for their grant programs, and set funding priorities. They are also responsible for the technical and programmatic oversight of their grants. In the regions, grants management offices carry out certain administrative and financial functions for the grants, such as awarding grants approved by the regional program offices, while the regional program staff provide technical and programmatic oversight of their grantees. As of June 2003, 109 grants specialists in the Office of Grants and Debarment and the regional grants management offices were largely responsible for administrative and financial grant functions. Furthermore, 1,835 project officers were actively managing grants in headquarters and regional program offices. These project officers are responsible for the technical and programmatic management of grants. Unlike grant specialists, however, project officers generally have other primary responsibilities, such as using the scientific and technical expertise for which they were hired. 
In fiscal year 2002, EPA took 8,070 grant actions totaling about $4.2 billion. These awards were made to six main categories of recipients as shown in figure 2. EPA offers two types of grants—nondiscretionary and discretionary: Nondiscretionary grants support water infrastructure projects, such as the drinking water and clean water state revolving fund programs, and continuing environmental programs, such as the Clean Air Program for monitoring and enforcing Clean Air Act regulations. For these grants, Congress directs awards to one or more classes of prospective recipients who meet specific eligibility criteria; the grants are often awarded on the basis of formulas prescribed by law or agency regulation. In fiscal year 2002, EPA awarded about $3.5 billion in nondiscretionary grants. EPA has awarded these grants primarily to states or other governmental entities. Discretionary grants fund a variety of activities, such as environmental research and training. EPA has the discretion to independently determine the recipients and funding levels for grants. In fiscal year 2002, EPA awarded about $719 million in discretionary grants. EPA has awarded these grants primarily to nonprofit organizations, universities, and government entities. The grant process has the following four phases: Preaward. EPA reviews the application paperwork and makes an award decision. Award. EPA prepares the grant documents and instructs the grantee on technical requirements, and the grantee signs an agreement to comply with all requirements. Postaward. After awarding the grant, EPA provides technical assistance, oversees the work, and provides payments to the grantee; the grantee completes the work, and the project ends. Closeout of the award. EPA ensures that all technical work and administrative requirements have been completed; EPA prepares closeout documents and notifies the grantee that the grant is completed. 
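The four phases above form a strictly ordered lifecycle. The following minimal Python sketch models that ordering; the class, function, and transition rule are hypothetical illustrations of the process described in the text, not part of any EPA system.

```python
# Illustrative sketch only: the four grant phases described above as an
# ordered lifecycle. Names and the transition rule are hypothetical.
from enum import Enum

class GrantPhase(Enum):
    PREAWARD = 1   # EPA reviews the application and makes an award decision
    AWARD = 2      # EPA prepares documents; grantee agrees to requirements
    POSTAWARD = 3  # EPA assists, oversees, and pays; grantee completes work
    CLOSEOUT = 4   # EPA verifies completion and notifies the grantee

_ORDER = [GrantPhase.PREAWARD, GrantPhase.AWARD,
          GrantPhase.POSTAWARD, GrantPhase.CLOSEOUT]

def next_phase(phase):
    """Advance to the next phase; closeout is terminal."""
    i = _ORDER.index(phase)
    return _ORDER[i + 1] if i + 1 < len(_ORDER) else None

print(next_phase(GrantPhase.AWARD).name)  # POSTAWARD
print(next_phase(GrantPhase.CLOSEOUT))    # None
```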
EPA’s grantees are subject to the same type of financial management oversight as the recipients of other federal assistance. Specifically, the Single Audit Act requires grantees to have an audit of their financial statements and federal awards, or a program-specific audit, if they spend $300,000 or more in federal awards in a fiscal year. (The Office of Management and Budget, as authorized by the act, increased this amount to $500,000 in federal awards as of June 23, 2003.) Grantees submit these audits to a central clearinghouse operated by the Bureau of the Census, which then forwards the audit findings to the appropriate agency for any necessary action. However, the act does not cover all grants and all aspects of grants management and, therefore, agencies must take additional steps to ensure that federal funds are spent appropriately. In addition, EPA conducts in-depth reviews to analyze grantees’ compliance with grant regulations and specific grant requirements. Furthermore, to determine how well offices and regions oversee grantees, EPA conducts internal management reviews that address grants management. The Inspector General had previously designated EPA’s oversight of grants, including closeouts, as a material weakness—an accounting and internal control system weakness that the EPA Administrator must report to the President and Congress. EPA’s fiscal year 1999 Federal Managers’ Financial Integrity Act report indicated that this oversight material weakness had been corrected, but the Inspector General testified that the weakness continued. In 2002, the Inspector General again recommended that EPA designate grants management as a material weakness. The Office of Management and Budget (OMB) also recommended in 2002 that EPA designate grants management as a material weakness. In its fiscal year 2002 Annual Report, EPA ultimately decided to maintain this issue as an agency-level weakness, which is a lower level of risk than a material weakness. 
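The Single Audit Act threshold described above is a simple dollar test that changed on June 23, 2003. The sketch below is illustrative only; the function name and its date-based cutover are hypothetical, while the dollar amounts and effective date come from the text.

```python
# Illustrative sketch -- not EPA's or OMB's actual system. Dollar amounts
# and the June 23, 2003 effective date are from the text; the function and
# its cutover logic are hypothetical.
from datetime import date

def requires_single_audit(federal_awards_spent, fiscal_year_end):
    """Return True if spending meets the Single Audit Act threshold."""
    # OMB raised the threshold from $300,000 to $500,000 as of June 23, 2003.
    cutover = date(2003, 6, 23)
    threshold = 500_000 if fiscal_year_end >= cutover else 300_000
    return federal_awards_spent >= threshold

print(requires_single_audit(350_000, date(2002, 9, 30)))  # True (old $300,000 threshold)
print(requires_single_audit(350_000, date(2004, 9, 30)))  # False (new $500,000 threshold)
```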
EPA reached this decision because it believes its ongoing corrective action efforts will help to resolve outstanding grants management challenges. However, in adding EPA’s grants management to our list of EPA’s major management challenges in January 2003, we signaled our concern that EPA has not yet taken sufficient action to ensure that it can manage its grants effectively. We identified four key challenges that EPA continues to face in managing its grants. These challenges are (1) selecting the most qualified grant applicants, (2) effectively overseeing grantees, (3) measuring the results of grants, and (4) effectively managing grant staff and resources. In the past, EPA has taken a series of actions to address these challenges by, among other things, issuing policies on competition and oversight, conducting training for project officers and nonprofit organizations, and developing a new data system for grants management. However, these actions had mixed results because of the complexity of the problems, weaknesses in design and implementation, and insufficient management attention. EPA has not selected the most qualified applicants despite issuing a competition policy. The Federal Grant and Cooperative Agreement Act of 1977 encourages agencies to use competition in awarding grants. To encourage competition, EPA issued a grants competition policy in 1995. However, EPA’s policy did not result in meaningful competition throughout the agency, according to EPA officials. Furthermore, EPA’s own internal management reviews and a 2001 Inspector General report found that EPA has not always encouraged competition. Finally, EPA has not always engaged in widespread solicitation of its grants, which would provide greater assurance that EPA receives proposals from a variety of eligible and highly qualified applicants who otherwise may not have known about grant opportunities. EPA has not always effectively overseen grant recipients despite past actions to improve oversight. 
To address oversight problems, EPA issued a series of policies starting in 1998. However, these oversight policies have had mixed results in addressing this challenge. For example, EPA’s efforts to improve oversight included in-depth reviews of grantees but did not include a statistical approach to identifying grantees for review, standard information collection from the reviews, or a plan for analyzing the results to identify and act on systemic grants management problems. EPA, therefore, could not be assured that it was identifying and resolving grantee problems and using its resources more effectively to target its oversight efforts. EPA’s efforts to measure environmental results have not consistently ensured that grantees achieve them. Planning for grants to achieve environmental results—and measuring results—is a difficult, complex challenge. However, as we pointed out in an earlier report, it is important to measure outcomes of environmental activities rather than just the activities themselves. Identifying and measuring the outcomes of EPA’s grants will help EPA better manage for results. EPA has awarded some discretionary grants before considering how the results of the grantees’ work would contribute to achieving environmental results. EPA has also not developed environmental measures and outcomes for all of its grant programs. OMB found that four EPA grant programs lacked outcome-based measures—measures that demonstrated the impact of the programs on improving human health and the environment—and concluded that one of EPA’s major challenges was demonstrating program effectiveness in achieving public health and environmental results. Finally, EPA has not always required grantees to submit work plans that explain how a project will achieve measurable environmental results. In 2002, EPA’s Inspector General reported that EPA approved some grantees’ work plans without determining the projects’ human health and environmental outcomes. 
In fact, for almost half of the 42 discretionary grants the Inspector General reviewed, EPA did not even attempt to measure the projects’ outcomes. Instead, EPA funded grants on the basis of work plans that focused on short-term procedural results, such as meetings or conferences. In some cases, it was unclear what the grant had accomplished. In 2003, the Inspector General again found that project officers had not negotiated environmental outcomes in work plans. The Inspector General found that 42 percent of the grant work plans reviewed—both discretionary and nondiscretionary grants—lacked negotiated environmental outcomes. EPA has not always effectively managed its grants staff and resources despite some past efforts. EPA has not always appropriately allocated the workload for staff managing grants, provided them with adequate training, or held them accountable. Additionally, EPA has not always provided staff with the resources, support, and information necessary to manage the agency’s grants. To address these problems, EPA has taken a number of actions, such as conducting additional training and developing a new electronic grants management system. However, implementation weaknesses have precluded EPA from fully resolving its resource management problems. For example, EPA has not always held its staff—such as project officers—accountable for fulfilling their grants management responsibilities. According to the Inspector General and internal management reviews, EPA has not clearly defined project officers’ grants management responsibilities in their position descriptions and performance agreements. Without specific standards for grants management in performance agreements, it is difficult for EPA to hold staff accountable. It is therefore not surprising that, according to the Inspector General, project officers faced no consequences for failing to effectively perform grants management duties. 
Compounding the accountability problem, agency leadership has not always emphasized the importance of project officers’ grants management duties. EPA’s recently issued policies on competition and oversight and a 5-year grants management plan to address its long-standing grants management problems are promising and focus on the major management challenges, but these policies and plan require strengthening, enhanced accountability, and sustained commitment to succeed. EPA’s competition policy shows promise but requires a major cultural shift. In September 2002, EPA issued a policy to promote competition in grant awards by requiring that most discretionary grants be competed. The policy also promotes widespread solicitation for competed grants by establishing specific requirements for announcing funding opportunities in, for example, the Federal Register and on Web sites. This policy should encourage selection of the most qualified applicants. However, the competition policy faces implementation barriers because it represents a major cultural shift for EPA staff and managers, who have had limited experience with competition, according to EPA’s Office of Grants and Debarment. The policy requires EPA officials to take a more planned, rigorous approach to awarding grants. That is, EPA staff must determine the evaluation criteria and ranking of these criteria for a grant, develop the grant announcement, and generally publish it at least 60 days before the application deadline. Staff must also evaluate applications— potentially from a larger number of applicants than in the past—and notify applicants of their decisions. These activities will require significant planning and take more time than awarding grants noncompetitively. Oversight policy makes important improvements but requires strengthening to identify systemic problems. 
EPA’s December 2002 policy makes important improvements in oversight, but it still does not enable EPA to identify systemic problems in grants management. Specifically, the policy does not (1) incorporate a statistical approach to selecting grantees for review so EPA can project the results of the reviews to all EPA grantees, (2) require a standard reporting format for in-depth reviews so that EPA can use the information to guide its grants oversight efforts agencywide, and (3) maximize use of information in its grantee compliance database to fully identify systemic problems and then inform grants management officials about oversight areas that need to be addressed. Grants management plan will require strengthening, sustained commitment, and enhanced accountability. We believe that EPA’s grants management plan is comprehensive in that it focuses on the four major management challenges—grantee selection, oversight, environmental results, and resources—that we identified in our work. For the first time, EPA plans a coordinated, integrated approach to improving grants management. The plan is also a positive step because it (1) identifies goals, objectives, milestones, and resources to achieve the plan’s goals; (2) provides an accompanying annual tactical plan that outlines specific tasks for each goal and objective, identifies the person accountable for completing the task, and sets an expected completion date; (3) attempts to build accountability into grants management by establishing performance measures for each of the plan’s five goals; (4) recognizes the need for greater involvement of high-level officials in coordinating grants management throughout the agency by establishing a high-level grants management council to coordinate, plan, and set priorities for grants management; and (5) establishes best practices for grants management offices. 
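The statistical selection idea raised above, choosing grantees for in-depth review so that findings can be projected to the full grantee population, can be illustrated with a simple random sample. This sketch is hypothetical; the function name, identifiers, and sample size are illustrative, not EPA's actual procedure.

```python
# Hypothetical sketch of statistical grantee selection: a simple random
# sample lets review results be projected to the whole population.
# All names and the sample size below are illustrative.
import random

def select_grantees_for_review(grantee_ids, sample_size, seed=None):
    """Draw a simple random sample of grantees for in-depth review."""
    rng = random.Random(seed)
    return rng.sample(list(grantee_ids), sample_size)

grantees = [f"GRANTEE-{i:04d}" for i in range(1, 501)]  # illustrative population
selected = select_grantees_for_review(grantees, sample_size=30, seed=7)
print(len(selected))  # 30
```

Because every grantee has an equal chance of selection, estimates from the reviewed sample (for example, the share with compliance problems) generalize to all grantees, which a judgmental selection does not allow.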
According to EPA’s Assistant Administrator for Administration and Resources Management, the agency’s April 2003 5-year grants management plan is the most critical component of EPA’s efforts to improve its grants management. In addition to the goals and objectives, the plan establishes performance measures, targets, and action steps with completion dates for 2003 through 2006. EPA has already begun implementing several of the actions in the plan or meant to support the plan; these actions address previously identified problems. For example, EPA now posts its available grants on the federal grants Web site http://www.fedgrants.gov. In January 2004, EPA issued an interim policy to require that grant funding packages describe how the proposed project supports the goals of EPA’s strategic plan. Successful implementation of the new plan requires all staff—senior management, project officers, and grants specialists—to be fully committed to, and accountable for, grants management. Recognizing the importance of commitment and accountability, EPA’s 5-year grants management plan has as one of its objectives the establishment of clear lines of accountability for grants oversight. The plan, among other things, calls for (1) ensuring that performance standards established for grants specialists and project officers adequately address grants management responsibilities in 2004; (2) clarifying and defining the roles and responsibilities of senior resource officials, grant specialists, project officers, and others in 2003; and (3) analyzing project officers’ and grants specialists’ workload in 2004. In implementing this plan, however, EPA faces challenges to enhancing accountability. Although the plan calls for ensuring that project officers’ performance standards adequately address their grants management responsibilities, agencywide implementation may be difficult. 
Currently, project officers do not have uniform performance standards, according to officials in EPA’s Office of Human Resources and Organizational Services. Instead, each supervisor sets standards for each project officer, and these standards may not include grants management responsibilities. Once individual project officers’ performance standards are established for the approximately 1,800 project officers, strong support by managers at all levels, as well as regular communication on performance expectations and feedback, will be key to ensuring that staff with grants management duties successfully meet their responsibilities. Furthermore, it is difficult to implement performance standards that will hold project officers accountable for grants management because these officers have a variety of responsibilities and some project officers manage few grants, and because grants management responsibilities often fall into the category of “other duties as assigned.” Although EPA’s current performance management system can accommodate development of performance standards tailored to each project officer’s specific grants management responsibilities, the current system provides only two choices for measuring performance— satisfactory or unsatisfactory—which may make it difficult to make meaningful distinctions in performance. Such an approach may not provide enough meaningful information and dispersion in ratings to recognize and reward top performers, help everyone attain their maximum potential, and deal with poor performers. EPA will also have difficulty achieving the plan’s goals if all managers and staff are not held accountable for grants management. The plan does not call for including grants management standards in managers’ and supervisors’ agreements. In contrast, senior grants managers in the Office of Grants and Debarment as well as other Senior Executive Service managers have performance standards that address grants management responsibilities. 
However, middle-level managers and supervisors also need to be held accountable for grants management because they oversee many of the staff that have important grants management responsibilities. According to Office of Grants and Debarment officials, they are working on developing performance standards for all managers and supervisors with grants responsibilities. In November 2003, EPA asked key grants managers to review all performance standards and job descriptions for employees involved in grants management, including grants specialists, project officers, supervisors, and managers, to ensure that the complexity and extent of their grant management duties are accurately reflected. Further complicating the establishment of clear lines of accountability, the Office of Grants and Debarment does not have direct control over many of the managers and staff who perform grants management duties— particularly the approximately 1,800 project officers in headquarters and regional program offices. The division of responsibilities between the Office of Grants and Debarment and program and regional offices will continue to present a challenge to holding staff accountable and improving grants management, and will require the sustained commitment of EPA’s senior managers. If EPA is to better achieve its environmental mission, it must more effectively manage its grants—which account for more than half of its annual budget. While EPA’s new 5-year grants management plan shows promise, given EPA’s historically uneven performance in addressing its grants management challenges, congressional oversight is important to ensure that the Administrator of EPA, managers, and staff implement the plan in a sustained, coordinated fashion to meet the plan’s ambitious targets and time frames. 
To ensure that EPA’s recent efforts to address its grants management challenges are successful, in our August 2003 report, we recommended that the Administrator of EPA provide sufficient resources and commitment to meeting the agency’s grants management plan’s goals, objectives, and performance targets within the specified time frames. Furthermore, to strengthen EPA’s efforts we recommended:

- incorporating appropriate statistical techniques in selecting grantees for in-depth reviews;
- requiring EPA staff to use a standard reporting format for in-depth reviews so that the results can be entered into the grant databases and analyzed agencywide;
- developing a plan, including modifications to the grantee compliance database, to use data from its various oversight efforts—in-depth reviews, significant actions, corrective actions taken, and other compliance information—to fully identify systemic problems, inform grants management officials of areas that need to be addressed, and take corrective action as needed;
- modifying its in-depth review protocols to include questions on the status of grantees’ progress in measuring and achieving environmental outcomes;
- incorporating accountability for grants management responsibilities through performance standards that address grants management for all managers and staff in headquarters and the regions responsible for grants management, and holding managers and staff accountable for meeting these standards; and
- evaluating the promising practices identified in the report and implementing those that could potentially improve EPA grants management.

To better inform Congress about EPA’s achievements in improving grants management, we recommended that the Administrator of EPA report on the agency’s accomplishments in meeting the goals and objectives developed in the grants management plan and other actions to improve grants management, beginning with its 2003 annual report to Congress.
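The statistical-selection recommendation is essentially a call for random rather than judgmental sampling, so that findings from the reviewed grantees can be projected to the whole grantee population. A minimal sketch of that idea in Python, with hypothetical grantee IDs, sample size, and review results (none of these numbers come from the report):

```python
import math
import random

def select_review_sample(grantee_ids, sample_size, seed=2003):
    """Pick a simple random sample of grantees for in-depth review.

    Random (rather than judgmental) selection is what allows review
    results to be projected to the entire grantee population.
    """
    rng = random.Random(seed)  # fixed seed so the selection is reproducible
    return rng.sample(grantee_ids, sample_size)

def project_noncompliance(found, sample_size, population_size):
    """Point estimate and ~95% margin of error for the population rate,
    applying the finite population correction."""
    p = found / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return p, 1.96 * se * fpc

# Hypothetical population of 500 grantees; review 30 of them.
grantees = [f"G{n:04d}" for n in range(1, 501)]
sample = select_review_sample(grantees, 30)
rate, margin = project_noncompliance(found=6, sample_size=30, population_size=500)
print(f"reviewed {len(sample)} grantees; projected rate {rate:.0%} +/- {margin:.0%}")
```

Under this sketch, if 6 of 30 randomly reviewed grantees showed a problem, the projected population rate would be 20 percent, with a margin of error of roughly 14 percentage points at 95 percent confidence.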
EPA agreed with our recommendations and is in the process of implementing them as part of its 5-year grants management plan. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Committee may have. For further information, please contact John B. Stephenson at (202) 512-3841. Individuals making key contributions to this testimony were Carl Barden, Andrea W. Brown, Christopher Murray, Paul Schearf, Rebecca Shea, Carol Herrnstadt Shulman, Bruce Skud, and Amy Webbink. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Environmental Protection Agency (EPA) has long faced problems managing its grants, which constitute over one-half of the agency's annual budget, or about $4 billion. EPA uses grants to implement its programs to protect human health and the environment and awards grants to thousands of recipients, including state and local governments, tribes, universities, and nonprofit organizations. EPA's ability to efficiently and effectively accomplish its mission largely depends on how well it manages its grants resources. This testimony, based on GAO's August 2003 report Grants Management: EPA Needs to Strengthen Efforts to Address Persistent Challenges, GAO-03-846, focuses on the (1) major challenges EPA faces in managing its grants and how it has addressed these challenges in the past, and (2) extent to which EPA's recently issued policies and grants management plan address these challenges. EPA continues to face four key grants management challenges, despite past efforts to address them.
These challenges are (1) selecting the most qualified grants applicants, (2) effectively overseeing grantees, (3) measuring the results of grants, and (4) effectively managing grant staff and resources. In the past, EPA has taken a series of actions to address these challenges by, among other things, issuing policies on competition and oversight, conducting training for project officers and nonprofit organizations, and developing a new data system for grants management. However, these actions had mixed results because of the complexity of the problems, weaknesses in design and implementation, and insufficient management attention. EPA's recently issued policies and a 5-year grants management plan to address longstanding management problems show promise, but these policies and plan require strengthening, enhanced accountability, and sustained commitment to succeed. EPA's September 2002 competition policy should improve EPA's ability to select the most qualified applicants by requiring competition for more grants. However, effective implementation of the policy will require a major cultural shift for EPA managers and staff because the competitive process will require significant planning and take more time than awarding grants noncompetitively. EPA's December 2002 oversight policy makes important improvements in oversight, but it does not enable EPA to identify systemic problems in grants management. For example, the policy does not incorporate a statistical approach to selecting grantees for review so that EPA can project the results of the reviews to all EPA grantees. Issued in April 2003, EPA's 5-year grants management plan does offer, for the first time, a comprehensive road map with objectives, goals, and milestones for addressing grants management challenges. However, in implementing the plan, EPA faces challenges in holding all managers and staff accountable for successfully fulfilling their grants management responsibilities. 
Without this accountability, EPA cannot ensure the sustained commitment needed for the plan's success. While EPA has begun implementing actions in the plan, GAO believes that, given EPA's historically uneven performance in addressing its grants challenges, congressional oversight is important to ensure that EPA's Administrator, managers, and staff implement the plan in a sustained, coordinated fashion to meet the plan's ambitious targets and time frames. |
As computer technology has advanced, the federal government has become increasingly dependent on computerized information systems to carry out operations and to process, maintain, and report essential information. Federal agencies rely on computer systems to transmit proprietary and other sensitive information, develop and maintain intellectual capital, conduct operations, process business transactions, transfer funds, and deliver services. Ineffective protection of these information systems and networks can impair delivery of vital services, and result in:

- loss or theft of computer resources, assets, and funds;
- inappropriate access to and disclosure, modification, or destruction of sensitive information, such as personally identifiable information;
- disruption of essential operations supporting critical infrastructure, national defense, or emergency services;
- undermining of agency missions due to embarrassing incidents that erode the public’s confidence in government;
- use of computer resources for unauthorized purposes or to launch attacks on other systems;
- damage to networks and equipment; and
- high costs for remediation.

Recognizing the importance of these issues, Congress enacted laws intended to improve the protection of federal information and systems. These laws include the Federal Information Security Modernization Act of 2014 (FISMA), which, among other things, authorizes the Department of Homeland Security (DHS) to (1) assist the Office of Management and Budget (OMB) with overseeing and monitoring agencies’ implementation of security requirements; (2) operate the federal information security incident center; and (3) provide agencies with operational and technical assistance, such as that for continuously diagnosing and mitigating cyber threats and vulnerabilities.
The act also reiterated the 2002 FISMA requirement for the head of each agency to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency’s information or information systems. In addition, the act continues the requirement for federal agencies to develop, document, and implement an agency-wide information security program. The program is to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Risks to cyber-based assets can originate from unintentional or intentional threats. Unintentional threats can be caused by, among other things, natural disasters, defective computer or network equipment, software coding errors, and the actions of careless or poorly trained employees. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled employees and other organizational insiders, foreign nations engaged in espionage and information warfare, and terrorists. These adversaries vary in terms of their capabilities, willingness to act, and motives, which can include seeking monetary or personal gain or pursuing a political, economic, or military advantage. For example, organizational insiders can pose threats to an organization since their position within the organization often allows them to gain unrestricted access and cause damage to the targeted system, steal system data, or disclose sensitive information without authorization. The insider threat includes inappropriate actions by contractors hired by the organization, as well as careless or poorly trained employees. 
As we reported in February 2015, since fiscal year 2006, the number of information security incidents affecting systems supporting the federal government has steadily increased each year: rising from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014, an increase of 1,121 percent. Furthermore, the number of reported security incidents involving PII at federal agencies has more than doubled in recent years—from 10,481 incidents in fiscal year 2009 to 27,624 incidents in fiscal year 2014. (See fig. 1.) These incidents and others like them can adversely affect national security; damage public health and safety; and lead to inappropriate access to and disclosure, modification, or destruction of sensitive information.

Recent examples highlight the impact of such incidents:

In June 2015, the Office of Personnel Management reported that an intrusion into its systems affected the personnel records of about 4.2 million current and former federal employees. The Director stated that a separate but related incident involved the agency’s background investigation systems and compromised background investigation files for 21.5 million individuals.

In June 2015, the Commissioner of the Internal Revenue Service testified that unauthorized third parties had gained access to taxpayer information from its “Get Transcript” application. According to officials, criminals used taxpayer-specific data acquired from non-department sources to gain unauthorized access to information on approximately 100,000 tax accounts. This data included Social Security information, dates of birth, and street addresses. In an August 2015 update, the agency reported this number to be about 114,000 and that an additional 220,000 accounts had been inappropriately accessed, which brings the total to about 330,000 accounts.
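The growth figures cited above are straightforward percentage increases, and a quick arithmetic check against the reported counts confirms them:

```python
def percent_increase(old, new):
    """Percentage increase from an earlier count to a later one."""
    return (new - old) / old * 100

# Incident counts reported in this statement.
total = percent_increase(5503, 67168)   # all incidents, FY 2006 -> FY 2014
pii = percent_increase(10481, 27624)    # PII incidents, FY 2009 -> FY 2014
print(round(total))  # 1121, matching the "1,121 percent" figure
print(round(pii))    # 164, i.e., "more than doubled"
```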
In April 2015, the Department of Veterans Affairs’ Office of Inspector General reported that two contractors had improperly accessed the agency’s network from foreign countries using personally owned equipment. In February 2015, the Director of National Intelligence stated that unauthorized computer intrusions were detected in 2014 on the networks of the Office of Personnel Management and two of its contractors. The two contractors were involved in processing sensitive PII related to national security clearances for federal employees. In September 2014, a cyber intrusion into the United States Postal Service’s information systems may have compromised PII for more than 800,000 of its employees. In October 2013, a wide-scale cybersecurity breach involving a U.S. Food and Drug Administration system occurred that exposed the PII of 14,000 user accounts. Given the risks posed by cyber threats and the increasing number of incidents, it is crucial that federal agencies take appropriate steps to secure their systems and information. We and agency inspectors general have identified numerous weaknesses in protecting federal information and systems. Agencies continue to have shortcomings in assessing risks, developing and implementing security controls, and monitoring results. Specifically, for fiscal year 2014, 19 of the 24 federal agencies covered by the Chief Financial Officers Act reported that information security control deficiencies were either a material weakness or a significant deficiency in internal controls over their financial reporting. Moreover, inspectors general at 23 of the 24 agencies cited information security as a major management challenge for their agency. As we reported in September 2015, for fiscal year 2014, most of the 24 agencies had weaknesses in the five major categories of information system controls. 
These control categories are:

(1) access controls, which limit or detect access to computer resources (data, programs, equipment, and facilities), thereby protecting them against unauthorized modification, loss, and disclosure;
(2) configuration management controls, intended to prevent unauthorized changes to information system resources (for example, software programs and hardware configurations) and assure that software is current and known vulnerabilities are patched;
(3) segregation of duties, which prevents a single individual from controlling all critical stages of a process by splitting responsibilities between two or more organizational groups;
(4) contingency planning, which helps avoid significant disruptions in computer-dependent operations; and
(5) agencywide security management, which provides a framework for ensuring that risks are understood and that effective controls are selected, implemented, and operating as intended. (See fig. 2.)

Access controls: For fiscal year 2014, we, agencies, and inspectors general reported weaknesses in the electronic and physical controls to limit, prevent, or detect inappropriate access to computer resources (data, equipment, and facilities), thereby increasing their risk of unauthorized use, modification, disclosure, and loss. Access controls involve the six critical elements described in table 1. For fiscal year 2014, 12 agencies had weaknesses reported in protecting their networks and system boundaries. For example, the access control lists on one agency’s firewall did not prevent traffic coming or initiated from the public Internet protocol addresses of a contractor site and a U.S. telecom corporation from entering its network. Additionally, 20 agencies, including DHS, had weaknesses reported in their ability to appropriately identify and authenticate system users.
To illustrate, agencies had weak password controls, such as system passwords that had not been changed from easily guessable defaults or that did not expire. Eighteen agencies, including DHS, had weaknesses reported in authorization controls for fiscal year 2014. For example, one agency had not consistently or in a timely manner removed, transferred, and/or terminated employee and contractor access privileges from multiple systems. Another agency also had granted access privileges unnecessarily, which sometimes allowed users of an internal network to read and write files containing sensitive system information. In fiscal year 2014, 4 agencies had weaknesses reported in the use of encryption for protecting data. In addition, DHS and 18 other agencies had weaknesses reported in implementing an effective audit and monitoring capability. For instance, one agency did not sufficiently log security-relevant events on the servers and network devices of a key system. Moreover, 10 agencies, including DHS, had weaknesses reported in their ability to restrict physical access or harm to computer resources and protect them from unauthorized loss or impairment. For example, a contractor of an agency was granted physical access to a server room without the required approval of the office director.

Configuration management: For fiscal year 2014, 22 agencies, including DHS, had weaknesses reported in controls that are intended to ensure that only authorized and fully tested software is placed in operation, software and hardware is updated, information systems are monitored, patches are applied to these systems to protect against known vulnerabilities, and emergency changes are documented and approved. For example, 17 agencies, including DHS, had weaknesses reported with installing software patches and implementing current versions of software in a timely manner.
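The password weaknesses described above, defaults left unchanged and passwords that never expire, are exactly the kind of condition an automated account audit can flag. A minimal sketch, assuming hypothetical default-password and maximum-age policy values (a real audit would pull these from the system's account database and security policy):

```python
from datetime import date, timedelta

# Hypothetical values for illustration only.
KNOWN_DEFAULT_PASSWORDS = {"admin", "password", "changeme"}
MAX_PASSWORD_AGE = timedelta(days=90)

def audit_account(password, last_changed, today):
    """Return the policy findings for one account's password."""
    findings = []
    if password.lower() in KNOWN_DEFAULT_PASSWORDS:
        findings.append("default password still in use")
    if today - last_changed > MAX_PASSWORD_AGE:
        findings.append("password older than the maximum allowed age")
    return findings

# An account still on its default password, unchanged for months: two findings.
print(audit_account("changeme", date(2014, 1, 1), today=date(2014, 9, 1)))
```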
Segregation of duties: Fifteen agencies, including DHS, had weaknesses in controls for segregation of duties. These controls are the policies, procedures, and organizational structure that help to ensure that one individual cannot independently control all key aspects of a computer-related operation and thereby take unauthorized actions or gain unauthorized access to assets or records. For example, a developer from one agency had been authorized inappropriate access to the production environment of the agency’s system.

Continuity of operations: DHS and 17 other agencies had weaknesses reported in controls for their continuity of operations practices for fiscal year 2014. Specifically, 16 agencies did not have a comprehensive contingency plan. For example, one agency’s contingency plans had not been updated to reflect changes in the system boundaries, roles and responsibilities, and lessons learned from testing contingency plans at alternate processing and storage sites. Additionally, 15 agencies had not regularly tested their contingency plans.

Security management: For fiscal year 2014, DHS and 22 other agencies had weaknesses reported in security management, which is an underlying cause for information security weaknesses identified at federal agencies. An agencywide security program, as required by FISMA, provides a framework for assessing and managing risk, including developing and implementing security policies and procedures, conducting security awareness training, monitoring the adequacy of the entity’s computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate.

We have also identified inconsistencies with the government’s approach to cybersecurity, including the following:

Overseeing the security controls of contractors providing IT services. In August 2014, we reported that five of six agencies we reviewed were inconsistent in overseeing assessments of contractors’ implementation of security controls.
This was partly because agencies had not documented IT security procedures for effectively overseeing contractor performance. In addition, according to OMB, 16 of 24 agency inspectors general determined that their agency’s program for managing contractor systems lacked at least one required element.

Responding to cyber incidents. In April 2014, we reported that the 24 agencies did not consistently demonstrate that they had effectively responded to cyber incidents. Specifically, we estimated that agencies had not completely documented actions taken in response to detected incidents reported in fiscal year 2012 in about 65 percent of cases. In addition, the 6 agencies we reviewed had not fully developed comprehensive policies, plans, and procedures to guide their incident response activities.

Responding to breaches of PII. In December 2013, we reported that eight federal agencies had inconsistently implemented policies and procedures for responding to data breaches involving PII. In addition, OMB requirements for reporting PII-related data breaches were not always feasible or necessary. Thus, we concluded that agencies may not be consistently taking actions to limit the risk to individuals from PII-related data breaches and may be expending resources to meet OMB reporting requirements that provide little value.

Over the last several years, we and agency inspectors general have made thousands of recommendations to agencies aimed at improving their implementation of information security controls. For example, we have made about 2,000 recommendations over the last 6 years. These recommendations identify actions for agencies to take in protecting their information and systems.
To illustrate, we and inspectors general have made recommendations for agencies to correct weaknesses in controls intended to prevent, limit, and detect unauthorized access to computer resources, such as controls for protecting system boundaries, identifying and authenticating users, authorizing users to access systems, encrypting sensitive data, and auditing and monitoring activity on their systems. We have also made recommendations for agencies to implement their information security programs and protect the privacy of PII held on their systems. However, many agencies continue to have weaknesses in implementing these controls in part because many of these recommendations remain unimplemented. For example, about 42 percent of the recommendations we have made during the last 6 years remain unimplemented. Until federal agencies take actions to implement the recommendations made by us and the inspectors general, federal systems and information, as well as sensitive personal information about the public, will be at an increased risk of compromise from cyber-based attacks and other threats.

In conclusion, the dangers posed by a wide array of cyber threats facing the nation are heightened by weaknesses in the federal government’s approach to protecting its systems and information. While recent government-wide initiatives, including the 30-day Cybersecurity Sprint, hold promise for bolstering the federal cybersecurity posture, it is important to note that no single technology or set of practices is sufficient to protect against all these threats. A “defense in depth” strategy that includes well-trained personnel, effective and consistently applied processes, and appropriately implemented technologies is required. While agencies have elements of such a strategy in place, more needs to be done to fully implement it and to address existing weaknesses.
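One of the consistently applied processes such a strategy depends on is segregation of duties, discussed earlier: no single person should hold a pair of roles that together let them control every stage of an operation, such as the developer with production access cited above. Whether anyone holds an incompatible role pair can be checked mechanically. A minimal sketch with hypothetical role data (in practice the assignments would come from an identity and access management system):

```python
# Hypothetical role assignments for illustration only.
ROLE_MEMBERS = {
    "developer": {"alice", "bob"},
    "production_access": {"bob", "carol"},
    "payment_approver": {"carol", "dave"},
}

# Role pairs that no single person should hold simultaneously.
INCOMPATIBLE_PAIRS = [
    ("developer", "production_access"),
]

def segregation_violations(role_members, incompatible_pairs):
    """List (user, role_a, role_b) for each segregation-of-duties conflict."""
    violations = []
    for role_a, role_b in incompatible_pairs:
        overlap = role_members.get(role_a, set()) & role_members.get(role_b, set())
        for user in sorted(overlap):
            violations.append((user, role_a, role_b))
    return violations

print(segregation_violations(ROLE_MEMBERS, INCOMPATIBLE_PAIRS))
# [('bob', 'developer', 'production_access')]
```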
In particular, implementing our and agency inspectors general recommendations will strengthen agencies’ ability to protect their systems and information, reducing the risk of a potentially devastating cyber attack. Chairman Lankford, Chairman Perry, Ranking Members Heitkamp and Watson Coleman, and Members of the Subcommittees, this concludes my statement. I would be happy to answer your questions. If you have any questions about this statement, please contact Joel C. Willemssen, Managing Director, Information Technology Team, at (202) 512-6253 or [email protected]. Other staff members who contributed to this statement include Gregory C. Wilshusen, Director, Information Security Issues, IT, Larry Crosland (assistant director), Christopher Businsky, Nancy Glover, and Rosanna Guerrero. Critical Infrastructure Protection: Cybersecurity of the Nation’s Electricity Grid Requires Continued Attention, GAO-16-174T. Washington, D.C.: October 21, 2015. Maritime Critical Infrastructure Protection: DHS Needs to Enhance Efforts to Address Port Cybersecurity, GAO-16-116T. Washington, D.C.: October 8, 2015. Federal Information Security: Agencies Need to Correct Weaknesses and Fully Implement Security Programs, GAO-15-714. Washington, D.C.: September 29, 2015. Information Security: Cyber Threats and Data Breaches Illustrate Need for Stronger Controls across Federal Agencies. GAO-15-758T. Washington, D.C.: July 8, 2015. Cybersecurity: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies. GAO-15-725T. Washington, D.C.: June 24, 2015. Cybersecurity: Actions Needed to Address Challenges Facing Federal Systems. GAO-15-573T. Washington, D.C.: April 22, 2015. Information Security: IRS Needs to Continue Improving Controls over Financial and Taxpayer Data. GAO-15-337. Washington, D.C.: March 19, 2015. Information Security: FAA Needs to Address Weaknesses in Air Traffic Control Systems. GAO-15-221. Washington, D.C.: January 29, 2015. 
Information Security: Additional Actions Needed to Address Vulnerabilities That Put VA Data at Risk. GAO-15-220T. Washington, D.C.: November 18, 2014. Information Security: VA Needs to Address Identified Vulnerabilities. GAO-15-117. Washington, D.C.: November 13, 2014. Federal Facility Cybersecurity: DHS and GSA Should Address Cyber Risk to Building and Access Control Systems. GAO-15-6. Washington, D.C.: December 12, 2014. Consumer Financial Protection Bureau: Some Privacy and Security Procedures for Data Collections Should Continue Being Enhanced. GAO-14-758. Washington, D.C.: September 22, 2014. Healthcare.Gov: Information Security and Privacy Controls Should Be Enhanced to Address Weaknesses. GAO-14-871T. Washington, D.C.: September 18, 2014. Healthcare.Gov: Actions Needed to Address Weaknesses in Information Security and Privacy Controls. GAO-14-730. Washington, D.C.: September 16, 2014. Information Security: Agencies Need to Improve Oversight of Contractor Controls. GAO-14-612. Washington, D.C.: August 8, 2014. Information Security: FDIC Made Progress in Securing Key Financial Systems, but Weaknesses Remain. GAO-14-674. Washington, D.C.: July 17, 2014. Information Security: Additional Oversight Needed to Improve Programs at Small Agencies. GAO-14-344. Washington, D.C.: June 25, 2014. Maritime Critical Infrastructure Protection: DHS Needs to Better Address Port Cybersecurity. GAO-14-459. Washington, D.C.: June 5, 2014. Information Security: Agencies Need to Improve Cyber Incident Response Practices. GAO-14-354. Washington, D.C.: April 30, 2014. Information Security: SEC Needs to Improve Controls over Financial Systems and Data. GAO-14-419. Washington, D.C.: April 17, 2014. Information Security: IRS Needs to Address Control Weaknesses That Place Financial and Taxpayer Data at Risk. GAO-14-405. Washington, D.C.: April 8, 2014. Information Security: Federal Agencies Need to Enhance Responses to Data Breaches. GAO-14-487T. Washington, D.C.: April 2, 2014. 
Critical Infrastructure Protection: Observations on Key Factors in DHS’s Implementation of Its Partnership Model. GAO-14-464T. Washington, D.C.: March 26, 2014. Information Security: VA Needs to Address Long-Standing Challenges. GAO-14-469T. Washington, D.C.: March 25, 2014. Critical Infrastructure Protection: More Comprehensive Planning Would Enhance the Cybersecurity of Public Safety Entities’ Emerging Technology. GAO-14-125. Washington, D.C.: January 28, 2014. Computer Matching Act: OMB and Selected Agencies Need to Ensure Consistent Implementation. GAO-14-44. Washington, D.C.: January 13, 2014. Information Security: Agency Responses to Breaches of Personally Identifiable Information Need to Be More Consistent. GAO-14-34. Washington, D.C.: December 9, 2013. Federal Information Security: Mixed Progress in Implementing Program Components; Improved Metrics Needed to Measure Effectiveness. GAO-13-776. Washington, D.C.: September 26, 2013. Communications Networks: Outcome-Based Measures Would Assist DHS in Assessing Effectiveness of Cybersecurity Efforts. GAO-13-275. Washington, D.C.: April 10, 2013. Information Security: IRS Has Improved Controls but Needs to Resolve Weaknesses. GAO-13-350. Washington, D.C.: March 15, 2013. Cybersecurity: A Better Defined and Implemented National Strategy is Needed to Address Persistent Challenges. GAO-13-462T. Washington, D.C.: March 7, 2013. Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2013. Information Security: Federal Communications Commission Needs to Strengthen Controls over Enhanced Secured Network Project. GAO-13-155. Washington, D.C.: January 25, 2013. Information Security: Actions Needed by Census Bureau to Address Weaknesses. GAO-13-63. Washington, D.C.: January 22, 2013. Information Security: Better Implementation of Controls for Mobile Devices Should Be Encouraged. GAO-12-757. 
Washington, D.C.: September 18, 2012. Mobile Device Location Data: Additional Federal Actions Could Help Protect Consumer Privacy. GAO-12-903. Washington, D.C.: September 11, 2012. Medical Devices: FDA Should Expand Its Consideration of Information Security for Certain Types of Devices. GAO-12-816. Washington, D.C.: August 31, 2012. Privacy: Federal Law Should Be Updated to Address Changing Technology Landscape. GAO-12-961T. Washington, D.C.: July 31, 2012. Information Security: Environmental Protection Agency Needs to Resolve Weaknesses. GAO-12-696. Washington, D.C.: July 19, 2012. Cybersecurity: Challenges in Securing the Electricity Grid. GAO-12-926T. Washington, D.C.: July 17, 2012. Electronic Warfare: DOD Actions Needed to Strengthen Management and Oversight. GAO-12-479. Washington, D.C.: July 9, 2012. Information Security: Cyber Threats Facilitate Ability to Commit Economic Espionage. GAO-12-876T. Washington, D.C.: June 28, 2012. Prescription Drug Data: HHS Has Issued Health Privacy and Security Regulations but Needs to Improve Guidance and Oversight. GAO-12-605. Washington, D.C.: June 22, 2012. Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012. Management Report: Improvements Needed in SEC’s Internal Control and Accounting Procedure. GAO-12-424R. Washington, D.C.: April 13, 2012. IT Supply Chain: National Security-Related Agencies Need to Better Address Risks. GAO-12-361. Washington, D.C.: March 23, 2012. Information Security: IRS Needs to Further Enhance Internal Control over Financial Reporting and Taxpayer Data. GAO-12-393. Washington, D.C.: March 16, 2012. Cybersecurity: Challenges in Securing the Modernized Electricity Grid. GAO-12-507T. Washington, D.C.: February 28, 2012. Critical Infrastructure Protection: Cybersecurity Guidance is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011. Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination. 
GAO-12-8. Washington, D.C.: November 29, 2011. Information Security: Additional Guidance Needed to Address Cloud Computing Concerns. GAO-12-130T. Washington, D.C.: October 6, 2011. | Effective information security for federal computer systems and databases is essential to preventing the loss of resources; the unauthorized or inappropriate use, disclosure, or alteration of sensitive information; and the disruption of government operations. Since 1997, GAO has designated federal information security as a government-wide high-risk area, and in 2003 expanded this area to include computerized systems supporting the nation's critical infrastructure. Earlier this year, in GAO's high-risk update, the area was further expanded to include protecting the privacy of personal information that is collected, maintained, and shared by both federal and nonfederal entities. This statement summarizes threats and information security weaknesses in federal systems. In preparing this statement, GAO relied on its previously published work in this area. Federal systems face an evolving array of cyber-based threats. These threats can be unintentional—for example, from software coding errors or the actions of careless or poorly trained employees; or intentional—targeted or untargeted attacks from criminals, hackers, adversarial nations, terrorists, disgruntled employees or other organizational insiders, among others. 
These concerns are further highlighted by recent incidents involving breaches of sensitive data and the sharp increase in information security incidents reported by federal agencies over the last several years, which have risen from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014. Security control weaknesses place sensitive data at risk. GAO has identified a number of deficiencies at federal agencies that pose threats to their information and systems. For example, agencies, including the Department of Homeland Security, have weaknesses in the design and implementation of information security controls, as illustrated by the fact that 19 of the 24 agencies covered by the Chief Financial Officers Act declared cybersecurity a significant deficiency or material weakness for fiscal year 2014. In addition, most of the 24 agencies continue to have weaknesses in key controls such as those for limiting, preventing, and detecting inappropriate access to computer resources and managing the configurations of software and hardware. Until federal agencies take actions to address these weaknesses—including implementing the thousands of recommendations GAO and agency inspectors general have made—federal systems and information will be at an increased risk of compromise from cyber-based attacks and other threats. Over the past 6 years, GAO has made about 2,000 recommendations to improve information security programs and associated security controls. Agencies have implemented about 58 percent of these recommendations. Further, agency inspectors general have made a multitude of recommendations to assist their agencies. |
Under the current LCS program, two shipyards are building an equal number of two different versions of the LCS seaframe: Lockheed Martin builds the Freedom variant at Fincantieri Marinette Marine in Marinette, Wisconsin, and Austal USA builds the Independence variant in Mobile, Alabama. Table 1 shows the status of LCS seaframe acquisition, including the LCS with minor modifications, referred to as a frigate. When the Navy first conceived of the LCS in the early 2000s, the concept was that two shipbuilders would build prototypes based on commercial designs. The Navy planned to experiment with these ships to determine its preferred design variant. This experimentation strategy was subsequently abandoned. The Navy determined that, based on cost considerations, it would be impractical to have the two competing shipyards build only one or two ships and then wait for the Navy to complete the period of experimentation before awarding additional contracts. Instead, the Navy opted to continue funding additional seaframes without having completed the planned period of discovery and learning. The Navy has made several other revisions to the LCS acquisition strategy over time, some in response to direction from the Office of the Secretary of Defense. These have included changes over time regarding whether the Navy would downselect to one seaframe design. Although it might be expected that a new acquisition concept would require some adjustments over time, the LCS program has evolved significantly since it began, as shown in figure 1. As figure 1 indicates, the Navy now plans to buy LCS with minor modifications, which it refers to as frigates. This change to the acquisition strategy followed an analysis in 2014 by a Navy task force (known as the Small Surface Combatant Task Force) that was completed in response to direction from the then Secretary of Defense to identify options for a more capable small surface combatant. 
In seeking a frigate concept that would improve upon the capabilities provided by LCS, the Navy selected an LCS concept—referred to as a minor modified LCS. This concept, which Navy leadership believed would offer cost, schedule, and shipbuilding advantages, also was assessed as the least capable option considered for the LCS successor. The Navy has noted that the selected design provides some improvements, such as multi-mission and over-the-horizon missile capabilities, at a relatively lower cost than other options by leveraging the existing LCS shipyards and vendors. However, the Navy’s chosen frigate design will presumably carry forward some limitations inherent to its LCS origins, such as space limitations and equipment that has posed maintenance and logistics challenges. The Navy’s Small Surface Combatant Task Force charged with exploring alternatives to the LCS presented Navy leadership with a number of options, from which the Navy chose the option of a minor modified LCS based on cost, schedule, and industrial base stability factors. As we found in June 2016, the task force concluded that the Navy’s desired capability requirements could not be met without major modifications to an LCS design or utilizing other non-LCS designs. When presented with this conclusion, senior Navy leadership directed the task force to explore what capabilities might be more feasible on a minor modified LCS. In response to this direction, the task force created two additional LCS options with minor modifications. These options provided a multi-mission capability instead of the single-mission capability of LCS and retained the modular mission package characteristic of the LCS program (i.e., ability to more readily swap mission systems in and out). 
In developing these alternatives, the task force also found that it was feasible to permanently install an over-the-horizon missile to offer longer range surface warfare capability, plus a lightweight towed torpedo countermeasure and multi-function towed array sonar to offer some anti-submarine warfare capability. However, these improvements would still need to be augmented by an LCS surface warfare or anti-submarine warfare mission package to provide the full suite of LCS capability. The task force found that it was not technically feasible to include additional vulnerability capabilities (i.e., capabilities to improve the ship’s ability to sustain battle damage and still perform its mission) beyond adding armor protection to some vital spaces. Task force documentation also stated that in developing these alternative LCS options with minor modifications, some capabilities, like speed, had to be traded. Ultimately, the Navy chose—and the Office of the Secretary of Defense approved—a frigate concept based on a minor modified LCS, despite the task force’s findings that it was the least capable small surface combatant option considered. Navy leadership indicated this decision was based on LCS’s relatively lower cost and quicker ability to field, as well as the ability to upgrade remaining LCS and maintain stability in the LCS industrial base and vendor supply chain. In selecting the minor modified LCS concept, the Navy has made trade-offs in refining the capabilities of the frigate, prioritizing lethality and survivability improvements. The Navy noted that as part of the refinement process, the frigate program office identified additional capacity in the LCS designs that has enabled improvements to the ship’s planned capabilities. 
In particular, the Navy stated that the program office determined that full surface warfare and anti-submarine warfare capabilities could be included in baseline frigate plans, as opposed to the partial capabilities that were found to be possible by the Small Surface Combatant Task Force analysis. Table 2 presents an overview of capability changes the Navy has planned for the frigate, as compared to LCS, and the expected effect of those changes. However, we found in June 2016 that the Navy’s planned frigate upgrades will not include significant improvements in certain survivability areas. Further, the Navy sacrificed capabilities that were prioritized by fleet operators. For example, when asked in engagement sessions by the Small Surface Combatant Task Force, fleet operators consistently prioritized a range of 4,000 nautical miles, but the selected LCS concept with minor modifications was noted to have a minimum range requirement of 3,000 nautical miles. The Navy asserted that it is working with the prospective frigate shipbuilders to achieve a range more consistent with the priorities of fleet operators. The Director, Operational Test and Evaluation (DOT&E) has noted that the Navy’s proposed frigate design is not substantially different from LCS and does not add much more redundancy or greater separation of critical equipment or additional compartmentation, making the frigate likely to be less survivable than the Navy’s previous frigate class. Additionally, the Navy plans to make some similar capability improvements to existing and future LCS, narrowing the difference between LCS and the frigate. As we found in June 2016, the proposed frigate will utilize the offensive anti-submarine or surface warfare capabilities that are already part of the LCS mission packages, so while the frigate will have multi-mission capability that LCS lacks, the capabilities of the individual mission packages will be consistent with what is available for LCS. 
Though specific details are classified, the frigate’s warfighting capability differs from that of LCS in only a few areas. Since the frigate will be based on an LCS design, it will likely carry forward some LCS design limitations. For example, LCS is configured to support up to 98 personnel, including core and mission package crew and an aviation detachment. Navy officials have stated that the frigate is being designed for a crew of 130. However, given the space limitations on LCS and the fact that the frigate will be based on one of the two LCS designs, achieving this significant increase in crew size could prove challenging. Additionally, barring Navy-directed changes to key mechanical systems, the frigate will carry some of the more failure-prone LCS equipment, such as some propulsion equipment, and will likely carry some of the LCS-unique equipment that has challenged the Navy’s support and logistics chain. Current acquisition plans for the frigate require Congress and the Navy to make significant decisions and potential future commitments of about $9 billion—based on early budget estimates—without key program knowledge. The Navy plans to request authority from Congress in 2017 to use what the Navy refers to as a block buy approach for all 12 planned frigates and request funding for the lead frigate as part of the fiscal year 2018 budget request. Because of recent changes to the acquisition approach that hastened the frigate award, the decisions that Congress will be asked to make in 2017 will not be informed by realistic cost estimates or frigate-specific detail design knowledge that helps solidify cost and construction expectations. Further, Congress will not possess critical information on LCS performance in testing that would increase understanding of the operational capability of LCS, which provides the design foundation for the frigate. 
The Navy’s award decision planned for 2018 will be informed by formal cost estimate information, but like Congress, the Navy will lack detail design knowledge and have more limited information on LCS’s operational capability than would have been available for the previously planned fiscal year 2019 frigate award. Finally, the current and planned LCS construction demands at both LCS shipyards, which extend into 2021, suggest that no schedule imperative exists that would require the Navy to request or receive authority in 2017 for the frigate or to award the lead ship in 2018 as currently planned. The frigate acquisition plan has undergone notable changes since late 2015, for various reasons. As it now stands, an accelerated schedule effectively prevents the Navy from being able to provide Congress with a current, formal cost estimate for the frigate—independently completed or otherwise—before Congress is asked to make significant commitments to the program. Navy officials previously stated that the frigate is expected to cost no more than 20 percent—approximately $100 million—more per ship than the average LCS seaframes, though this was an initial estimate. However, our recent work has shown that LCS under construction have exceeded contract cost targets, with the government responsible for paying for a portion of the cost growth. Regarding expected costs for the frigate, prior LCS context is important to consider. When faced with the prospect of a downselect to one LCS variant in 2010, the two shipbuilders provided competitive pricing that propelled the Navy to continue production at both shipyards. Those prices, however, have not proven achievable. According to frigate program officials, under the current acquisition approach, the Navy will award contracts in fiscal year 2017 to each of the current LCS contractors to construct one LCS with a block buy option for 12 additional LCS—not frigates. 
Then, the Navy plans to obtain proposals for frigate-specific design changes and modifications from both LCS contractors in late 2017 that will be used to upgrade the LCS options to frigates. The Navy intends to evaluate pricing and technical factors for the proposed frigate upgrade packages and award frigate construction to one contractor based on a best value determination. This frigate downselect to one of the LCS shipyards is planned to occur in summer 2018. Figure 2 illustrates how the Navy plans to modify the fiscal year 2017 LCS contract to convert the ships in the block buy options to frigates. Navy officials explained that the frigate acquisition plan changed substantially in response to a Secretary of Defense memorandum issued in December 2015 that directed the Navy to revise its LCS and frigate acquisition plans. This included direction to reduce the total number of LCS and frigates from 52 to 40, downselect to one ship design, and award the frigate in fiscal year 2019. The Navy subsequently revised its plans to include a downselect decision, but also decided to accelerate the award of the lead frigate from fiscal year 2019 to 2018 as a replacement for awarding a single LCS in 2018. Table 3 shows the changes that have occurred since that memorandum. A consequence of the Navy’s accelerated frigate schedule is increased risk to the government because it requires a commitment to buy ships in advance of adequate knowledge—a continuation of premature commitments by the LCS program. The Navy plans to award frigate construction to one shipyard before detail design activities specific to the frigate begin, which—as we previously have found—can result in increased ship prices and reduced understanding of how design changes will affect ship construction costs. Detail design enables the shipbuilders to visualize spaces and test the design as the granularity of the design for individual units, or zones, of the ship comes into focus. 
The Navy had plans in 2015 to have each LCS shipyard conduct frigate detail design activities in fiscal year 2018. This improved understanding of the frigate design was then going to be available to support the Navy’s construction contracts to both shipyards for frigates in fiscal year 2019. However, as we noted above, the Navy changed course in response to direction from the Secretary of Defense and currently plans for a downselect award in 2018. The reduced contract award timeline led the Navy to abandon its plans to conduct detail design activities before contract award; the current plan is to begin detail design after the frigate downselect award and complete design activities before beginning construction. The Navy has noted that LCS’s design is already complete and many areas of the frigate will be common to LCS—greater than 60 percent according to the frigate program office. However, with no detail design activities specific to the frigate upgrades planned until after the frigate shipbuilder is chosen by the Navy, the procurement activities—including shipbuilder proposal development, the Navy’s completion of a construction cost estimate, and finalization of the target cost for constructing the lead frigate—will not be informed by a more complete understanding of the frigate-specific design. Our work on best practices for program cost estimates has found that over time, cost estimates become more certain as a program progresses—as costs are better understood and program risks identified. Further, we found in August 2016 that even Navy shipbuilders acknowledged the benefits of having detail design knowledge available to inform decisions. 
Specifically, the two shipbuilders for the Navy’s newest configuration of the Arleigh Burke class destroyers—DDG 51 Flight III—agreed that allowing more time for the design to mature, via detail design, would provide greater confidence in their understanding of the Flight III-specific design changes and how the changes will affect ship construction costs. By completing more detail design activities prior to procuring a ship, the Navy—and shipbuilders—are better positioned for procurement and construction. We also found in June 2016 and February 2005 that awarding a contract before detail design is completed—though common in Navy ship acquisitions—has resulted in increased ship prices. For example, the Navy negotiated target prices for construction of the lead San Antonio class ship (LPD 17) and the first two follow-on ships (LPD 18 and LPD 19) before detail design even began, preventing the Navy from leveraging information that would be gained during detail design when negotiating target prices for these three ships. In contrast, the Navy’s Virginia class and Columbia class submarine programs had or planned to have a high level of design complete prior to the award of the lead ship construction contract, thus enabling the government to benefit from the knowledge gained from detail design in negotiating prices for construction. Along with a shift away from detail design activities prior to the frigate award and a shortened time frame before the award, the Navy moved away from its planned government-driven design process to a less prescriptive contractor-driven design process, adding potential risk. This approach is similar to what the Navy used for the original LCS program, whereby the shipyards were given performance specifications and requirements and systems that would be provided by the government, but then selected the design and systems that they determined were best suited to fit their designs in a producible manner. 
Program officials told us that this new approach should yield efficiencies; however, history from LCS raises concern that this approach for the frigate similarly could lead to the ships having some non-standard equipment, with less commonality with LCS and the rest of the Navy’s ships. In addition to the prevailing cost and design unknowns that pose risk to the Navy’s accelerated frigate acquisition plans, uncertainties remain regarding the operational capabilities of LCS that are relevant to the frigate. Some testing of operational capability already has been performed for LCS seaframes and the surface warfare mission package; however, the Navy does not plan to demonstrate operational capability in initial operational test and evaluation for the final surface warfare mission package until 2018 or demonstrate operational capability through initial operational test and evaluation for the anti-submarine warfare mission package until 2019. Additionally, the Navy has not demonstrated that LCS will achieve its survivability requirements—the LCS program office is planning for the final survivability assessment report to be completed in fiscal year 2018. While preliminary results from full ship shock trials in 2016—live fire testing of the survivability of LCS and its subsystems against underwater shocks (i.e., explosions)—suggest some positive findings, DOT&E continues to have questions about LCS’s survivability against more significant underwater shocks. Comprehensive reporting on the results of shock testing is not expected until later in 2017, which should provide a better understanding of any issues with the seaframes’ response to underwater shock that have implications for the frigate design. In addition to shock trials, both LCS variants sustained some damage in trials completed in rough sea conditions. 
Although the Navy indicated that the results of these trials have been incorporated into the structural design of both prospective frigate variants, the Navy has not completed its analytical reports of these events. Results from air defense testing also indicate capability concerns, and both seaframe variants were found to have significant reliability and maintainability issues during several tests and trials. Further, DOT&E has expressed concern that LCS effectiveness with its mission packages remains undemonstrated, which means questions persist about the LCS’s ability to perform many of its missions. These unknowns, in turn, will be carried over to the frigate program until the mission package capabilities that will also be employed by the frigate are fully demonstrated on the LCS. DOD has made some progress with the frigate acquisition approach over the last year that is consistent with a recommendation we made in June 2016. Specifically, we recommended that the Secretary of Defense require that before a downselect decision is made for the frigate, the program must submit appropriate milestone documentation, such as an independent cost estimate and a plan to incorporate the frigate into DOD’s Selected Acquisition Reports that are provided to Congress. The frigate’s requirements have been finalized, with Joint Requirements Oversight Council approval received for its capabilities development document in 2016, and the Navy is in the process of establishing a service cost position. DOD’s Office of Cost Assessment and Program Evaluation also plans to complete an independent cost estimate in fiscal year 2018. Still, if current acquisition plans hold, the Navy will ask Congress to consider authorizing what the Navy calls a block buy of 12 frigates and funding the lead frigate when the fiscal year 2018 budget is proposed. This authorization decision involves potential future commitments of about $9 billion based on early budget estimates. 
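The scale of that commitment can be put in rough per-ship terms. The following sketch uses only the approximate figures cited in this report (the ~$9 billion block buy total, the 12-ship quantity, and the officials' initial "20 percent, or about $100 million, per ship" premium over the average LCS seaframe); these are early, informal estimates, not negotiated prices:

```python
# Back-of-the-envelope arithmetic from the approximate figures cited above.
# All inputs are early, informal estimates, not negotiated contract prices.

block_buy_total = 9_000_000_000  # ~$9 billion for the 12-frigate block buy
frigate_count = 12

# Implied average commitment per frigate under the block buy
per_frigate = block_buy_total / frigate_count
print(f"Implied commitment per frigate: ${per_frigate:,.0f}")
# → Implied commitment per frigate: $750,000,000

# Navy officials' initial estimate: no more than 20 percent (about
# $100 million) more per ship than the average LCS seaframe, which
# implies an average LCS seaframe price of roughly $500 million.
implied_lcs_average = 100_000_000 / 0.20
print(f"Implied average LCS seaframe price: ${implied_lcs_average:,.0f}")
# → Implied average LCS seaframe price: $500,000,000
```

The point of the arithmetic is simply that the implied per-ship commitment is large relative to the LCS baseline, while the actual upgrade cost, to be proposed by the shipyards later, remains the significant unknown.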
As indicated in table 4, the Navy’s request for authority from Congress appears premature, since significant uncertainties will remain for the cost and design changes needed to turn an LCS into a frigate, and relevant questions regarding LCS operational capability will remain unresolved. For example, under the Navy’s current plans, no formal cost estimate is expected to be completed before Congress is asked to make such a decision. Our prior work on best practices in weapon system acquisition has emphasized the importance of attaining key knowledge regarding cost, design, and capability expectations before making major commitments. While a block buy contracting approach may provide cost savings and other benefits for an acquisition program, it also may present challenges, such as reduced funding flexibility. For example, the LCS block buy contracts provide that a failure to fully fund the purchase of a ship in a given year would make the contract subject to renegotiation. DOD has pointed to this as a risk that the contractors would demand higher prices if DOD deviated from the agreed to block buy plan. Thus, once the frigate block buy contract is authorized and funded, DOD and Congress may once again have a notable disincentive to take any action that might delay procurement. This has been the case with LCS, even when it became apparent that the program was underperforming. The existing and planned LCS construction workloads at both shipyards suggest that a request in 2017 to authorize the frigate (with the fiscal year 2018 budget request) may not only be premature, but also unnecessary. Although the Navy has argued that pausing LCS production would result in loss of production work and start-up delays to the frigate program, current schedule delays for LCS under construction and the projected schedules for the yet-to-be-awarded LCS show that both shipyards have substantial workloads remaining that could offset the need to award the frigate in 2018 as planned. 
The Navy’s concern about shipyard workload also does not account for the possibility of continued delays in the delivery of LCS. Deliveries of almost all LCS under contract at both shipyards (LCS 5-26) have been delayed by several months, and, in some cases, close to a year or longer. Despite having had 5 years of LCS construction to help stabilize ship delivery expectations, the program did not deliver four LCS in fiscal year 2016 as planned. As figure 3 depicts, delays that have occurred for previously funded ships have resulted in a construction workload that extends into fiscal year 2020. This prolonged workload, when combined with the two LCS awarded in 2016 and two more LCS that have been authorized by congressional conferees and the Navy plans to award in fiscal year 2017, takes construction at both shipyards into 2021. With 13 LCS in various phases of construction (LCS 9, 11-22) and 3 more (LCS 23, 24, and 26) set to begin construction later in fiscal year 2017, delaying a decision on the frigate until fiscal year 2019 would enable the Navy and the shipbuilders to improve knowledge on cost, design, and operational capability of LCS that relates directly to the frigate. This, in turn, would offer Congress an opportunity to be better informed on the expectations for the frigate before committing substantial taxpayer funds to this program. 
The Navy’s impending fiscal year 2018 budget request presents a key opportunity for Congress to affect the way forward for the frigate program by ensuring the Navy possesses sufficient knowledge on cost, design, and capability before authorizing a potential $9 billion investment in a program that has no current formal cost estimate—independent or otherwise; will not have begun key detail design activities; has significant unknowns regarding the operational performance of the ship upon which it will be based; and, given the existing and planned shipyard workloads, has no industrial base imperative to begin construction in the Navy’s planned time frame. The block buy pricing the Navy expects to receive from LCS contractors in 2017 will be for the basic LCS seaframes that the Navy has acknowledged do not meet its needs. As we stated above, the two LCS shipbuilders—when faced with the prospect of a downselect in 2010—provided competitive pricing that propelled the Navy to continue production at both shipyards. Those prices have not been shown to be achievable. Even if the LCS prices offered once again appear favorable, the ships ultimately are intended to be frigates, and the upgrade cost—to be proposed by the shipyards later—is a significant unknown. A decision by Congress to authorize the block buy of 12 frigates is effectively the final decision for the entire planned buy of 40 LCS and frigates. According to the Navy’s approved acquisition strategy, the frigates would still require annual appropriations and Congress could thus conduct oversight of the program through that process; however, it will likely be more difficult to make decisions to reduce or delay the program should that become warranted, as the Navy may point to losses in favorable block buy prices, as has been done previously with LCS. 
We recognize that the Navy had to revise its frigate acquisition plans based on the Secretary of Defense’s direction to reduce quantities and select a single ship design. However, the direction did not necessitate an acceleration of the frigate procurement and the corresponding shift away from a planned approach that would have provided substantially improved cost, design, and capability information to inform the frigate acquisition decisions. Reverting to a frigate award in fiscal year 2019 would provide time to complete realistic cost estimates, build detail design knowledge, and make significant progress in understanding the operational capability and limitations of LCS, upon which the frigate design will be based. To ensure sound frigate procurement decisions, Congress should consider not enacting authority pursuant to the Navy’s request for a block buy of 12 frigates in the fiscal year 2018 budget and consider delaying funding of the lead frigate until at least fiscal year 2019 when sufficient cost, design, and capability knowledge is expected to be available to inform decisions. To ensure the department and the shipbuilders have sufficient knowledge of the frigate’s anticipated cost and design during the procurement process, the Secretary of Defense should direct the Secretary of the Navy to delay frigate procurement plans and the award of the lead frigate contract until at least fiscal year 2019 when cost estimates will be completed, detail design could be underway, and significant progress will have been made in demonstrating through testing the operational capabilities of LCS that are relevant to the frigate. We provided a draft of this report to DOD for review and comment. Its written comments are reprinted in appendix I of this report. DOD partially concurred with our recommendation to delay procurement plans and the award of the lead frigate contract until sufficient cost, design, and capability knowledge is available to inform decisions. 
In its response, DOD acknowledged that the Navy’s final contract decision includes risks, but stated that it believes the current plan offers an acceptable tradeoff between technical and affordability risks. DOD highlighted two actions that it believes will allow the department to assess program risk before moving forward: (1) annual frigate program review activities in 2017 intended to ensure risks are understood prior to the release of the formal frigate request for proposals, and (2) the planned completion of an independent cost estimate in fiscal year 2018 by the Office of Cost Assessment and Program Evaluation, which is expected to inform a 2018 annual program review prior to a contract award. While these are positive oversight actions, the assessments of design risk and maturity for these reviews will lack any frigate-specific detail design information, which leads us to maintain that waiting until at least fiscal year 2019 to procure the first ship and to make decisions on future frigate procurements would provide DOD and Congressional decision-makers with a more comprehensive understanding of frigate cost, design, and capability expectations before making substantial commitments to the program. This lack of knowledge, coupled with the ongoing and planned LCS construction workload at both shipyards, presents, in our view, a compelling rationale for delaying a frigate decision. DOD also separately provided technical comments on our draft report. We incorporated the comments as appropriate, such as to provide additional context in the report. In doing so, we found that the findings and message of our report remained the same. In some cases, the department’s suggestions or deletions were not supported by the preponderance of evidence or were based on a difference of opinion, rather than fact. In those instances, we did not make the suggested changes. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Littoral Combat Ship and Frigate: Slowing Planned Frigate Acquisition Would Enable Better-Informed Decisions. GAO-17-279T. Washington, D.C.: December 8, 2016.
Littoral Combat Ship and Frigate: Congress Faced with Critical Acquisition Decisions. GAO-17-262T. Washington, D.C.: December 1, 2016.
Littoral Combat Ship: Need to Address Fundamental Weaknesses in LCS and Frigate Acquisition Strategies. GAO-16-356. Washington, D.C.: June 9, 2016.
Littoral Combat Ship: Knowledge of Survivability and Lethality Capabilities Needed Prior to Making Major Funding Decisions. GAO-16-201. Washington, D.C.: December 18, 2015.
Littoral Combat Ship: Navy Complied with Regulations in Accepting Two Lead Ships, but Quality Problems Persisted after Delivery. GAO-14-827. Washington, D.C.: September 25, 2014.
Littoral Combat Ship: Additional Testing and Improved Weight Management Needed Prior to Further Investments. GAO-14-749. Washington, D.C.: July 30, 2014.
Littoral Combat Ship: Deployment of USS Freedom Revealed Risks in Implementing Operational Concepts and Uncertain Costs. GAO-14-447. Washington, D.C.: July 8, 2014.
Navy Shipbuilding: Opportunities Exist to Improve Practices Affecting Quality. GAO-14-122. Washington, D.C.: November 19, 2013.
Navy Shipbuilding: Significant Investments in the Littoral Combat Ship Continue Amid Substantial Unknowns about Capabilities, Use, and Cost. GAO-13-738T. Washington, D.C.: July 25, 2013.
Navy Shipbuilding: Significant Investments in the Littoral Combat Ship Continue Amid Substantial Unknowns about Capabilities, Use, and Cost. GAO-13-530. Washington, D.C.: July 22, 2013.
Defense Acquisitions: Realizing Savings under Different Littoral Combat Ship Acquisition Strategies Depends on Successful Management of Risks. GAO-11-277T. Washington, D.C.: December 14, 2010.
National Defense: Navy’s Proposed Dual Award Acquisition Strategy for the Littoral Combat Ship Program. GAO-11-249R. Washington, D.C.: December 8, 2010.
Defense Acquisitions: Navy’s Ability to Overcome Challenges Facing the Littoral Combat Ship Will Determine Eventual Capabilities. GAO-10-523. Washington, D.C.: August 31, 2010.
Littoral Combat Ship: Actions Needed to Improve Operating Cost Estimates and Mitigate Risks in Implementing New Concepts. GAO-10-257. Washington, D.C.: February 2, 2010.
Best Practices: High Levels of Knowledge at Key Points Differentiate Commercial Shipbuilding from Navy Shipbuilding. GAO-09-322. Washington, D.C.: May 13, 2009.
Defense Acquisitions: Overcoming Challenges Key to Capitalizing on Mine Countermeasures Capabilities. GAO-08-13. Washington, D.C.: October 12, 2007.
Defense Acquisitions: Plans Need to Allow Enough Time to Demonstrate Capability of First Littoral Combat Ships. GAO-05-255. Washington, D.C.: March 1, 2005.

Michele Mackin at (202) 512-4841 or [email protected]. In addition to the contact above, Diana Moldafsky, Assistant Director; Pete Anderson; Jacob Leon Beier; Laurier Fish; Kristine Hassinger; C. James Madar; Sean Merrill; LeAnna Parkey; Anne Stevens; and Robin Wilson made key contributions to this report. | The Navy envisioned a revolutionary approach for the LCS program: dual ship designs with interchangeable mission packages intended to provide mission flexibility.
This approach has fallen short, with significant cost increases, schedule delays, and reduced capabilities—some of which have yet to be demonstrated. The LCS acquisition approach has changed several times. The latest change led to the frigate—a ship that involves minor modifications to an LCS design. House Report 114-537, accompanying the National Defense Authorization Act for Fiscal Year 2017, included a provision for GAO to examine the Navy's plans for the frigate. This report examines the Navy's plans for the frigate acquisition as well as remaining opportunities for oversight. To conduct this work, GAO reviewed documentation, interviewed Department of Defense (DOD) officials, and leveraged prior GAO reports on shipbuilding and acquisition best practices. The Navy's current acquisition approach for its new frigate—a ship based on a Littoral Combat Ship (LCS) design with minor modifications—requires Congress to make significant program decisions and commitments in 2017 without key cost, design, and capability knowledge. In particular, the Navy plans to request authority from Congress in 2017 to pursue what the Navy calls a block buy of 12 planned frigates and funding for the lead ship, which the Navy intends to award in 2018. Approval of these plans would effectively represent the final decision for the entire planned buy of 40 LCS and frigates. According to the Navy's approved acquisition strategy, the frigates would still require annual appropriations, so Congress would maintain its oversight through its annual appropriation decisions; however, any decision to reduce or delay the program, should that become warranted, could nevertheless be more difficult as the Navy may point to losses in favorable block buy prices, as has been done previously with LCS.
The Navy's impending request presents a key opportunity for Congress to affect the way forward for the frigate program by ensuring the Navy possesses sufficient knowledge on cost, design, and capability before authorizing an investment of a potential $9 billion for a program that
• has no current formal cost estimate—independent or otherwise,
• will not begin key detail design activities until late fiscal year 2018,
• has significant unknowns in regards to operational performance of the ship upon which its design will be based, and
• based on the existing and planned shipyard workloads, has no industrial base imperative to begin construction in the Navy's planned time frame.

The Navy's previous frigate acquisition plans included achieving a higher degree of ship design knowledge before awarding the lead ship in fiscal year 2019, as the plans included significant detail design activities prior to contract award. As GAO has previously found, such an approach—which has been supported by shipbuilders—offers greater confidence in the understanding of design changes and how they will affect ship construction costs. Further, as GAO's work on best practices for program cost estimates suggests, the Navy's prior plans for frigate design efforts and an award in fiscal year 2019 would have provided more information on which to base a decision, including a better understanding of risks and costs. The previous plans also better aligned with LCS test plans to improve the department's understanding of the operational capability and limitations for each ship variant. This knowledge could then be used to inform the Navy's decision on which LCS-based design for the frigate it will pursue. In addition to the valuable knowledge to be gained by not pursuing the frigate in the planned 2018 time frame, the existing and planned LCS construction workload for both shipyards is another important factor to consider.
Specifically, each shipyard has LCS construction demands that extend into 2021, suggesting no imperative for the Navy to award the frigate in 2018. Delaying the frigate award until at least fiscal year 2019—when more is known about cost, design, and capabilities—would enable better-informed decisions and oversight for this potential $9 billion taxpayer investment. Congress should consider not enacting authority pursuant to the Navy's request for a block buy of 12 frigates in fiscal year 2018 and delaying funding of the lead frigate until at least fiscal year 2019, when more information is available on the ship's cost, design, and capabilities. GAO also recommends that DOD delay its procurement plans until sufficient knowledge is attained. DOD partially concurred with the recommendation but is not planning to delay frigate procurement. GAO continues to believe the recommendation is valid. |
DLA is DOD’s combat support agency under the supervision, direction, authority, and control of the Under Secretary of Defense for Acquisition, Technology, and Logistics. DLA’s mission is to provide best-value logistics support to America’s armed forces, in peace and in war, around the clock, and around the world. In carrying out its mission, DLA manages inventory valued at about $83 billion, consisting of more than 5 million consumable (expendable) items, including commodities such as fuel, food, clothing and other textiles, medical supplies, industrial use items, and spare and repair parts supporting over 1,400 weapon systems. DLA also buys and distributes hardware and electronic items that are used in maintenance and repair of equipment and weapons systems. In fiscal years 2002 and 2003, DLA expenditures related to sales and services amounted to over $46.5 billion, including about $36 billion for commodity purchases and about $600 million for DRMS excess property disposal services. DLA and DRMS operate under the Defense-wide Working Capital Fund. DLA is financed through user charges to cover costs, and DRMS is financed through user charges and excess property and scrap sale proceeds. DLA activities related to this report fall into two main areas: (1) commodity acquisition and management and (2) excess property disposals by DRMS and DLA- managed supply distribution depots (referred to as DLA supply depots). DLA commodity acquisition and management functions discussed in this report are carried out by three Defense supply centers, which are located in Columbus, Ohio; Richmond, Virginia; and Philadelphia, Pennsylvania. The DLA acquisition process focuses on (1) the acquisition of inventory requisitioned by customers for immediate use and (2) routine inventory replenishment. Defense Supply Center item managers initiate commodity procurements based on military unit requirements for materiel and supplies and military unit requisitions (supply orders). 
Supply center item managers consolidate the requirements and work with buyers to procure requested items. Items for which there are immediate needs are delivered directly to a military unit by the commercial vendor, and items needed to support anticipated operations (referred to as the requirements objective) are stored at DLA supply depots for later issue. The DLA Defense Distribution Center uses a total of 26 DLA supply depots located throughout the United States and Europe, as well as in Guam and Kuwait, to store commodities and other items that are classified by over 5 million different national stock numbers (NSN). This inventory includes commodities, such as clothing and other textiles; electronics; industrial, general, and construction supplies; subsistence items; and medical supplies and equipment. Figure 1 illustrates the DLA commodity acquisition and distribution process. When there is an urgent customer requirement and items are on back order, DLA item managers or expediters may check DRMS excess property inventory and service-level inventory to locate available items to fill an order. The Federal Property and Administrative Services Act of 1949, as amended, places responsibility for the disposition of surplus government real and personal property with the General Services Administration (GSA), which has delegated responsibility for disposal of DOD property to the Secretary of Defense. In accordance with federal regulations governing property management and department policy in DOD 4160.21-M, Defense Materiel Disposition Manual, DOD agencies and military services are responsible for determining whether property they hold is considered excess. Federal regulations also require executive agencies to ensure that personal property not needed by their activity is offered for use elsewhere within the agency.
In accordance with federal regulations, DOD 4160.21-M, chapter 5, calls for reutilization of excess property to the extent feasible to fill existing needs and to satisfy additional needs before initiating new procurement or repair. All DOD activities are required to screen available excess assets to identify items that could satisfy valid needs, and the military services have programs for reutilizing property by redistributing excess property across their units to meet ongoing operational needs. DLA has overall responsibility for property that is excess to military and DOD units. DLA has placed responsibility for excess property disposals with DRMS. When a military service or DOD agency has property that it no longer needs, it turns the property over to a DRMS field warehouse location, or reutilization facility, referred to as a Defense Reutilization and Marketing Office (DRMO). During fiscal year 2004, DRMS managed 93 DRMOs (39 central DRMOs and 54 satellite DRMOs), as well as 35 receipt-in-place locations referred to as RIPLs. Reported excess property turn-ins are entered into the DRMS Automated Information System (DAISY). DRMS then posts descriptive information about the excess property to a Web page that lists property that is available for reutilization by DOD units and specially designated programs, transfer to federal agencies, and donation to states. DRMS has two organizational elements that manage and oversee excess property disposals. DRMS National is responsible for daily operations inside the continental United States. DRMS International is responsible for daily DRMS activities located outside the continental United States. DRMS International has field offices in Belgium, Germany, Guam, Hawaii, Italy, Japan, Korea, Portugal, Spain, Thailand, Turkey, the United Arab Emirates, and the United Kingdom, and it supports the task force in the Balkans.
During fiscal years 2002 and 2003, the military services, DLA supply depots, and DOD agencies turned in excess commodities with a reported acquisition value of approximately $31 billion and disposed of excess property valued at $18.6 billion. This property included everything from office equipment, medical supplies, and clothing to scrap from naval ships, military equipment, and hazardous materials. The condition of the property ranges from well-used or damaged property that has little value to new, unused items that sometimes are still in the original manufacturer’s packaging. DRMS bills DOD units and other federal agencies for disposal services based on turn-in volume; the military services and other DOD agencies are billed a prorated amount for disposal costs net of scrap and liquidation sale proceeds. Table 1 shows DRMS’s reported revenue for excess property disposal services, including billings to the military services. Turn-ins of excess property are reported on DOD Form 1348, Disposal Turn-in Document, using a hard copy form that accompanies physical turn-ins of property at DRMOs or electronic reporting. In accordance with DOD 4160.21-M, Materiel Disposition Manual, upon arrival at a DRMO, excess items are to be inspected and the item descriptions, quantities, condition codes, and demilitarization codes are to be verified. Based on the item type and condition, a decision is made as to whether the item should be made available for reutilization. For excess property in new, usable, or repairable condition, redistribution from one DOD unit to another allows the government to make full use of its resources, avoids unnecessary procurement of property, and results in economy and efficiency of operations. Transfers and donations of excess DOD property to special programs, federal agencies, and states help to conserve their budgetary resources. Unusable items are generally sold as scrap.
Department policy in DOD 4160.21-M-1, Defense Demilitarization Manual, calls for identifying and controlling items that have a significant military or commercial technology application to prevent improper use or release of these items outside of DOD. DOD’s Demilitarization Manual establishes specific codes that are designed to indicate whether DOD property is available for reuse without restriction or whether specific restrictions apply, such as removal of classified components, destruction of sensitive military technology, or trade security control. Any residual excess property that is not reused, transferred, or donated may be sold as scrap or sent to a landfill or other appropriate site for final disposal. Figure 2 illustrates the excess property turn-in and disposal process. Excess DOD property is available for reutilization, transfer, and donation during a 49-day screening period following turn-in to DRMS. It may take up to a week to record excess property receipts into DRMO inventory. Once excess property receipts are recorded, DOD units and specially designated programs may screen for and select items for reutilization. Special programs consist of entities that directly support DOD’s mission, customers that have statutory authorization to receive excess DOD property, and customers that have been specially designated by DOD to receive excess property items. Special programs share screening priority with DOD, and DRMS accounts for special program requisitions of DOD excess property as DOD reutilization. A description of the special programs is included in appendix IV. If excess property is still available after the DOD and special program screening period (the end of the first 21 days), the property is made available for transfer to other federal agencies through the GSA Federal Disposal System (FEDS) Web site known as GSAXcess for a 21-day period. Excess DOD property is available to DOD agencies during the GSA federal agency screening phase. 
DOD entities and others can specify their excess property needs on a “want list” and DAISY and GSA FEDS will send notices when such property becomes available. Property that is not reutilized by DOD or transferred to federal agencies after 42 days is considered surplus to the federal government and can be donated to state and local governments and other qualified organizations, or if not donated, it can be sold to the public after the 49-day screening period has expired. Government Liquidation, LLC is the DRMS commercial venture partner (contractor) for liquidation sales of excess property. Excess property at DRMOs is transferred to a liquidation contract sales site co-located with a DRMO. DLA supply depot excess property to be sold to the public is sent to one of two national liquidation sales locations. DLA supply depots located west of the Mississippi ship their excess property to the Huntsville, Alabama, liquidation sales location, and DLA supply depots located east of the Mississippi ship their excess property to the Norfolk, Virginia, liquidation sales location. Overseas, DRMOs sell excess property directly to the public. Our analysis of $18.6 billion in fiscal year 2002 and 2003 excess commodity disposal activity identified $2.5 billion in excess items that were reported to be in new, unused, and excellent condition (A condition). Although federal regulations and DOD policy require reutilization of excess property in good condition, to the extent possible, our analysis showed that DOD units only reutilized $295 million (12 percent) of these items. The remaining $2.2 billion (88 percent) of the $2.5 billion in disposals of A- condition excess commodities were not reutilized, but instead were transferred, donated, sold, or destroyed. About $1.6 billion of the $2.2 billion was transferred to other federal agencies and special programs, donated to states, or sold to the public for pennies on the dollar. 
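The sequential screening windows described above amount to a simple timetable. The sketch below is our own simplification of that timeline (the function name and return strings are ours, not DOD terminology); it maps days since turn-in to the eligible recipients under the 49-day screening period, noting that DOD entities may also continue to screen during the GSA federal agency phase.

```python
def eligible_recipients(days_since_turn_in):
    """Who may claim an excess item, per the 49-day screening timeline
    summarized in this report (a simplified sketch, not DOD policy text)."""
    if days_since_turn_in <= 21:
        # DOD units and specially designated programs screen first.
        return "DOD units and special programs"
    elif days_since_turn_in <= 42:
        # Other federal agencies screen via the GSA FEDS (GSAXcess) site;
        # DOD entities may still select items during this phase.
        return "federal agencies (and DOD)"
    elif days_since_turn_in <= 49:
        # Property is now surplus to the federal government and may be
        # donated to state and local governments and qualified organizations.
        return "state and local donation recipients"
    else:
        # After the 49-day screening period expires, items may be sold
        # to the public through the liquidation contractor.
        return "public liquidation sale"
```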
DRMS sent the remaining $634 million to scrap and other contractors for disposal. We also found that DOD purchased at least $400 million of identical items during fiscal years 2002 and 2003, instead of reutilizing available excess items in A condition. However, our analysis of transaction data and our tests of controls for inventory accuracy indicate that the magnitude of waste and inefficiency could be much greater, because military units improperly downgraded the condition codes of excess items that were in new, unused, and excellent condition to unserviceable and failed to consistently record the NSNs needed to identify like items. To illustrate continuing reutilization program inefficiencies and wasteful purchases, during fiscal year 2004 and the first quarter of fiscal year 2005, we obtained several new and unused excess DOD commodity items that were being purchased by DLA, were currently in use by the military services, or both. DRMS is responsible for disposing of unusable items, often referred to as “junk,” as well as facilitating the reutilization of usable items. As shown in figure 3, our analysis of DRMS data showed that $15.6 billion of the $18.6 billion in fiscal year 2002 and 2003 excess DOD commodity disposals consisted of items reported to be in unserviceable condition, including items needing repair, items that were obsolete, and items that were downgraded to scrap. The remaining $3 billion in excess commodity disposals consisted of items reported to be in serviceable condition, including $2.5 billion in excess commodities reported to be in A condition (new, unused, and excellent condition). Although DOD units reported that $15.6 billion (84 percent) of the excess commodities disposed of during fiscal years 2002 and 2003 were in unserviceable condition, DRMS data showed that DOD units had reutilized over $1.4 billion of these items—an indication that the items were, in fact, serviceable.
Erroneous reporting of serviceable excess items as unserviceable hinders efforts at effective reutilization and can result in lower sales proceeds for items sold to the public. Although we do not know the extent of this problem, as discussed later, our statistical tests of DRMO inventory at five locations identified significant errors related to excess items that were coded as unserviceable when they were in fact in new, unused, and excellent condition. Our analysis of a reported $2.5 billion in fiscal years 2002 and 2003 disposal activity related to excess commodities reported to be in A condition showed that DOD units reutilized only $295 million of these items. As shown in figure 4, the remaining $2.2 billion (88 percent) were not reutilized, but instead were transferred to special programs and other federal agencies, donated to states, sold to the public, or destroyed through demilitarization and scrap contracts. As noted previously, DOD policy calls for the reutilization of excess property to the extent feasible and permits the disposal of unneeded items. However, the disposal of $2.2 billion in excess new, unused, and excellent condition items indicates that DOD bought more items than it needed. As shown in table 2, during fiscal years 2002 and 2003, DOD transfers of A- condition excess property valued at about $248 million benefited international governments; state and local governments; other federal agency programs; and specially designated programs such as DOD’s Humanitarian Assistance Program, foreign military assistance programs, and law enforcement agencies. Our overall analysis identified disposals of over 22 million new, unused, and excellent condition excess commodity items that were identical to items that DLA continued to purchase, stock, or both, resulting in waste of DOD resources. We investigated the details of more than a dozen of these disposal transactions. 
Table 3 highlights three examples from our case studies that illustrate waste related to excess commodities in new, unused, and excellent condition that were transferred or donated outside DOD at the same time DLA purchased identical items. In addition to instances where DOD units failed to reutilize excess commodities in A condition that were instead given away to other entities, we identified instances where DRMS destroyed these items. DRMS destroys or scraps items that are not reutilized or sold. As illustrated in figure 4, during fiscal years 2002 and 2003, DRMS destroyed, scrapped, or used hazardous materials contractors to dispose of excess commodities valued at about $634 million—about 25 percent of the $2.5 billion reported acquisition value for disposals of excess commodities in new, unused, and excellent condition. The majority of these items—items valued at $473 million—were military technology items, such as circuit cards, power supplies, and aircraft parts, that are required to be destroyed or demilitarized pursuant to national security guidelines when they are no longer needed by DOD. Some of the destroyed items had remained in supply inventory for many years and had become obsolete. However, we found several instances where items that were destroyed were still being purchased, used, or both by military units. The following examples illustrate the types of A-condition excess items that were destroyed. Destruction of excess items that required demilitarization. Examples of excess A-condition items that were destroyed pursuant to demilitarization requirements included 2,390 aircraft parts valued at $9,119,876, such as rotary wing blades, rotary rudders, windshield panels, fuel tanks, and pilot protection armor; 34,070 circuit cards valued at $73,666,720, including 88 circuit cards related to one NSN valued at $265,565; 1,604 radio sets valued at $10,247,110; 477 power supply units valued at $3,385,580; and 3 plasma display units valued at $263,151. 
Our case study investigations showed instances where power supplies and circuit cards that were still being purchased by DLA, stocked and issued to military units, or both were sent to a DRMO rather than being returned to supply inventory. For example, we found that the Army’s Tank-Automotive and Armament Command turned in 14 excess circuit card assemblies valued at $7,806 on May 29, 2003, because the Army had directed the retirement of its AH-1 Cobra and UH-1 Huey helicopters. However, the Navy and some foreign countries have continued to use these helicopters. The circuit cards are used in the M136 Helmet Sight, a heads-up display, on the Cobra Helicopter. The heads-up display permits a pilot to aim the helicopter’s rockets and the fixed forward firing gun. The circuit cards were advertised for reutilization to DOD and foreign military sales customers. Because they were not selected for reutilization within the 49-day screening period, they were sent to a demilitarization contractor on June 8, 2004, for destruction by thermal reduction. Destruction of excess A-condition commodity items as scrap. DRMS also scrapped excess A-condition commodities valued at about $144 million during fiscal years 2002 and 2003 that did not require demilitarization. Normally, these items are transferred, donated, or sold if they are not selected for reutilization within DOD. However, items that are not selected for reutilization or transferred, donated, or sold are scrapped. For example, DRMS scrapped excess new and unused items, such as the following:
• 340 computers with a reported acquisition value of $2,929,539,
• 2,440 bunk beds valued at $341,600,
• 29 simulators valued at $1,995,500,
• 567 power supplies valued at $1,683,211, and
• 29 teleprinters valued at $901,099.
As noted in figure 4, 53 percent, or $1.3 billion of the total $2.5 billion in fiscal year 2002 and 2003 A-condition excess commodity turn-ins, was sold to the public.
Although liquidation sales of excess commodities are an appropriate method of disposal when items cannot be reutilized, liquidation sales of items that are in new, unused, and excellent condition that could have been reutilized represent significant waste and inefficiency. Our case study investigations of fiscal year 2002 and 2003 disposals of excess A-condition commodities found that DRMS sold numerous excess items at the same time DLA purchased identical items. Our analysis showed that DRMS received a total of about $48 million in fiscal year 2002 and 2003 liquidation sales revenue for property valued at $1.3 billion—an average of about 4 cents on the dollar. Liquidation contractor officials told us that about 80 percent of their revenue relates to the sale of items in good condition. Our analysis of fiscal year 2002 and 2003 DLA commodity purchases and DRMS excess property inventory data identified numerous instances in which the military services ordered and purchased items from DLA at the same time identical items—items with the same NSN—that were reported to be in new, unused, and excellent condition were available for reutilization. We found that DOD purchased at least $400 million of identical items during fiscal years 2002 and 2003 instead of using available excess A-condition items. The magnitude of unnecessary purchases could be much greater because NSNs needed to identify identical items were not recorded for all purchase and turn-in transactions. For example, we determined that DLA buyers and item managers did not record NSNs for 87 percent (about $4.9 billion) of the nearly $5.7 billion in medical commodity purchases by military units during fiscal years 2002 and 2003. Further, as discussed later in this report, improper downgrading of condition codes to unserviceable could also result in an understatement of the magnitude of unnecessary purchases. 
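The 4-cents-on-the-dollar figure follows directly from the reported totals. A quick check (the helper name is ours, used only for illustration):

```python
def cents_on_the_dollar(proceeds, acquisition_value):
    """Liquidation proceeds recovered per dollar of reported acquisition value."""
    return 100 * proceeds / acquisition_value

# Reported fiscal year 2002 and 2003 results: about $48 million in
# liquidation sales revenue for property with a reported acquisition
# value of $1.3 billion.
rate = cents_on_the_dollar(48e6, 1.3e9)  # roughly 3.7, i.e., about 4 cents
```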
While our statistical tests found a few instances of inaccurate serviceable condition codes, most condition code errors related to the improper downgrading of condition to unserviceable. Figure 5 shows examples from our analysis of A-condition excess items that were available for reutilization at the time DLA purchased identical items. To determine whether the problems identified in our analysis of fiscal year 2002 and 2003 data were a continuing problem, we monitored DRMS commodity disposal activity in fiscal year 2004 and the first quarter of fiscal year 2005. We found that DOD continued to transfer, donate, and sell excess A-condition items instead of reutilizing them. To illustrate these problems we requisitioned several excess new and unused items at no cost and purchased other new and unused commodities at minimal cost. We based our case study selections on new, unused items that DOD continued to purchase. We inspected excess items or called warehouse personnel to confirm they were new and unused. We used FEDLOG data and interviewed supply inventory item managers to confirm that the items were still being purchased, used, or both by the military services. To illustrate waste and inefficiency associated with transfers and donations of excess A-condition commodities to entities outside of DOD, we used the GSA Federal Disposal System, available to all federal agencies, to requisition several new and unused excess DOD commodity items, including a medical instrument chest, two power supply units, and two circuit cards, at no charge. These items had an original DOD acquisition cost of $55,817, and we paid only $5 shipping cost to obtain all of them. We obtained these items from two DRMOs and a DLA supply depot. The following discussion presents the details of our case study requisitions. Medical instrument chest. We requisitioned at no cost a new, unused medical instrument chest with a reported acquisition cost of $784 from the Lewis DRMO in Fort Lewis, Washington. 
When we visited the Lewis DRMO to screen for and tag new, unused items, a DRMO official told us that about 20 percent of the Lewis DRMO receipts are new, unused items. The medical instrument chest that we obtained was one of 16 excess medical chests turned in by the Fort Lewis Army Medical Hospital on May 6, 2004. At the time of our requisition on June 2, 2004, the Army, Navy, and Air Force medical logistics commands were continuing to purchase these medical chests from DLA. The excess DOD medical instrument chest that we requisitioned is designed for maximum support of deployed medical personnel. For example, the chest is designed to store medical instruments and protect them during shipment as well as to provide shelves and tables for use during surgery and other medical procedures in the battlefield. Figure 6 is a photograph of the excess DOD medical instrument chest assembled for maximum use. Circuit cards. On September 7, 2004, we requisitioned two circuit cards with a total original acquisition cost of $8,684, from the Hill DRMO. We paid $5 shipping cost and received the circuit cards on September 27, 2004. Circuit cards are circuit boards consisting of a series of flat plastic or fiberglass layers (usually 2 to 10) that are glued together after a circuit has been etched in them. In a computer, a circuit card holds the integrated circuits and other electronic components that provide power to perform certain designated functions, such as computerized program functions or electronic communications functions. According to the Navy inventory item manager and the National Security Agency technical support team leader, the circuit cards that we obtained are used in secure satellite communications gear. The circuit cards that we obtained were turned in by the DLA supply depot in Ogden, Utah, as excess to Air Force needs in February 2004. 
The Navy item manager told us that although the circuit cards were no longer being purchased, they were still in active inventory and were still being used by some Navy units and foreign military sales customers at the time we obtained them. Our Chief Technologist inspected the circuit cards and confirmed that they included communications circuitry and were in new, unused condition. Figure 7 is a photograph of one of the circuit cards we requisitioned. Power supply units. We requisitioned, at no cost, two high-cost power supply units from the DLA supply depot in Norfolk, Virginia—one with a reported acquisition cost of $24,797 and another with a reported acquisition cost of $21,552—a total of $46,349. We received one power supply unit on September 30, 2004, and the other power supply unit on October 6, 2004. According to the manufacturer, these power supply units are part of a super-high-frequency electronics surveillance system, which is designed to listen and identify radio frequencies. The power supply units convert AC power to DC voltage to provide power to the assemblies inside the surveillance system. We contacted the Navy inventory control point program manager to inquire about the use of the power supply units that we had identified. The program manager explained that both of the power supply units are currently used in the electronic warfare system of the Seawolf fast attack nuclear submarine. The Navy official stated that although DLA is not currently purchasing these items due to a planned upgrade in technology, the Navy has a very small number of these power supply units in inventory and the items remaining at the DLA supply depot should not have been excessed because they may be needed before the technology upgrade is completed. Our Chief Technologist inspected the excess DOD power supply units we obtained and confirmed that they had never been used. Figure 8 is a photograph of one of the power supply units that we obtained. 
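The reported acquisition costs of the three no-cost GSA requisitions above can be cross-checked with a short tally. This is only an illustrative script; the item names and dollar figures are taken from the case studies in the text:

```python
# Tally the reported DOD acquisition costs of the excess items we
# requisitioned at no cost through the GSA Federal Disposal System
# (figures from the case studies above).
requisitioned_items = {
    "medical instrument chest": 784,
    "two circuit cards": 8_684,
    "two power supply units": 24_797 + 21_552,
}

total_acquisition_cost = sum(requisitioned_items.values())
shipping_paid = 5  # total shipping cost for all items

print(f"Total acquisition cost: ${total_acquisition_cost:,}")  # $55,817
print(f"Total paid (shipping):  ${shipping_paid}")
```

The sum reproduces the $55,817 total acquisition cost stated above for the items obtained at a $5 shipping cost.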
In addition to using the GSA process available to federal agencies to obtain excess DOD property at no cost, we also purchased, at minimal cost, several excess DOD commodity items in new and unused condition over the Internet at govliquidation.com—the DRMS liquidation contractor’s Web site. The items we purchased included tents, boots, three gasoline burners (stove/heating unit), a medical suction apparatus, and bandages and other medical supply items with a total reported acquisition cost of $12,310. We paid a total of $1,466 for these items, about 12 cents on the dollar, including buyer’s premium, tax, and shipping cost. The following examples illustrate the results of our case study investigations and purchases. New, unused extreme cold weather boots. On September 30, 2004, we purchased several pairs of excess new, unused extreme cold weather boots over the Internet at govliquidation.com. The sales advertisement listed an acquisition cost of $3,900 for approximately 30 pairs of the boots. We paid a total of $483, including buyer’s premium, tax, and transportation cost, to acquire the extreme cold weather boots. According to a Stockton DRMO official, the boots were found at the DRMO without identifying paperwork, and DRMO personnel entered them in excess property inventory in April 2004. The boots were advertised as being in H condition (unserviceable, condemned condition). However, the photograph on the govliquidation.com Web page showed that the manufacturer’s product label was still tied to the laces of the boots and that the soles of the boots had no wear, indicating that they had not been worn. When we received the boots on October 12, 2004, we determined that we had, in fact, purchased a total of 42 pairs of cold weather boots of which 37 pairs were in new, unused condition. We paid about $12 per pair for the 42 pairs of boots, which have a listed acquisition cost of $135 per pair. 
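The "cents on the dollar" and per-pair figures above follow directly from the reported amounts. As a hedged illustration (all figures are from the text; the script is only a worked check of the arithmetic):

```python
# Recovery rate on the Internet purchases: total paid versus the
# items' reported DOD acquisition cost (figures from the text above).
acquisition_cost = 12_310  # tents, boots, burners, suction apparatus, supplies
amount_paid = 1_466        # including buyer's premium, tax, and shipping

cents_on_dollar = amount_paid / acquisition_cost * 100
print(f"about {cents_on_dollar:.0f} cents on the dollar")  # about 12

# Per-pair cost of the 42 pairs of cold weather boots.
boots_paid, pairs, list_price = 483, 42, 135
print(f"${boots_paid / pairs:.2f} per pair vs ${list_price} list")  # $11.50 per pair
```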
Shortly after we purchased the excess cold weather boots, the DLA item manager told us that she had recently placed an order with the vendor to purchase 31,420 pairs of these same boots, including 1,360 of the sizes of boots that we purchased. Further, the DLA technician responsible for these boots told us that the boots have a shelf life of up to 15 years. According to the DLA technician, the boots should be inspected after the first 5 years and then inspected every 2 years after that for a total of six inspections in 15 years. After 15 years from date of manufacture these boots would have surpassed their useful life. All of the boots we purchased were less than 5 years old. The DLA technician told us that none of these boots have been recalled, and they are considered excellent boots that are rated to 60 degrees below zero. Figure 9 is a photograph of the new, unused excess DOD boots that we purchased. Shelter half-tents. We purchased several new, unused shelter half-tents over the Internet from govliquidation.com on August 26, 2004. We paid $548, including buyer's premium, tax, and shipping cost, to acquire the excess DOD shelter half-tents, which had a listed acquisition value of $2,122. Shelter half-tents can be carried by individual soldiers and must be joined together to form a tent that will house two soldiers. The tents were listed in H condition (unserviceable, condemned condition). However, the advertisement on the liquidation contractor's Web page stated that some of the tents were new and in original boxes, and the photograph on the sales Web page showed that most of the tents were in the original manufacturer's packages. Upon receipt of the tents, we determined that we had, in fact, purchased 21 new, unused tents and 6 additional tents that were used, but appeared to be in good condition. At the time we purchased the shelter half-tents, the DLA item manager told us that none remained in stock.
DLA data showed that the Defense Supply Center, Philadelphia, placed an order for 35,000 of these tents at a cost of about $2.5 million. Figure 10 is a photograph of one of the new, unused excess DOD shelter half-tents that we purchased over the Internet at govliquidation.com. Gasoline burner units. On September 30, 2004, we purchased three new, unused excess DOD gasoline burner units over the Internet from govliquidation.com. We paid $164, including buyer’s premium, tax, and shipping cost, to acquire the gasoline burners, which had a listed acquisition value of $1,857. The gasoline burners, which were turned in as excess by the California Army National Guard in San Luis Obispo, California, were advertised as “still in box, have never been used.” According to the DLA item manager, a gasoline burner unit can be used on the battlefield as either a heat source or as a stove for cooking. The item manager told us that the units also could be used as stand-alone field/camping stoves, but would need a grate, or cooking surface over the burner. The item manager explained that DLA purchased thousands of these units several years ago, and they are continuing to be issued from supply inventory and used by deployed troops. According to item manager data, DOD units purchased 471 of these same gasoline burner units from DLA in fiscal year 2004. The item manager told us that there are currently 9,500 of these units in inventory and provided data that showed DLA has continued to issue gasoline burners to military units. Figure 11 is a photograph of one of the new, unused excess DOD gasoline burner units that we purchased over the Internet from govliquidation.com in September 2004. At the end of our audit in February 2005, we noted continuing liquidation sales of excess DOD gasoline burner units. Portable oropharyngeal suction apparatus. On October 7, 2004, we purchased a new, unused portable suction apparatus for the minimum bid of $35. 
We paid a total of $105 for the suction apparatus, including buyer's premium, tax, and shipping, compared to the acquisition cost of $1,141. The suction apparatus runs on electrical or battery power and is designed for use in aspirating blood and other fluids in emergency treatment of unconscious or injured personnel in desert, tropic, or arctic environments. The suction apparatus, which was turned in as excess by a U.S. Air Force Reserve unit at the March Air Reserve Base in Riverside, California, was coded as being in F condition (unserviceable, repairable condition). However, the photograph of the suction apparatus showed the tubing to be sealed in the original package, indicating that the suction apparatus had not been used. Documentation we obtained from the DLA item manager showed that during fiscal year 2004, DLA purchased 627 of these same suction apparatuses, with a total acquisition cost of $490,439, for issue to military units. Our in-house medical expert inspected the suction apparatus and confirmed that it had not been used. He said that the design has not changed for many years, and the only issue with regard to serviceability would be whether the battery needed to be replaced. We determined that the batteries in the unit that we purchased still had a charge, and the unit was operational. Figure 12 is a photograph of the new, unused excess DOD portable suction apparatus that we purchased. The $2.2 billion in DOD waste and inefficiency that we identified stemmed from management control breakdowns across DOD. We found key factors in the overall DRMS management control environment that contributed to waste and inefficiency in the reutilization program, including (1) unreliable excess property inventory data; (2) inadequate DRMS oversight, accountability, physical control, and safeguarding of property; and (3) outdated, nonintegrated excess inventory and supply systems.
In addition, for many years, our audits of DOD inventory management have reported that continuing unresolved logistics management weaknesses have resulted in DOD purchasing more inventory than it needed. Our analysis of fiscal year 2002 and 2003 excess commodity turn-ins showed that $1.4 billion (40 percent) of the $3.5 billion of A-condition excess items consisted of new, unused DLA supply depot inventory. Our statistical tests of excess commodity inventory and our case studies, screening visits, and interviews led us to conclude that unreliable data are a key cause of the ineffective excess property reutilization program. GAO's internal control standards require assets to be periodically verified against control records. In addition, DRMS policy requires DRMO personnel to verify turn-in information, including item description, quantity, condition code, and demilitarization code, at the time excess property is received and entered into DRMO inventory. However, we found that DRMS management has not enforced this requirement. Further, Army, Navy, and Air Force officials told us that unreliable data are a disincentive to reutilization because of the negative impact on their operations. DLA item managers told us that because military units have lost confidence in the reliability of data on excess property reported by DRMS, for the most part, they have requested purchases of new items instead of reutilizing excess items. Military users also cited examples of damage to excess items during shipment that rendered the items unusable. In addition, other reutilization users advised us of problems related to differences in quantities and the types of items ordered and received that could have a negative impact on their operations. Our statistical tests found significant problems with controls for assuring the accuracy of excess property inventory.
Overall error rates for the five DRMOs we tested ranged from 8 percent at one DRMO to 47 percent at another, and error rates for the five DLA supply depots we tested ranged from 6 percent to 16 percent, including errors related to physical existence of turn-ins and condition code. Our physical existence tests included whether a turn-in recorded in inventory could be physically located, timely recording of transactions, and verification of item description and quantity. Table 4 shows the overall results of our statistical sampling tests at five DRMOs and five DLA supply depots. The specific criteria we used to conclude on the effectiveness of DRMO and DLA depot inventory controls at the tested locations are included in appendix V. Key types of data reliability errors that we found include the following. Existence errors. Missing turn-ins in our statistical sample included entire turn-ins of excess commodity items, such as sleeping bags, cold weather clothing, wet weather parkas, chemical and biological protective suits, a computer, and monitors. DRMO officials could not locate documentation to show whether the missing turn-ins had been reutilized, transferred, sold, or destroyed. Because many items from our statistical sample could not be found, the issue of lost, missing, and stolen property is significant, as discussed later. Quantity errors. Separate from missing turn-ins, quantity errors involved items that exceeded or fell short of quantities recorded on a turn-in transaction. Shortages represent items that appeared to be available but were missing. Because DRMO personnel do not always verify quantities at the time excess items are received and recorded into excess inventory, they cannot determine whether missing quantities are errors or if they represent items that are lost, missing, or stolen. 
Quantity shortages included cold weather, wet weather, and camouflage clothing; field packs; chemical and biological protective suits and gloves; and computer keyboards. Lack of timely transaction recording. DRMO personnel did not always record transactions, such as changes in warehouse location and shipments to customers or disposal contractors, within 7 days of the event. Based on our screening and inventory testing experience, time wasted looking for items whose recorded locations are out of date can frustrate customers and lead to the loss of future orders. Excess property users told us that they spend a lot of time visiting DRMO warehouses to locate and inspect excess items before they submit requisitions for them. Inaccurate item descriptions. Our statistical sample identified several turn-in transactions involving items that were different from the types of items recorded in the inventory records. Item description errors included erroneous item names and stock numbers. For example, we found three instances at one DRMO where turn-ins of computer keyboards were listed in excess inventory records as speakers and one instance at another DRMO where speakers were recorded as keyboards. Our sample also identified one women's coat and one men's coat that were recorded in excess inventory as two women's coats and items that were recorded as wet weather trousers and camouflage trousers when the turn-in boxes contained multiple items, including wet weather trousers and parkas, camouflage pants, shirts, and coats, and flyer's coveralls. When batched items are recorded as one type of item, only the NSN for those items is listed in inventory. As a result, a customer could order what he or she believed to be the listed quantity of the named item but instead receive various quantities of multiple types of items. Inaccurate condition coding. Our statistical sample found condition code error rates that ranged from 5 percent at one DRMO to 22 percent at two other DRMOs that we tested.
We based our determinations of condition coding accuracy on physical observation of condition with regard to the broad categories of serviceable and unserviceable rather than testing specific coding within these categories, which could have resulted in an even higher error rate. Our sample identified numerous examples of new, unused excess inventory items that were incorrectly coded as being in unserviceable condition, including cold weather boots, cold weather undershirts, military trousers, women’s blue dress uniforms, compressor parts kits, wet weather parkas, and fragment body armor. In addition to items in our statistical sample, we observed numerous other new, unused items in DRMO warehouses and at liquidation sales locations that were coded as unserviceable, including desert combat boots, camouflage clothing, computer equipment, and aircraft parts. Accurate condition codes are key to an effective excess property reutilization program because DOD units generally look for new, unused excess items for reutilization. We found that unreliable excess property inventory data are the result of breakdowns in controls for proper recording and verification of inventory transaction data. The control breakdowns we identified related to four major areas: (1) the failure of DRMO personnel to verify excess property turn-ins at the time they are received and entered into excess inventory records; (2) improper downgrading of condition codes by DOD units; (3) the inconsistent use of NSNs; and (4) human capital issues related to DRMO staffing and workload and military service procedures, training, and oversight of excess property reporting. Failure to verify turn-ins and correct errors. 
The errors in excess inventory identified in our statistical samples, screening observations, and case studies were caused by inaccurate turn-in documentation submitted by military unit turn-in generators and the failure of DRMO personnel to inspect excess items, verify turn-in documents, and correct identified errors. DRMS policy requires DRMO personnel to inspect excess items upon receipt and challenge or change incorrect data. However, DRMO personnel told us that they were not able to verify excess property receipts when faced with large turn-in volumes and processing backlogs. Further, a provision in this same policy allows DRMO managers who are faced with heavy turn-in volume to waive the requirement to verify quantity counts, if the time required to count the property is not justified, and instead use turn-in generator counts. The policy limits exceptions to (1) batched turn-ins of multiple types of items, (2) large quantities of small items in other than the original package, and (3) large quantities of items in the original package where box counts can be used. However, officials at two of the five DRMOs we tested—the DRMOs with the highest data reliability error rates—cited this policy and told us that they accept turn-in generator information and do not verify excess property turn-in data. In addition, our statistical sample identified one instance where DRMS headquarters officials did not provide guidance on how to correct erroneous turn-in documentation related to a June 30, 2004, Navy turn-in of six new, unused Level III biological safety cabinets with a total acquisition cost of $120,000. The Navy unit improperly used a local stock number (LSN) to describe the safety cabinets on the turn-in document and a demilitarization code that indicated there were no restrictions on the disposal of these items.
However, Level III safety cabinets are subject to trade security controls, and therefore, they are required to be identified by an NSN or other information which accurately describes the item, the end item application, and the applicable demilitarization code. Although Norfolk DRMO personnel advised DRMS officials of the need to correct the turn-in document errors in July 2004, at the time we finalized our draft report in early February 2005, DRMS had not taken action to authorize the DRMO to correct these errors so that the safety cabinets could be identified for reutilization within DOD. Further, we found that as of the end of our audit in February 2005, the safety cabinets had not been posted to the DRMS reutilization Web page as excess property available for reutilization. Figure 13 shows a photograph of one of the Level III cabinets. Improper downgrading of condition codes. The incorrect recording of unserviceable condition codes for items that are in serviceable condition, particularly items in new, unused condition, makes it unlikely that they will be selected for reutilization. For example, all of the new, unused excess DOD commodity items that we purchased over the Internet were incorrectly coded as unserviceable. As noted previously in our case study discussions, all of the items that we purchased were items that military units continued to purchase, use, or both. As shown in table 5, our DRMO tests found that most errors related to items that were incorrectly reported to be in unserviceable condition. We also found numerous instances where DOD units improperly downgraded the condition codes of items that were no longer serviceable to them, either because they did not want these items or because the items were being replaced by new technology, even though in many cases these items were new and unused.
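One way to surface suspect downgrades of this kind would be an automated cross-check of inspection results against recorded codes. The sketch below is only illustrative: the record layout, field names, and sample data are hypothetical, not DRMS's actual schema, and the code values shown (F for unserviceable repairable, H for unserviceable condemned) are used as described in the case studies above:

```python
# Flag turn-ins whose recorded condition code says unserviceable
# (F = repairable, H = condemned) even though physical inspection
# found the item new and unused -- the downgrade pattern described
# above. Record layout and sample data are hypothetical.
UNSERVICEABLE_CODES = {"F", "H"}

turn_ins = [
    {"item": "extreme cold weather boots", "code": "H", "inspected_new": True},
    {"item": "shelter half-tents", "code": "H", "inspected_new": True},
    {"item": "worn field jacket", "code": "H", "inspected_new": False},
]

suspect = [t["item"] for t in turn_ins
           if t["code"] in UNSERVICEABLE_CODES and t["inspected_new"]]
print(suspect)  # → ['extreme cold weather boots', 'shelter half-tents']
```

A check of this sort would only be as good as the inspection data behind it, which is why the verification of turn-ins at receipt discussed above matters.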
Our statistical tests and our case studies showed that many times the items that military units coded as unserviceable were serviceable and very adequate for use by others. Inconsistent recording of NSNs. The failure to consistently record NSNs to commodity purchase and excess inventory records prevents the identification of like items for reutilization and, therefore, may result in unnecessary purchases. Although DLA records NSNs for most purchases that are stored in DLA supply depot inventory, it does not record NSNs for items purchased from prime vendors for direct delivery to DOD customers. For example, as noted previously, we determined that DLA buyers and item managers did not record NSNs for 87 percent of the nearly $5.7 billion in medical commodity purchases by military units during fiscal years 2002 and 2003. According to DLA officials, prime vendor catalogs identify products by part number or model number rather than NSN. This issue will become more significant as DLA expands its use of prime vendors to other commodity groups. The failure to record NSNs to turn-in transactions prevents item managers from identifying these items for reutilization at the time purchase decisions are made. For example, our in-house scientists who often meet with DOD scientists at the U.S. Army Biological Warfare Research Center at the Dugway Proving Ground learned that the DOD scientists were planning to purchase a Level III safety cabinet and informed them of the availability of the six Level III safety cabinets at the Norfolk DRMO. The DOD scientists told us that they were unaware the Navy had excessed the safety cabinets and said that they could use all six of them. We subsequently confirmed that the DOD scientists at Dugway had requisitioned the six Level III safety cabinets for reutilization. Our analysis showed that LSNs were recorded for about 41 percent of fiscal year 2002 and 2003 excess property turn-ins. 
LSNs are appropriate identifiers for local purchases and one-of-a-kind items. However, our statistical samples and case studies showed that, to avoid the time and effort necessary to identify and record NSNs, military unit turn-in generators had recorded LSNs for items that should have been identified with NSNs. For example, LSNs were recorded for excess military clothing in our Columbus DRMO sample and the cold weather boots that we purchased over the Internet even though these items have labels that showed the assigned NSNs. DOD has efforts under way to promote the use of unique product identifiers other than NSNs by commercial vendors and small business firms. Regardless of the mechanism used to identify standard items, to assure an effective excess property reutilization program, DOD will need to consistently record NSNs, product numbers, or other unique item identification in its purchase, supply, and excess inventory records. Human capital weaknesses. We found that human capital issues related to imbalances between staffing and workload at DRMOs and inadequate training of military turn-in generators contributed to unreliable data and associated waste and inefficiency. Based on our interviews of DRMO officials, our statistical tests of DRMO inventory, and our review of available DRMS workload data for the five DRMOs we tested, we concluded that data reliability was directly affected by the availability of DRMO staff qualified to process excess property receipts. For example, DRMS data for the last 8 months of fiscal year 2004 showed the three DRMOs we visited that attempted to verify turn-in documentation—Norfolk, Hill, and Stockton—experienced backlogs in receipt processing and significant use of overtime hours. In contrast, we found that the two DRMOs that did not verify receipts worked few, if any, overtime hours and had significantly fewer backlogs than the other three DRMOs.
As noted previously, these two DRMOs also had high excess property inventory error rates. We also found a lack of detailed guidance on the proper assignment of condition codes. DRMS condition code guidance consists of a list of supply and disposal condition codes and brief definitions of each condition code. DRMS has not developed detailed narrative guidance with explanations and examples of how to apply these codes. However, we also found that the military services are not correctly using the listed supply and disposal condition codes on their excess property turn-in documents. For example, when military units assigned supply condition codes indicating that new, unused items were unserviceable or condemned, they also used the disposal condition code for repairable, rather than the code for new, unused. Military units had differing views about whether unserviceable condition meant that items were unserviceable for their purposes or unserviceable to anyone. As a result, we found that items in the same condition would be coded serviceable by one military unit and unserviceable by another. In addition, our analysis of turn-ins of unserviceable items found a lack of training, guidance, and supervision at one Navy unit. For example, Navy officials at the North Island Naval Aviation Depot told us that the employee responsible for sending their excess property to the DRMO had never received formal training on disposal policies and procedures. Further, the officials told us that they did not have any manuals or written procedures that explained excess property turn-in procedures. As a result, the employee assigned condition codes H (unserviceable, condemned) or S (scrap) to all excess property turn-ins. We contacted GSA’s Director of Personal Property Management Policy to discuss the proper assignment of federal agency condition codes. 
The GSA Policy Director explained that DOD uses unique supply condition codes that are a combination of federal agency codes established by GSA and its own codes for identifying serviceable and unserviceable property. (App. III lists and defines the GSA and DOD condition codes.) The GSA Director told us that unreliable federal agency condition codes, including DOD condition codes, have presented a problem in GSA’s program for utilization of excess federal agency property within the federal government. For example, he noted that federal agency officials have told GSA that they cannot rely on condition codes assigned to excess property, and this had an impact on the effectiveness of GSA’s efforts to promote the use of excess DOD property within the federal government. We also found that the condition codes established by GSA do not provide for the identification of items that are nearly new, with little or no evidence of use. Because such items are not new and unused, they would be coded the same as items that may be well used and need minor repair. Further, the GSA codes do not provide for identification of items that are new and unused but technically obsolete to the current owner. The GSA Policy Director noted that because of the federal government’s increased reliance on technology, the need to identify obsolete items is becoming a governmentwide excess property disposal issue. He said that GSA would be willing to work with DOD and other federal agencies to develop a solution to these problems. We found hundreds of millions of dollars in potential waste and inefficiency associated with the failure to safeguard excess property inventory from loss, theft, and damage. As previously discussed, our statistical tests of excess commodity inventory at five DRMOs and five DLA supply depots identified significant numbers of missing items. 
Because the DRMOs and DLA supply depots had no documentation to show that these items had been requisitioned or sent to disposal contractors, they cannot assure that these items have not been stolen. According to DRMS data, DRMOs and DLA supply depots reported a total of $466 million in excess property losses related to damage, missing items, theft, and unverified adjustments over a period of 3 years. However, as discussed below, we have indications that this number is not complete. Also, because nearly half of the missing items reported involved military and commercial technology that required control to prevent release to unauthorized parties, the types of missing items were often more significant than the number of missing items. Weaknesses in accountability that resulted in lost and stolen property contributed to waste and inefficiency in the excess property reutilization program. As shown in table 6, our analysis of reported information on excess property losses at DRMOs and DLA supply depots found that reported losses for fiscal years 2002 through 2004 totaled $466 million. Because 43 percent of the reported losses related to military technology items that required demilitarization controls, these weaknesses also reflect security risks. GAO's Standards for Internal Control in the Federal Government require agencies to establish physical control to secure and safeguard assets, including inventories and equipment, which might be vulnerable to risk of loss or unauthorized use. However, our statistical tests of excess commodity inventory at five DRMOs and five DLA supply depots during fiscal year 2004 identified missing items involving entire turn-ins of some excess items as well as fewer items than reported in inventory (missing quantities) for other turn-ins. We referred locations with high occurrences of reported losses to our Office of Special Investigations for further investigation. Table 6 shows reported losses for fiscal years 2002 through 2004.
DRMO losses. Our statistical samples identified missing turn-ins at two of the five DRMOs we tested and missing quantities at all five DRMOs tested, including many items that were in new, unused, and excellent condition. Because DRMO officials did not have documentation to show whether these items had been reutilized, transferred, sold, or destroyed, there is no assurance whether the missing items reflected bookkeeping errors or theft. Missing items in our Columbus DRMO sample included turn-ins of 72 chemical and biological protective suits and 47 wet weather parkas that were subject to demilitarization controls and 7 sleeping bags, a cold weather coat, 4 pairs of cold weather trousers, 4 canteens, a central processing unit (CPU), and various other items. Most of the quantity errors we found at the Columbus DRMO related to military clothing items. Missing items in our Richmond DRMO sample included a computer; 10 CPUs; 13 computer monitors; 2 scanners; and 2 items that require trade security control, including an arm assembly for a helicopter blade and a computer data signal coder/decoder. Based on these losses, we requested DRMS summary reports on losses for all DRMOs during fiscal years 2002, 2003, and 2004 for further analysis. Reported losses include lost, damaged, and stolen items and adjustments for recordkeeping errors. We determined that the loss summary reports do not include all known losses. For example, only one of the nine turn-ins in our statistical sample that included missing items that were subject to demilitarization controls was included in the fiscal year 2004 loss summary reports. Further, missing quantities are generally reported as adjustments rather than lost or stolen items. According to DRMS data, of the total $62 million in reported fiscal year 2004 losses, the Warner Robins DRMO reported $22 million and the four DRMS demilitarization centers reported over $17 million.
In addition, reported fiscal year 2004 losses at the contractor-operated Meade DRMO included over 1,000 turn-ins with a reported acquisition value of over $3 million. Although the DRMO contract provides for fines of $2,500 per incident of loss if negligence is proven, we learned that contractor negligence could not be proven due to documented security weaknesses at the Meade DRMO. Uncorrected security weaknesses leave the Meade DRMO vulnerable to theft. Further, while DRMO loss reports require that a reason code be specified, we found that nearly all (99.8 percent) of the reported DRMO losses for fiscal years 2002 through 2004 were attributed either to unknown reasons (76.6 percent) or to unverified adjustments for bookkeeping and data-entry errors (23.2 percent). As a result, DRMS has no assurance of the extent to which theft may have occurred and gone undetected. In January 2005, DRMS officials told us that they had not yet performed a review of the excess property loss reports as a basis for identifying and correcting systemic weaknesses. Reported DRMO losses for the 3-year period included 76 units of body armor, 75 chemical and biological protective suits (in addition to those identified in our Columbus DRMO sample), 5 guided missile warheads, and hundreds of military cold weather parkas and trousers and camouflage coats and trousers. Three DRMOs—Kaiserslautern, Meade, and Tobyhanna—accounted for $840,147, or about 45 percent, of the nearly $1.9 million in reported fiscal year 2004 losses of military equipment items requiring demilitarization. DLA supply depot losses. Our statistical samples also showed missing items at four of the five DLA supply depots that we tested. Because depot officials did not have documentation showing that these items had been reutilized or sold, there is no assurance that the missing items were not stolen.
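The reason-code pattern noted above, with nearly all losses booked to unknown reasons or unverified adjustments, is the kind of signal a simple aggregation over loss reports can surface for management review. The sketch below is illustrative only: the reason codes and dollar amounts are hypothetical, not actual DRMS codes or data.

```python
from collections import defaultdict

# Hypothetical loss-report records (reason_code, dollar_value);
# the codes and amounts below are illustrative, not actual DRMS data.
loss_reports = [
    ("unknown", 300.0),
    ("unverified_adjustment", 90.0),
    ("unknown", 83.0),
    ("theft", 0.6),
    ("damage", 0.4),
]

def summarize_by_reason(reports):
    """Total reported losses by reason code and compute each code's
    share of the grand total, so that dominant categories such as
    'unknown' stand out for follow-up review."""
    totals = defaultdict(float)
    for reason, value in reports:
        totals[reason] += value
    grand_total = sum(totals.values())
    return {reason: (total, 100.0 * total / grand_total)
            for reason, total in totals.items()}

for reason, (total, share) in sorted(summarize_by_reason(loss_reports).items()):
    print(f"{reason:22s} ${total:6.1f}  {share:5.1f}%")
```

Run against real loss data, a summary of this kind would flag reason codes or locations that warrant investigation before adjustments and write-offs are accepted.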
Missing items in our DLA depot statistical samples included the following: Two classified radio frequency amplifiers, a printed circuit board that is subject to trade security controls, and a circuit card assembly that required demilitarization (destruction) when no longer needed by DOD at DLA’s Norfolk supply depot. Trade security-controlled aircraft parts, including 17 aircraft landing gear drag link assemblies, 6 landing gear upper manifolds, and 3 cylinder and piston units used in aircraft landing gear at DLA’s Hill supply depot. Six computer controllers and a circuit card used in Army, Navy, and Air Force communications at DLA’s San Joaquin supply depot. We also obtained DRMS data on DLA supply depot reports of excess property losses, including missing and damaged property and unverified adjustments. As shown in table 6, reported DLA supply depot losses totaled $276 million for fiscal years 2002 through 2004. Of this amount, nearly $192 million related to excess property items that were subject to demilitarization and trade security controls. The summary reports that we obtained did not identify the reasons for most of the reported DLA supply depot losses. According to DRMS data, 18 DLA supply depots reported a total of $114 million in fiscal year 2004 excess property losses. Two supply depots reported 72 percent of these losses, including the DLA Oklahoma City supply depot with reported losses of 213,950 items totaling $41 million and DLA’s Warner Robins supply depot with reported losses of 4,911 items totaling $40 million. In addition, the San Diego and Tobyhanna DLA supply depots each reported about $6 million in fiscal year 2004 excess property losses. Types of items reported as lost, damaged, or possibly stolen included aircraft frames and parts, engines, laboratory equipment, and computers. In addition to reported losses, we found significant instances of property damage at DRMS liquidation contractor sales locations. 
Because all liquidation sales are final, buyers have no recourse when property is damaged subsequent to sale or is not in the advertised condition. As a result, customers who have lost money on bids related to damaged and unusable items might not bid again, or they may scale back on the amount of their bids in the future, affecting both the volume of excess DOD items liquidated and sales proceeds. The property damage that we observed at liquidation contractor sales locations is primarily the result of DRMS management decisions to send excess DLA supply depot property to two national liquidation sales locations without assuring that its contractor had sufficient human capital resources and warehouse capacity to process, properly store, and sell the volume of property received. Although DRMS headquarters officials were aware of this problem and made numerous visits to the Huntsville sales location beginning in January 2004, actions taken to address this problem have been inadequate. In addition, poorly maintained contractor warehouse facilities at one liquidation sales location resulted in severe water damage to excess DOD bandages and medical supply items that we purchased over the Internet at govliquidation.com. The DRMS liquidation sales contract and Web page conditions of sale state that DRMS is responsible for providing and maintaining the warehouse facilities used by the contractor. Property damage at the Huntsville, Alabama, liquidation sales location. In November 2004, we investigated reports of damage related to improper outside storage of excess items at the Huntsville, Alabama, liquidation sales location. In June 2003, DRMS initiated a recycle control point process, referred to as RCP, for DLA supply depots, whereby excess property remains in the depot warehouses during the reutilization screening process. 
At the end of the screening phase, property that does not require demilitarization by destruction or mutilation is to be shipped to one of two liquidation contractor national sales locations—Huntsville, Alabama, for DLA depots west of the Mississippi River and Norfolk, Virginia, for DLA depots east of the Mississippi. We determined that DRMS continued to send excess DLA supply depot property to the Huntsville sales location even though it was apparent after the first 6 months of shipments that the Huntsville location lacked the capacity to handle the large volume of property received from the DLA depots. For example, in early June 2004, the Area Manager for the Huntsville DRMO inspected the liquidation contractor’s warehouses and found that excess property had filled at least one contractor warehouse building entirely, blocking doors and fire extinguishers. The Area Manager advised contractor officials that this situation would not be viewed favorably during the joint safety, fire, and environmental inspection anticipated within the near future. In response, contractor officials removed sufficient property from the building to meet fire and safety regulations. As a result, numerous excess DOD property items were relocated outside to an unpaved lot about the size of a football field and covered with a number of blue plastic tarps. Most of these items were new and unused spare parts and electronic items received from DLA supply depots. In addition, wood furniture and metal file cabinets that were transferred to the contractor for liquidation sale by the co-located Huntsville DRMO were stored outside without any protection from the weather. According to DRMO officials, DRMS headquarters officials had visited the Huntsville sales location in March 2004; a second time in June 2004, when the property was placed on the outside lot; and again in September 2004, to observe the extent of the overflow. 
Despite the known risk of damaged and lost property, high-volume shipments of excess DLA depot property continued until September 2004, when DRMS headquarters made a decision to divert shipments from three western DLA supply depots to the Norfolk, Virginia, liquidation sales location. However, property continued to be stored outside until the week of October 18, 2004, when DRMS officials visited the Huntsville sales location. By that time, numerous property items had received extensive damage due to sun, wind, rain, and storms, including four hurricanes—Charley, Frances, Ivan, and Jeanne—and tropical storms Bonnie and Matthew. DRMS officials disposed of some items and placed other items inside the warehouse. In addition, the Huntsville DRMO manager told us that wood computer furniture and filing cabinets that were in good condition at the time the DRMO turned them over to the liquidation contractor had been stored outside unprotected from weather. Because most of the furniture was ruined and the filing cabinets were rusted, they were sent to the landfill or sold as scrap. Figure 14 shows the outside location of the wood computer cabinets and other items in July 2004 when they were advertised for sale. Our inspection of the remaining damaged property identified numerous boxes that were missing property labels or had labels and shipping documentation that were illegible due to exposure to sun, wind, and rain. The missing documentation presents a significant problem because the sales contractor does not record receipts of excess DOD property in its sales inventory until items are processed for sale, which may not occur until several months after the items are received. DRMS officials told us that they are attempting to reconcile excess property shipments to liquidation contractor inventory.
However, because excess property receipts were not recorded in sales inventory and property labels are missing or illegible, it will be very difficult, if not impossible, to fully reconcile sales inventory to excess property receipts. The photograph in figure 15 shows wooden boxes that have lost their property labels and are turning black due to rot. Property subject to damage at the Norfolk, Virginia, liquidation sales location. On December 2, 2004, we visited the Norfolk liquidation contractor sales location to determine whether DRMS action to resolve the capacity problems at the Huntsville sales location by diverting property to Norfolk, Virginia, had resulted in capacity problems at that location. We observed hundreds of cardboard and wooden boxes containing excess DOD property that were stored outside under blue plastic tarps and in shrink-wrapped stacks on pallets. Upon inspection, we noted that many of the boxes were already water-damaged. The photograph in figure 16 shows cardboard boxes stored outside at the Norfolk, Virginia, sales location that evidence weather damage in terms of peeling property labels and water marks. Damage to GAO purchase of bandages and medical supplies. Our October 7, 2004, Internet purchase of bandages and medical supplies from govliquidation.com suffered water damage because DRMS failed to adequately maintain the liquidation contractor’s Norfolk facilities. Our purchase included numerous usable items in original manufacturer packaging, including 35 boxes of bandages, 31 boxes of gauze sponges and surgical sponges, 12 boxes of latex gloves, and 2 boxes of tracheostomy care sets. We paid a total of $167, including buyer’s premium, tax, and transportation cost, for these items, which had a reported total acquisition cost of $3,290. 
However, the following week, when we arrived at the liquidation contractor's Norfolk, Virginia, sales location to pick up our purchase, it was raining and the roof on the contractor's warehouse building was leaking. The boxes containing the items we had purchased had become wet, and water dripped from some of the boxes when contractor personnel loaded them into our rental truck. The photograph in figure 17 illustrates the damaged condition of the items we purchased. Most of the cardboard storage boxes were deteriorating as a result of water damage, and items inside the boxes were wet. Although the sales lot containing the bandages and medical supplies that we purchased was advertised as 4 pallets of items, it actually consisted of 13 pallets. The truck we rented would not accommodate all 13 pallets of items. The liquidation contractor sales representative told us that we could take as much as we could accommodate, and the contractor would resell the remaining items, even though the boxes on the remaining 8 pallets of bandages and medical supplies were also wet. Customers who discover that the property they purchased is damaged have no recourse. Further, the liquidation contractor's terms of sale provide no incentive for safeguarding property held for sale. For example, under the contractor's terms of sale, all sales are final and items are sold in "as is" condition. The liquidation sales contractor disclaims all warranties, express and implied, without limitation, including loss or liability resulting from negligence. Credit card account numbers must be provided at the time a bid is made, and the sales cost, buyer premium, and sales tax, if applicable, are immediately charged to the winning bidder. Inefficient, nonintegrated excess inventory and supply management systems lack controls necessary to prevent waste and inefficiency in the reutilization program.
For example, because the DRMS Automated Information System (DAISY) and DLA's Standard Automated Materiel Management System (SAMMS) are outdated and nonintegrated, they do not share information necessary to (1) identify and alert DLA item managers of excess property that is available to fill supply orders and (2) prevent purchases of new items when A-condition excess items are available for reutilization. We have continued to report that long-standing weaknesses with DLA's inventory systems related to outdated, nonintegrated legacy systems and processes result in DOD and military units not knowing how many items they have and where these items are located. DLA has acknowledged serious deficiencies in its automated inventory management systems. Although DLA has an effort under way to replace SAMMS with the Business Systems Modernization (BSM) and DRMS has a Reutilization Modernization Program (RMP) under way to upgrade DAISY, these have so far been separate, uncoordinated efforts that do not adequately address identified process deficiencies. Also, while the systems improvement efforts are intended to integrate supply and excess inventory systems to support the reutilization program, they are not focused on resolving long-standing problems related to unreliable condition code data and incomplete data on NSNs. The accuracy of these two data elements is critical to the ability to identify like items that are available for reutilization at the time purchases are made. We found that existing systems and processes do not adequately reflect the DRMS twofold mission to (1) facilitate reutilization of property in good condition and (2) dispose of property that DOD cannot use. For example, DRMS moves all excess property through the same 49-day screening and disposal process rather than identifying A-condition items that are currently being purchased, stocked and issued to military units, or both, and designating these items for reutilization.
Instead, as previously discussed, DRMS transferred, donated, sold, and destroyed hundreds of millions of dollars of A-condition excess items that the military services continued to purchase and utilize. In addition, we found that the current process for identifying excess property that is available to fill supply orders is cumbersome, time-consuming, and involves significant human intervention. For example, under the current process, if an item manager wants to use excess items to fill a supply order, the item manager must query DAISY to determine whether excess items are available to fill the supply order. If excess items are available, the item manager would then need to contact one or more DRMOs where the excess property is located and ask DRMO personnel to physically verify the item description, quantity, and condition. If the excess items meet the customer's requirements, the item manager prepares a requisition form and submits it to the DRMO(s). If the item does not require technical inspection or testing, the DRMO processes the order and ships the excess items to the customer. However, if the item is electronic and requires technical inspection, testing, or both, it must be sent to a DLA supply depot where these functions can be performed before the item is shipped to the customer. Military unit officials told us that due to inefficiencies in this process, including shipment delays of up to several weeks and unreliable DRMS data on quantities and condition codes, they prefer to order new items rather than attempting to reutilize excess property available at DRMOs. Figure 18 illustrates the current nonintegrated DLA inventory systems environment. According to DLA officials, the planned BSM and RMP excess property reutilization systems are intended to be integrated when fully implemented in 2009.
The objective of the integrated design is to provide DLA buyer and item manager visibility over excess property available for reutilization and permit the buyer to fill a supply order with these items instead of purchasing new items. However, we are concerned that these efforts may not resolve the long-standing data reliability problems inherent in the current systems and processes. Our November and December 2004 discussions with DLA and DRMS systems officials revealed that they were unaware of the magnitude of errors in condition coding that incorrectly recorded new and unused items as unserviceable and the extent of inconsistent recording of NSNs in commodity purchases and excess inventory records. Further, the officials had not yet coordinated to identify key data elements for identifying excess property that should be reutilized. We also found that DLA and DRMS systems officials had not yet fully considered building controls into the new business systems that would help enforce the policy to reutilize available excess property in new, unused, and excellent condition before purchasing new items. For example, under the current systems environment, item managers and military units can choose to purchase new items rather than reutilizing available new, unused, and excellent condition excess items. In order to avoid this problem in the planned systems environment, DLA would need to include edit controls that would reject a purchase transaction or generate an exception report for review and approval when such items are available for reutilization but are not selected. We discussed our concerns with DLA officials. In early February 2005, DLA officials told us that they were extending the March 2005 target date for completing the functional design for excess property reutilization in BSM and RMP in order to address our concerns about the impact of unreliable data on the successful integration of the planned systems. 
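The edit control described above can be sketched as a pre-purchase check: before a buy is approved, the transaction is compared against available A-condition excess stock for the same NSN. This is a minimal sketch under assumed field names and hypothetical records; the actual BSM/RMP design would operate inside DLA's systems and might reject the transaction or route an exception report for approval rather than splitting the order automatically.

```python
def screen_purchase(nsn, order_qty, excess_inventory):
    """Pre-purchase edit control (sketch): if identical items in
    A condition are available as excess, fill the order from excess
    first and flag any new purchase of the remainder; otherwise
    approve the purchase."""
    available = sum(rec["qty"] for rec in excess_inventory
                    if rec["nsn"] == nsn and rec["condition"] == "A")
    from_excess = min(order_qty, available)
    to_purchase = order_qty - from_excess
    if from_excess == 0:
        action = "approve purchase"
    elif to_purchase == 0:
        action = "reject purchase; fill from excess"
    else:
        action = "exception report; reutilize excess, purchase remainder"
    return {"from_excess": from_excess, "to_purchase": to_purchase,
            "action": action}

# Hypothetical excess records; the NSN and quantities are illustrative.
excess = [
    {"nsn": "8465-01-111-2222", "condition": "A", "qty": 40},
    {"nsn": "8465-01-111-2222", "condition": "F", "qty": 25},
]
print(screen_purchase("8465-01-111-2222", 100, excess))
```

Note that the check deliberately ignores the F-condition record: only A-condition items qualify for reutilization in place of a new purchase, which is why accurate condition coding is a precondition for any such control.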
DLA and DRMS have not demonstrated the leadership and accountability necessary to achieve the economy and efficiency of excess property reutilization contemplated in federal regulations or DOD policy. To effectively address problems with reutilization program waste and inefficiency, DRMS and DLA will need to exercise strong leadership and accountability to improve the reliability of excess property data; establish effective oversight and physical inventory control, including both accountability and safeguarding of excess property; and develop effective integrated systems for identifying and reutilizing excess property. In addition, the military services will need to provide accurate information on excess property turn-in documentation, particularly data on condition codes, and item descriptions, including NSNs. Improved management of DOD’s excess property and a strong reutilization program could help save taxpayers hundreds of millions of dollars annually. We recommend that the Secretary of Defense direct the Director of the Defense Logistics Agency; the Commander of the Defense Reutilization and Marketing Service; and the Secretaries of the Army, the Navy, and the Air Force, as appropriate, to take the following 13 actions to improve DOD’s excess property reutilization program. Direct DRMS to clarify and enforce the policy that permits DRMO management to waive the requirement to verify quantities on turn-ins under exempted conditions, and consider additional criteria for maintaining accountability of military equipment items. Require DRMS to identify DRMOs with insufficient human capital resources and take appropriate action to assure that excess property receipts are verified and processed in an accurate and timely manner. In implementing this recommendation, DRMS should require DRMOs to provide adequate supervision and monitoring to assure that excess property receipts are verified when received and entered in DRMO inventory. 
Require DLA to develop a mechanism for linking prime vendor purchase transactions to NSNs or other unique product identification. Direct DRMS to develop written guidance and formal training to assist DRMO personnel and military service turn-in generators in the proper assignment of condition codes to excess property turn-ins. Direct the military services to provide accurate excess property turn-in documentation to DRMS, including proper assignment of condition codes and NSNs based on available guidance. Require the military services to establish appropriate accountability mechanisms, including supervision and monitoring, for assuring the reliability of turn-in documents. Direct DLA and DRMS to review DLA supply depot and DRMO excess property loss reports to identify systemic weaknesses and take immediate and appropriate corrective actions to resolve them. Direct DRMS to take immediate, appropriate action to resolve identified uncorrected DRMO security weaknesses. Require DRMS to determine the monthly sales volume of excess property at the DLA supply depots and work with its liquidation sales contractor to identify the appropriate number and liquidation sales locations needed to handle the sales of excess DLA depot property. In making these determinations, DRMS and its contractor should consider whether contractor staffing and warehouse capacity at each location are adequate to handle the volume of property shipped to those locations for sale. Require DRMS to periodically inspect liquidation contractor facilities and take immediate action to correct structural impairments and other deficiencies, such as outside storage due to inadequate warehouse capacity that could result in damage of excess DOD property held for sale. 
Direct DLA and DRMS to consider available options and implement an interim process for identifying turn-ins of excess new, unused, and excellent condition items that could be reutilized to avoid unnecessary purchases in the existing systems environment. Direct DLA BSM and DRMS RMP systems officials to coordinate on the identification of key data elements for identifying excess property that should be reutilized before completing the design of functional requirements for reutilization of excess commodities for BSM and RMP. Require that DLA's BSM system design include edit controls that would reject a purchase transaction or generate an exception report when A-condition excess items are available but are not selected for reutilization at the time that purchases are made. On April 15, 2005, DOD provided written comments on a draft of this report. DOD officials concurred with 8 of our 13 recommendations and partially concurred with the other 5 recommendations. With regard to the 5 recommendations on which DOD partially concurred, DOD's stated actions address all 5 of them. We view these actions as being generally responsive to the intent of our recommendations. The partial concurrences relate to plans for alternative actions, actions already initiated in response to our audit, and increased attention to existing processes. DOD's explanation for the partial concurrences and our response follows. DOD stated that DRMS will use an alternative action to address our recommendation that it assess the adequacy of human capital resources and take appropriate action to assure that excess property receipts are verified and processed accurately and timely. DOD stated that DRMS will use its staffing model to determine the staffing needs by receipt workload and adequately staff its DRMOs. DOD also stated that DRMS is using contract hires to supplement DRMO staff, as needed. We view these actions as responsive to our recommendation.
However, as a part of its actions on our recommendation, DRMS also should provide adequate supervision and monitoring to assure that excess property receipts are verified when received and entered into DRMO inventory. We have modified our recommendation to emphasize this point. These actions will help to provide accountability for excess property and avoid the need for subsequent adjustments, including an excessive number of write-offs for inventory shortages. DOD noted the merits of existing processes related to our recommendation to develop a mechanism for linking prime vendor purchase transactions to NSNs or other unique product identification. DOD stated that DOD directives require turn-in generators to provide a description of item(s) on a turn-in document for which local stock numbers are listed. DOD also noted that bringing unused items back into DLA supply stock would negate warehousing and distribution savings achieved through using prime vendor direct shipments to DOD customers. In addition, DOD stated that assigning NSNs to nonstocked commercial items would significantly increase item costs and run counter to the Federal Acquisition Streamlining Act of 1994 preference for commercial purchases. As discussed in our report, DOD already has efforts underway to promote the use of unique product identifiers other than NSNs by commercial vendors and small business firms. DOD’s efforts include cost benefit considerations. Consistent with DOD’s efforts, it is important that DLA prime vendor purchase transactions are identified to NSNs or other unique product identification to facilitate economies through (1) volume purchasing and (2) reutilization of excess items. 
With regard to our recommendation that DRMS develop written guidance and formal training on the proper assignment of condition codes to excess property turn-ins, DOD stated that the military services currently receive formal blocks of training and are in the better position to assign the condition codes. DOD also referred to current DOD and DRMS guidance on condition codes. In addition, DOD stated that DRMS will review current guidance to ensure the appropriate assignment of responsibilities regarding the establishment and use of condition codes. As discussed in our report, our statistical tests, DRMO screening visits, case study acquisitions of excess DOD commodity items, and interviews of DRMO, military service, and GSA officials all indicate that significant problems exist with the reliability of excess property condition codes. We determined that unreliable condition codes were caused by a lack of detailed guidance and a failure to follow existing guidance. For example, as noted in our report, military services often coded items as unserviceable when they no longer had a need for them, even though the items were in new, unused, and excellent condition. Therefore, written guidance and training on the proper assignment of condition codes also is important to correcting this problem to assure that existing misconceptions are corrected and would be responsive to our recommendation. With regard to our recommendation that DRMS periodically inspect liquidation contractor facilities and take immediate action to correct structural impairments and other deficiencies, such as storage capacity, DOD stated that an inspection of all liquidation contractor facilities has been completed and periodic inspections will continue. DOD also stated that the only facility requiring immediate structural repair is the Norfolk, Virginia, facility and that DRMS has issued a work order for the necessary repairs. 
DOD also stated that additional storage options are being regularly evaluated by the contractor and DRMS. As stated in our report, the overflow of excess property at the Huntsville liquidation sales location was a long-term, uncorrected problem, which resulted in a significant breakdown in accountability and physical inventory control over excess property. It is important that timely and appropriate solutions be identified and implemented to prevent this problem in the future. The actions that DOD highlighted in its letter are responsive to our recommendation. Finally, DOD stated that actions have already been taken to respond to our recommendation that DRMS consider available options and implement an interim process for identifying turn-ins of excess new, unused, and excellent condition items that could be reutilized to avoid unnecessary purchases in the existing systems environment. DOD enumerated initiatives implemented during 2004 and early 2005 that improve the visibility of excess property listed on DRMS's Web page. In addition, DOD stated that DRMS will work with DLA item managers on the best methodology to provide visibility of A-condition excess property. Notwithstanding the improvements in DRMS's Web page, the overall commodity purchasing process has not changed, and DLA continues to make commodity purchases without considering the availability of identical A-condition excess commodities. Achieving the economy and efficiency contemplated by federal regulations and DOD policy is dependent upon identifying continuing commodity purchases and having the ability to match these items to A-condition excess property and hold it for reutilization. DOD should not dispose of A-condition excess items that it continues to purchase. DOD's comment letter is reprinted in appendix II. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report until 30 days from its date.
At that time, we will send copies to interested congressional committees; the Secretary of Defense; the Deputy Under Secretary of Defense for Logistics and Materiel Readiness; the Secretary of the Army; the Secretary of the Navy; the Secretary of the Air Force; the Director of the Defense Logistics Agency; the Commander of the Defense Reutilization and Marketing Service; and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Gregory D. Kutz at (202) 512-9505 or [email protected], John Ryan at (202) 512-9587 or [email protected], or Gayle L. Fischer at (202) 512-9577 or [email protected] if you or your staff have any questions concerning this report. Additional contacts and major contributors to this report are provided in appendix VI. The purpose of our audit was to assess the economy and efficiency of the Department of Defense (DOD) excess property program. In doing so, we assessed the effectiveness of systems, processes, and controls for assuring a strong reutilization program. Where we found controls to be ineffective, we tested them further to determine (1) the magnitude and (2) root causes of associated waste and inefficiency. Our audit and investigation focused on Defense Logistics Agency (DLA) purchases of consumable items and Defense Reutilization and Marketing Service (DRMS) excess property inventory activity during fiscal years 2002 and 2003, the most current fiscal years for which data were available at the time we initiated our audit. To illustrate continuing problems, we obtained excess DOD commodity items in new, unused, and excellent condition (A condition) during fiscal year 2004 and the first quarter of fiscal year 2005 that were in use by the military services, were being purchased by DLA, or both at the time they were available for reutilization.
We obtained access to the following systems and databases to support our audit and investigation:

- The DRMS Automated Information System (DAISY), which is an automated inventory accounting management data system designed to process excess DOD property from receipt to final disposal.
- The DRMS Management Information Distribution and Access System (MIDAS), which contains historical (archive) DAISY information.
- DLA’s DOD Activity Address Directory (DODAAD), which contains information to identify agency names and addresses for activity codes that are associated with excess property requisitions.
- The Government Liquidation, LLC database, which contains transactions on public sales of excess DOD property items.
- DLA’s Standard Automated Materiel Management System (SAMMS), which contains transaction data on purchases by commodity group.
- The Federal Logistics Information System (FEDLOG), which is a logistics information system managed by the Defense Logistics Information Service (DLIS) within DLA. This system contains detailed information on specifications, use, acquisition cost, and sources of supply for national stock numbered items, including more than 7 million stock numbers and more than 12 million part numbers.

We obtained online access to DAISY, MIDAS, DODAAD, and FEDLOG, and we obtained copies of the SAMMS databases for fiscal years 2002 and 2003 and Government Liquidation, LLC databases for June 2001 through December 2004. For each of the DOD systems and databases used in our work, we (1) obtained information from the system owner/manager on their data reliability procedures; (2) reviewed systems documentation; (3) reviewed related DOD Inspector General reports, DLA Comptroller budget data, and independent public accounting firm reports related to these data; and (4) performed electronic testing of commodity purchase and excess inventory databases to identify obvious errors in accuracy and completeness. We verified database control totals, where appropriate.
We also received FEDLOG training from the DLIS service provider. When we found obvious discrepancies, such as omitted national stock number (NSN) data in the DLA commodity purchases databases and transaction condition coding errors in the DRMS excess property systems data, we brought them to the attention of agency management for corrective action. We made appropriate adjustments to transaction data used in our analysis, and we disclosed data limitations with respect to condition coding errors and the omission of NSN data that affected our analysis. Our data analysis covered commodity purchases and excess commodity turn-ins and disposal activity during fiscal years 2002 and 2003. In addition, we statistically tested the accuracy of excess inventory transactions at five Defense Reutilization and Marketing Offices (DRMO) and five DLA supply depots. We also reviewed summary data and selected reports on DRMS compliance reviews of 91 DRMOs during fiscal year 2004 to determine the extent to which DRMS had identified problems with adherence to DOD and DRMS policies, made recommendations for corrective actions, and monitored DRMO actions to address its recommendations. Based on these procedures, we are confident that the DOD data were sufficiently reliable for the purposes of our analysis and findings. To determine the overall magnitude of waste and inefficiency related to the DOD excess property reutilization program, we identified fiscal year 2002 and 2003 excess commodity disposal activity by property condition code and examined the extent of DOD reutilization of excess items in new, unused, and excellent condition (A-condition) versus transfers, donations, public sales, and other disposals outside of DOD through scrap, demilitarization, and hazardous materials contractors.
We also compared DLA commodity purchase transactions to identical excess new, unused, and excellent condition items to identify instances where DLA purchased commodity items rather than reutilizing these excess items. We used NSN data as the basis for identifying identical items. In addition, we analyzed DLA supply depot excess commodity turn-ins to determine the extent to which new, unused DLA supply depot inventory accounted for turn-ins of excess A-condition items. We used IDEA audit software to facilitate our analysis. To determine the extent to which DOD reutilized excess commodities in A condition during fiscal years 2002 and 2003, we used online access to the DRMS MIDAS database of historical transactions and performed data mining and analysis of the universe of excess commodity turn-in and disposal transactions. We identified key data elements, such as disposal transaction types, the excess property recipient DOD Activity Address Code (DODAAC), and condition codes. We used these data elements to identify the extent of DOD reutilization of excess A-condition commodities compared to transfers; donations; public sales; and disposals of scrap, hazardous materials, and demilitarized items. We determined the type of disposal transaction through analysis of the DODAAC that identifies the name and address of the agency or program that received (or requisitioned) the property. Because DOD considers special program reutilization the same as DOD reutilization, we used DODAACs to separately identify reutilization transactions for special programs that were not directly associated with DOD activities. We also used DODAAC information to determine the identity of turn-in generators and requisitioners of excess DOD commodities for subsequent interviews of generators regarding why new, unused items were excessed and of excess property users regarding their experience.
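The DODAAC-based classification of disposal transactions described above can be sketched as follows. This is an illustrative sketch only: the record fields (dodaac, condition, acq_value, disposal_type) and the DODAAC sets are hypothetical placeholders, not the actual MIDAS schema.

```python
# Sketch of classifying excess-property disposal transactions by DODAAC
# and tallying A-condition acquisition value by disposal category.
# Field names and DODAAC sets are illustrative, not the real MIDAS layout.
from collections import defaultdict

def classify(txn, dod_dodaacs, special_dodaacs):
    """Map a disposal transaction to a category via its recipient DODAAC."""
    if txn["dodaac"] in dod_dodaacs:
        return "dod_reutilization"
    if txn["dodaac"] in special_dodaacs:
        return "special_program"
    # Fall back to the recorded disposal type (transfer, donation, sale, scrap).
    return txn.get("disposal_type", "other")

def reutilization_share(txns, dod_dodaacs, special_dodaacs):
    """Share of A-condition acquisition value by disposal category."""
    totals = defaultdict(float)
    for t in txns:
        if t["condition"].startswith("A"):  # new, unused, excellent condition
            totals[classify(t, dod_dodaacs, special_dodaacs)] += t["acq_value"]
    grand = sum(totals.values())
    return {k: v / grand for k, v in totals.items()}, grand
```

A tally like this is what supports statements of the form "only 12 percent of A-condition disposals were reutilized within DOD."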
We also worked with DRMS officials to obtain information on transaction codes for identifying disposals of hazardous materials, scrap, and demilitarized items. We independently performed data mining and analysis, and we verified the results of our queries with DRMS officials in order to provide reasonable assurance that our data-mining approach and results were accurate. We used the Government Liquidation, LLC database to determine the acquisition value of commodity items sold and sale revenues during fiscal years 2002 and 2003. We used the six SAMMS commodity purchases databases we obtained to identify key information on commodity items that military units purchased from DLA, including the item description or name, NSN, purchase date, unit price, unit acquisition cost, and full cost including the DLA user fee. The six commodity groups we audited included (1) construction and land and maritime weapons, (2) electrical, (3) general, (4) industrial, (5) medical, and (6) textile. We worked with DLA officials to identify items to a commodity group based on the supply class number included in the NSN or local stock number (LSN). To determine the extent to which DLA made unnecessary purchases of new items when identical items that were reported to be in A condition were available for reutilization, we compared commodity purchase transactions in SAMMS to excess property turn-in transactions in MIDAS. We used NSNs to identify instances where the military services ordered and purchased items from DLA at the same time identical items that were reported to be in new or excellent condition were available for reutilization. Although we identified at least $400 million in wasteful purchases during fiscal years 2002 and 2003 related to A-condition excess items that were available for reutilization, we were unable to determine the full magnitude of this problem due to inconsistent recording of NSNs and improper downgrading of condition codes.
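The NSN match between SAMMS purchase transactions and MIDAS A-condition excess turn-ins can be sketched as below. The record layouts are hypothetical; note that, as in the actual analysis, records with no recorded NSN simply cannot be matched, which is one reason the full magnitude of unnecessary purchases could not be determined.

```python
# Sketch of flagging purchases of items for which identical A-condition
# excess items were available for reutilization. Record layouts are assumed.

def flag_unnecessary_purchases(purchases, excess):
    """Return purchase records whose NSN matches an available
    A-condition excess item."""
    # Index A-condition excess items by NSN; records lacking an NSN are
    # skipped, mirroring the data limitation noted in the report.
    available = {e["nsn"] for e in excess
                 if e.get("nsn") and e["condition"].startswith("A")}
    return [p for p in purchases if p.get("nsn") in available]
```

Summing the cost field over the flagged records gives a lower-bound estimate of wasteful spending, since unmatched records (missing NSNs, downgraded condition codes) are excluded.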
We performed case study investigations of excess commodity turn-ins and disposals during fiscal years 2002 through 2003. In addition, to illustrate that DRMS reutilization program waste and inefficiency are continuing problems, during fiscal year 2004 and the first quarter of fiscal year 2005, we obtained several excess DOD commodity items that were currently in use, were being purchased at the time we acquired them, or both. We used data mining and analysis to identify commodity items for our case study acquisitions. To identify new and unused excess DOD commodity items that were available for requisition at no cost, we accessed the DRMS Reutilization, Transfer, and Donation Web page and identified excess DOD commodity items available to federal agencies. We confirmed that these items were available to federal agencies by also accessing the General Services Administration’s (GSA) GSAXcess Web page. We used GAO’s federal agency DODAAC to requisition new and unused excess DOD commodity items in A condition. We submitted our requisitions for transfer of these excess DOD items through GSA. To identify new and unused items that we could purchase at minimal cost, we accessed govliquidation.com. We also accessed govliquidation.com to identify continuing sales of our case study items. We based our case study selections on commodities used by military units and the quantity and dollar amount of purchases and excess property turn-ins associated with these items. After we identified each new and unused case study item that we wanted to purchase, we queried FEDLOG to confirm the acquisition cost and current use of the item—that is, whether an item was still being purchased or currently in use but being phased out or was obsolete.
For further assurance on the status of the excess commodities that we targeted for acquisition, we contacted the DLA item managers responsible for these items to confirm that they were currently being purchased, were in use by the military services, or both. We also contacted item managers to obtain information on how certain items, such as circuit cards and power supply units, were used. To determine the root causes of identified inefficiencies, we first gained an understanding of the processes for acquisition and disposal of DOD commodities. We reviewed applicable laws and regulations and DOD, military service, DLA, and DRMS policies and procedures. We also reviewed the DRMS contracts for DRMO property warehouse services and liquidation sales for consistency with DOD policies. In addition, we reviewed SAMMS and MIDAS system manuals. We met with and contacted numerous DLA and DRMS officials and obtained documentation to assess how the property reutilization program is monitored for effectiveness. We also met with or contacted DOD and Army, Navy, and Air Force officials about their experience with commodity acquisitions, reutilization, and disposals. We interviewed DLA item managers and buyers to obtain information on their roles and responsibilities and key systems and controls involved in the commodity acquisition and management process. We also obtained information on how decisions are made about whether to purchase new items or to reutilize excess items through DOD’s reutilization program. We made visits to 12 DRMOs to observe excess property processing, screen for excess case study items, investigate the disposition of excess property turn-ins, or test the accuracy of excess property inventory. We also visited five DLA-managed Defense depots to test inventory accuracy and observe excess property disposal processes. In addition, we visited 10 Government Liquidation, LLC sales locations. 
We focused our assessment of the causes of reutilization program waste and inefficiency on key aspects of the overall management control environment, including (1) data reliability, (2) physical inventory control, and (3) the current systems environment. We used GAO’s Standards for Internal Control in the Federal Government as criteria for identifying internal control breakdowns that contributed to waste and inefficiency. We statistically tested the accuracy of current excess commodity inventory transaction data at five DRMO warehouse locations and five DLA supply depot locations. Each test location constituted a separate population, from which we randomly selected current inventory transactions. The five DRMO locations we tested were the Columbus DRMO in Ohio; the Stockton DRMO in French Camp, California; the Hill DRMO at Hill Air Force Base in Ogden, Utah; and the Norfolk DRMO and the Richmond DRMO in Virginia. Our selection of the five DRMOs was based on geographic location, turn-in volume, types of excess items handled, and military units generating the most turn-ins. We tested inventory at Defense depots that were co-located or located within proximity of the above DRMOs, including Defense depots at Columbus, Ohio; San Joaquin, California; Hill Air Force Base, Utah; Norfolk, Virginia; and Richmond, Virginia. Each location was a separate population, and we evaluated the results of each sample location separately. The purpose of our testing was to evaluate the effectiveness of controls over existence—including timely recording of transactions, item description (item name and NSN), and quantity—and condition coding. Appendix V describes the specific criteria we used to conclude on the effectiveness of DRMO and DLA supply depot controls for inventory accuracy.
Our assessment of physical inventory control focused on the results of our statistical tests discussed above and our review of DRMS summary data on reported DRMO and DLA supply depot losses due to lost, stolen, and damaged property. We investigated problems associated with liquidation contractor controls for safeguarding excess DOD property held for sale at the Huntsville, Alabama, and the Norfolk, Virginia, sales locations. We also assessed the extent of damage to our case study purchase of bandages and medical supply items from the Norfolk sales location. In addition, we obtained DRMS summary reports on losses of excess property at DRMOs and DLA supply depots for fiscal years 2002 through 2004. We referred locations with the largest reported losses to our Office of Special Investigations for further investigation. To gain an understanding of DLA commodity purchase and DRMS commodity inventory systems and processes with regard to DOD’s excess property reutilization program, we reviewed DLA and DRMS policies and procedures, and interviewed DLA, DRMS, and DRMO program and systems officials. We also used observations and information obtained during our statistical tests, excess property screening visits, and case study investigations. In addition, we relied on the body of work GAO has performed in this area. To determine the scope and status of DLA and DRMS systems efforts to improve the reutilization process in the future, we interviewed DLA and DRMS systems officials who are responsible for DLA’s Business Systems Modernization (BSM) and Integrated Data Environment (IDE) and the DRMS Reutilization Modernization Program (RMP). We also reviewed business systems modernization plans and related documents to determine the current status, implementation time frames, and scope of planned improvements. In addition, we obtained and reviewed the Reutilization Management Program Functional Requirements Document, the RMP Decision Matrix, and implementation timelines. 
We focused our assessment on whether the systems modernization efforts, as currently documented, would adequately address needed improvements in excess property reutilization program economy and efficiency. We conducted our work from November 2003 through February 2005 in accordance with U.S. generally accepted government auditing standards. We performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. DOD’s condition code is a two-digit alphanumeric code used to denote the condition of excess property from the supply and the disposal perspective. The DOD supply condition code is the alpha character in the first position and shows the condition of property in the DLA depot inventory, or is assigned by the unit turning in the excess property. The GSA disposal condition code, in the second position, shows whether the property is in new, used, or repairable condition, salvageable, or should be scrapped. (See table 7.) Table 8 lists the DOD special programs that are authorized to receive excess property. In addition to DOD special programs, under the Stevenson-Wydler Technology Innovation Act of 1980, as amended, DOD makes computer equipment available to schools under the federal government’s Computers for Learning Program following the DOD and special program screening period and prior to the federal agency screening period. In accordance with 15 U.S.C. § 3710(i), the director of a laboratory or the head of any federal agency or department may loan, lease, or give research equipment that is excess to the needs of the laboratory, agency, or department to an educational institution or nonprofit organization for the conduct of technical and scientific education and research activities. To evaluate the effectiveness of controls for assuring the accuracy of excess commodity inventory data, we tested current inventory transactions at five DRMO locations and five DLA supply depot locations. 
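The two-position condition code described above can be split programmatically into its supply (first position) and disposal (second position) components. The lookup entries below are a small illustrative subset, not the full code tables referenced in table 7 of the report.

```python
# Minimal sketch of splitting DOD's two-position condition code.
# Only a few illustrative entries are shown; the full tables (table 7)
# contain many more codes.
SUPPLY_CODES = {"A": "serviceable", "F": "unserviceable (repairable)"}
DISPOSAL_CODES = {"1": "new or unused", "4": "used", "S": "scrap"}

def split_condition_code(code):
    """Return (supply meaning, disposal meaning) for a code like 'A1'."""
    supply, disposal = code[0], code[1]
    return (SUPPLY_CODES.get(supply, "unknown"),
            DISPOSAL_CODES.get(disposal, "unknown"))
```

For example, a code of "A1" would indicate property that is serviceable from the supply perspective and new or unused from the disposal perspective.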
Our tests covered controls over physical existence, item description (item name and NSN), quantity, and condition code. DRMO inventory locations tested were the Columbus DRMO in Columbus, Ohio; the Stockton DRMO in French Camp, California; the Hill DRMO at Hill Air Force Base, in Ogden, Utah; the Norfolk DRMO in Norfolk, Virginia; and the Richmond DRMO in Richmond, Virginia. For efficiency, we tested inventory at five DLA supply depots that were co-located or located within proximity of the above DRMOs, including the depots in Columbus, Ohio; San Joaquin County, California; Hill Air Force Base, Utah; Norfolk, Virginia; and Richmond, Virginia. Each location was a separate population, and we evaluated the results of each sample location separately. We drew our statistical samples from the universe of excess property transactions in current DRMS DAISY inventory, which includes excess property warehoused at DRMOs and DLA supply depots. We stratified our samples by the two major categories of condition code—serviceable and unserviceable—in order to determine whether errors were more prevalent in one category. From the population of current excess DOD inventory at the time of our testing visit, we selected stratified random probability samples of excess property turn-in transactions for each of the five DRMO and each of the five DLA supply depot case study locations. With these statistically valid samples, each transaction in the population for the 10 case study locations had a nonzero probability of being included, and that probability could be computed for any transaction. Each sample transaction for a test location was subsequently weighted in our analysis to account statistically for all the transactions in the population for that location, including those that were not selected. 
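The stratified sampling and weighting approach described above can be sketched as follows. The data layout, stratum sizes, and sample sizes are hypothetical; the point is that each sampled transaction carries a weight equal to its stratum's population size divided by the number sampled from that stratum, so weighted tallies estimate population totals.

```python
# Sketch of stratified random sampling with per-stratum weights,
# as described in the report's sampling methodology. Data layout is assumed.
import random

def draw_stratified_sample(population, strata_key, n_per_stratum, seed=0):
    """Sample up to n_per_stratum transactions from each stratum,
    attaching a weight so the sample statistically represents all
    transactions in the population."""
    rng = random.Random(seed)
    strata = {}
    for txn in population:
        strata.setdefault(strata_key(txn), []).append(txn)
    sample = []
    for key, txns in strata.items():
        n = min(n_per_stratum, len(txns))
        weight = len(txns) / n  # each sampled txn stands for `weight` txns
        for txn in rng.sample(txns, n):
            sample.append((txn, weight))
    return sample

def estimate_error_rate(sample, failed):
    """Weighted point estimate of the population failure rate."""
    total = sum(w for _, w in sample)
    fails = sum(w for t, w in sample if failed(t))
    return fails / total
```

Stratifying by serviceable versus unserviceable condition, as GAO did, allows the error rate to be estimated separately for each category as well as overall.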
Our test results relate to the populations of transactions at the respective DRMO and DLA supply depot locations, and the results cannot be projected to the population of excess property transactions or the DRMOs or DLA supply depots as a whole. We present the results of our statistical samples for each population as (1) our projection of the estimated error overall and for each control attribute as point estimates and the two-sided 95 percent confidence intervals for the failure rates and (2) our assessments of the effectiveness of the controls and the relevant lower and upper bounds of a one-sided 95 percent confidence interval for the failure rate. If the one-sided upper bound is 5 percent or less, then the control is considered effective. If the one-sided lower bound is greater than 5 percent, then the control is considered ineffective. Otherwise, we say that there is not enough evidence to assert either effectiveness or ineffectiveness. All percentages are rounded to the nearest percentage point. Tables 9 and 10 present the overall results of our statistical tests of inventory accuracy at the five DRMOs and the five DLA supply depots that we tested. The overall results show that controls for assuring the accuracy of excess property inventory were ineffective at four of the five DRMOs and three of the five DLA supply depots that we tested. We tested physical existence, including whether turn-ins recorded in inventory could be physically located and whether inventory changes were recorded within 7 days. We also tested the accuracy of item descriptions (item name(s) and NSN(s)), recorded quantities, and condition code categories. Because most of the errors we found related to the accuracy of condition codes, we separately estimated the error rates for this control attribute. A turn-in transaction was considered a failure if the serviceable or unserviceable condition code assigned to the item(s) was not accurate based on our physical observation and judgment. 
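The effectiveness decision rule above can be expressed in code. The report does not specify how the confidence bounds were computed (and the actual estimates were weighted), so a simple normal-approximation one-sided 95 percent bound on an unweighted failure rate is assumed here purely for illustration.

```python
# Illustrative sketch of the control-effectiveness decision rule:
# effective if the one-sided 95 percent upper bound on the failure rate
# is at most 5 percent, ineffective if the one-sided lower bound exceeds
# 5 percent, otherwise inconclusive. Normal approximation is an assumption.
import math

Z_95 = 1.645  # one-sided 95 percent critical value

def assess_control(failures, n, threshold=0.05):
    p = failures / n
    half = Z_95 * math.sqrt(p * (1 - p) / n)
    lower, upper = max(0.0, p - half), min(1.0, p + half)
    if upper <= threshold:
        return "effective"
    if lower > threshold:
        return "ineffective"
    return "inconclusive"
```

For example, with no failures in 100 sampled transactions the rule concludes the control is effective, while 30 failures in 100 would make it ineffective.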
DLA and DRMO officials who accompanied us during our testing provided their perspectives, which we considered in our conclusions. We based our conclusions on obvious differences between the condition code assigned to the item and the appearance of the item. For example, some items were in the original manufacturer packaging and other items were obviously used, dirty, or worn. If we were unsure of the condition of an item, we accepted the condition code assigned by the military unit turn-in generator or the DLA supply depot. In addition, we did not question the assigned condition codes of technical equipment items such as electronic parts and scientific equipment. Tables 11 through 13 show the results of our condition code reliability tests for turn-in transactions at the five DRMOs that were coded as being in serviceable and unserviceable condition. As shown in table 13, we found significant problems with the accuracy of unserviceable condition codes for excess commodities at four of the five DRMOs we tested. As shown in table 14, we found condition codes to be reliable at the five DLA supply depots that we tested. Staff making key contributions to this report include Beatrice Alff, Mario Artesiano, James D. Ashley, Cindy Barnes, Gary Bianchi, Erik Braun, Matthew S. Brown, Randall J. Cole, Tracey L. Collins, Francine DelVecchio, Lauren S. Fassler, Michele Fejfar, Gloria Hernandezsaunders, Wilfred B. Holloway, Jason Kelly, Barbara C. Lewis, Kristen Plungas, and Ramon Rodriguez. Technical expertise was provided by Sushil K. Sharma, PhD, DrPH, and Keith A. Rhodes, Chief Technologist.

Based on limited previous GAO work that identified examples of purchases of new items at the same time identical items in excellent or good condition were excessed, GAO was asked to assess the overall economy and efficiency of the Department of Defense (DOD) program for excess property reutilization (reuse).
Specifically, GAO was asked to determine (1) whether and to what extent the program included waste and inefficiency and (2) root causes of any waste and inefficiency. GAO was also asked to provide detailed examples of waste and inefficiency and the related causes. GAO's methodology included an assessment of controls, analysis of DOD excess inventory data, statistical sampling at selected sites, and detailed case studies of many items. DOD does not have management controls in place to assure that excess inventory is reutilized to the maximum extent possible. Of $18.6 billion in excess commodity disposals in fiscal years 2002 and 2003, $2.5 billion were reported to be in new, unused, and excellent condition. DOD units reutilized only $295 million (12 percent) of these items. The remaining $2.2 billion (88 percent) includes significant waste and inefficiency because new, unused, and excellent condition items were transferred and donated outside of DOD, sold for pennies on the dollar, or destroyed. DOD units continued to buy many of these same items. GAO identified at least $400 million of commodity purchases when identical new, unused, and excellent condition items were available for reutilization. GAO also identified hundreds of millions of dollars in reported lost, damaged, or stolen excess property, including sensitive military technology items, which contributed to reutilization program waste and inefficiency. Further, excess property improperly stored outdoors for several months was damaged by wind, rain, and hurricanes. To illustrate continuing reutilization program waste and inefficiency, GAO ordered and purchased at little or no cost several new and unused excess commodities that DOD continued to buy and utilize, including tents, boots, power supplies, circuit cards, and medical supplies. GAO paid a total of $1,471, including tax and shipping cost, for these items, which had an original DOD acquisition cost of $68,127. 
Root causes for reutilization program waste and inefficiency included (1) unreliable excess property inventory data; (2) inadequate oversight and physical inventory control; and (3) outdated, nonintegrated excess inventory and supply management systems. Procurement of inventory in excess of requirements also was a significant contributing factor. Improved management of DOD's excess property could save taxpayers at least hundreds of millions of dollars annually.
Federal agencies conduct a variety of procurements that are reserved for small business participation (through small business set-aside and sole-source opportunities, hereafter called set-asides). The set-asides can be for small businesses in general or be specific to small businesses meeting additional eligibility requirements in the Service-Disabled Veteran-Owned Small Business Concern (SDVOSBC), Historically Underutilized Business Zone (HUBZone), 8(a) Business Development, and WOSB programs. The WOSB program, which started operating in 2011, has requirements that pertain to the sectors in which set-asides can be offered as well as eligibility requirements for businesses. That is, set-aside contracts under the WOSB program can only be made in certain industries in which WOSBs were substantially underrepresented and EDWOSBs underrepresented, according to the program regulation. Additionally, only certain businesses are eligible to participate in the WOSB program. The business must be at least 51 percent owned and controlled by one or more women. The owner must provide documents demonstrating that the business meets program requirements, including submitting a document in which the owner attests to the business’s status as a WOSB or EDWOSB. The program’s authorizing statute directs that each business either be certified by a third party, or self-certified by the business owner. SBA’s final rule includes these two methods. Self-certification is free and businesses pay a fee for third-party certification. A third-party certifier is a federal agency, state government, or national certifying entity approved by SBA to provide certifications of WOSBs or EDWOSBs. To be approved as certifiers, interested organizations submit an application to SBA that contains information on the organization’s structure and staff, policies and procedures for certification, and attestations that they will adhere to program requirements.
SBA has approved four organizations to act as third-party certifiers: El Paso Hispanic Chamber of Commerce; National Women Business Owners Corporation; U.S. Women’s Chamber of Commerce; and Women’s Business Enterprise National Council. The most active certifier is the Women’s Business Enterprise National Council (WBENC), which completed about 76 percent of all WOSB third- party certifications performed from August 2011 through May 2014. To conduct the certifications, WBENC uses 14 regional partner organizations. The fees for certification vary depending on a WOSB’s gross annual sales, membership status in the certifying organization, and geographic location (see table 1). In the case of businesses that seek a WOSB program certification through WBENC’s partner organizations, businesses that pay for a Women’s Business Enterprise certification (used for private- sector or some local, state, and federal procurement, but not for the WOSB program) can receive WOSB program certifications at no additional cost. We discuss the WOSB certification process in greater detail later in this report. SBA’s Office of Government Contracting administers the WOSB program by publishing regulations for the program, conducting eligibility examinations of businesses that received contracts under the WOSB or EDWOSB set-aside, deciding protests related to eligibility for a WOSB program contract award, conducting studies to determine eligible industries, and working with other federal agencies in assisting WOSBs and EDWOSBs. According to SBA officials, the agency also works at the regional and local levels with its Small Business Development Centers, district offices, and other organizations (such as Procurement Technical Assistance Centers) to assist WOSBs and EDWOSBs to receive contracts with federal agencies. 
The services SBA coordinates with these offices and organizations include training, counseling, mentoring, access to information about federal contracting opportunities, and business financing. According to the program regulation, businesses may use self- or third-party certification to demonstrate they are eligible for WOSB or EDWOSB status. Both certification processes require signed representations by businesses about their WOSB or EDWOSB eligibility. For this reason, SBA has described all participants in the program as self-certified. When using the self-certification option, businesses must provide documents supporting their status to the online document repository for the WOSB program that SBA maintains. Required submissions include copies of citizenship papers (birth or naturalization certificates or passports) and, depending on business type, items including copies of partnership agreements or articles of incorporation. Businesses must submit a signed certification on which the owners attest that the documents and information provided are true and accurate. Moreover, businesses must register and attest to being a WOSB in the System for Award Management (SAM), the primary database of vendors doing business with the federal government. Businesses also must make representations about their status in SAM before submitting an offer on a WOSB or EDWOSB solicitation. For third-party certification, businesses submit documentation to approved certifiers. According to third-party certifiers we interviewed, they review documents (and some may conduct site visits to businesses) and make determinations of eligibility. If approved, businesses will receive a document showing receipt of third-party certification. Businesses then can upload the certificate to the WOSB program repository along with documents supporting their EDWOSB or WOSB status.
SBA does not track the number of businesses that self certify and could not provide information on how many self-certified businesses obtained contracts under the WOSB program. While SBA can look at an individual business profile—which lists the documents the business has uploaded to support its eligibility—in the repository to determine if a certificate from a third-party certifier is present, it has no corresponding mechanism to determine if a business lacking such a certificate was self-certified. That is, there are no data fields for certification type in any of the systems used in the program and SBA cannot generate reports to isolate information on certification type by business. According to SBA officials, such information on certification type is not needed because both certification options are treated equally under the program and because all businesses make an attestation of status as a WOSB whether or not the business uses a third-party certifier. Therefore, SBA considers this a self-certification program. Contracting officers obtain a solicitation and conduct market research to identify businesses potentially capable of filling contract requirements. Once a contracting officer has determined that a solicitation can be set aside under the WOSB program, the officer obtains bids and selects an awardee for the contract. Only after selecting an awardee does the agency obtain access to the business’s profile in the WOSB program repository, which lists the documents the business has uploaded to support its eligibility (the business must grant the contracting agency access). SBA’s Contracting Officer’s Guide to the WOSB Program states that contracting officers must determine that specified documents have been uploaded by the business to the program repository, but the guide does not require contracting officers to assess the validity of those documents.
Only after viewing the uploaded documents would the contracting officer be able to determine whether the business was likely self-certified or had a certificate from a third-party certifier. Two groups we interviewed that represent the interests of WOSBs said that contracting officers prefer third-party over self-certified businesses when selecting an awardee. A representative of one organization thought that contracting officers tended to select businesses with third-party certifications because they did not have to review as many documents in the program repository as for self-certified businesses. However, the certification method does not appear to influence contract awards. According to officials from all contracting agencies with whom we spoke and SBA officials, contracting staff are unaware of the certification method used by a business until after an awardee is selected. SBA generally has not overseen third-party certifiers and lacks reasonable assurance that only eligible businesses receive WOSB set-aside contracts. SBA has not put in place formal policies to review the performance of third-party certifiers, including their compliance with a requirement to inform businesses of the no-cost, self-certification option. The agency has not developed formal policies and procedures for reviewing the required monthly reports certifiers submit to SBA, has not standardized reporting formats for the certifiers, and has not addressed most issues raised in the reports. Although SBA examinations have found high rates of ineligibility among a sample of businesses that previously received set-aside contracts, SBA has not determined the causes of ineligibility or made changes to its oversight of certifications to better ensure that only eligible businesses participate in the program. To date, SBA generally has not conducted performance reviews of third-party certifiers and does not have procedures in place for such reviews.
According to federal standards for internal control, agencies should conduct control activities such as performance reviews and clearly document internal controls. Third-party certifiers agree to be subject to performance reviews by SBA at any time to ensure that they meet the requirements of their agreement with SBA and program certification regulations—including requirements related to the certification process, obtaining supporting documents, informing businesses about the no-cost option for WOSB program certification, and reporting to SBA on certifier activities. Before beginning the certification process, SBA requires third-party certifiers to inform businesses in writing (on an SBA-developed form) that they can self-certify under the program at no cost. Certifiers, a WOSB advocacy group, and WOSBs offered differing perspectives on fees for third-party certification. Representatives of all three certifiers with whom we spoke stated that the fees their organizations charged for certifications were reasonable and affordable for a small business. Staff from one WOSB advocacy organization told us that such fees could deter some businesses from participating in the program, but owners of WOSBs with whom we spoke generally did not concur with this view. Certifiers with whom we spoke told us that they inform businesses about their option to self-certify, but SBA does not have a method in place to help ensure that certifiers are providing this information to businesses, and agency officials told us that they do not monitor whether certifiers fulfilled the requirement. SBA officials said that they believe the no-cost option ameliorates the risk of excessive fees charged to businesses or the risk that fees would deter program participation, and that because all certifiers must provide national coverage, businesses can seek lower fees. Officials also told us that they believed businesses and advocacy groups would inform the agency if certifiers were not providing this information.
However, they were not able to describe how SBA would learn from businesses that certifiers had failed to provide this information. The requirement is part of SBA’s agreement with third-party certifiers, but SBA has not described the requirement on the program website or made it part of informational materials for businesses. Thus, businesses may not know of this requirement without being informed by the certifier, or know to inform SBA if the certifier had not fulfilled the requirement. The largest certifier, WBENC, has delegated the majority of certification activity to other entities that SBA also has not reviewed. WBENC conducted about 76 percent of third-party certifications through May 2014. However, WBENC delegates WOSB certification responsibilities to 14 regional partner organizations. SBA neither maintains nor reviews information about standards and procedures at WBENC, including a compliance review process that WBENC told SBA it uses for each of its 14 partner organizations. SBA officials told us that they rely on information available on public websites to determine the fee structures set by WBENC’s partner organizations. SBA also does not have copies of the compliance reviews that WBENC told SBA it conducts annually for each partner organization. SBA requested documents from WBENC, including information about WBENC’s oversight of its 14 partner organizations. WBENC’s response was incomplete; it referenced but did not provide its standards and procedures for overseeing partner organizations. SBA told us it recognized that WBENC’s response was incomplete but indicated it had not followed up. Without this information, SBA cannot determine how WBENC has been overseeing the 14 entities to which it has delegated certification responsibilities. Although SBA has not developed or conducted formal performance reviews of certifiers, officials described activities they consider to be certifier oversight.
For example, when a business is denied third-party certification but wishes to self-certify, it must subject itself to an eligibility examination by SBA before doing so. In this case, or during a bid protest, SBA conducts its own review of documentation the business submitted to the certifier. SBA officials stated that these reviews were not intended as a form of certifier oversight but described them as de facto reviews of third-party certifier performance. However, such reviews do not involve a comprehensive assessment of certifiers’ activity or performance over time. An SBA official acknowledged that the agency could do more to oversee certifiers. SBA plans to develop written procedures for certifier oversight to be included in the standard operating procedure (SOP) for the program, which remains under development. But SBA has not yet estimated when it would complete written procedures for certifier oversight or the SOP. Without ongoing monitoring and oversight of the activities and performance of third-party certifiers, SBA cannot reasonably ensure that certifiers have fulfilled the performance requirements of their agreement with SBA—including informing businesses about no-cost certification. SBA has not yet developed written procedures to review required monthly reports from certifiers and does not have a consistent format for reports. In SBA’s agreement with third-party certifiers, the agency requires each certifier to submit monthly reports that must include the number of WOSB and EDWOSB applications received, approved, and denied; identifying information for each certified business, such as the business name; concerns about fraud, waste, and abuse; and a description of any changes to the procedures the organizations used to certify businesses as WOSBs and EDWOSBs. 
Internal control should include documented procedures and monitoring or review activities that help ensure that review findings and deficiencies are brought to the attention of management and resolved promptly. Based on our review of each monthly report submitted from August 2011 through May 2014 (135 in total), not all reports contained consistent information. Some monthly reports were missing the owner names and contact information for businesses that had applied for certification. One certifier regularly identified potential fraud among businesses to which it had denied certification, about one or two per month for 16 of the 34 reporting months included in our review. This certifier provided detailed narrative information in its reports to SBA about its concerns. The reporting format and level of detail reported also varied among certifiers. One certifier listed detailed information on its activities in a spreadsheet. Another described its activities using narrative text and an attached list of applicants for certification. One certifier included dates for certification, recertification, and the expiration of a certification, while other certifiers did not include this information. According to SBA officials, the agency did not have consistent procedures for reviewing monthly reports, including procedures to identify and resolve discrepancies in reports or oversee how certifiers collect and compile information transmitted to the agency. SBA officials said that one official, who recently retired, was responsible for reviewing all certifier monthly reports. Current officials and staff were not able to tell us what process this official used to assess the reports. Finally, with one person responsible for reviewing monthly reports until recently, SBA generally has not followed up on issues raised in reports. 
Agency officials told us that early in the program they found problems with the monthly report of one of the certifiers that indicated the certifier did not understand program requirements, and they contacted the certifier to address the issue. We found additional issues that would appear to warrant follow-up from SBA. For example, two businesses were denied certification by one third-party certifier and approved shortly after by another. SBA stated that it had not identified these potential discrepancies but that it was possible for businesses to be deemed ineligible, resolve the issue preventing certification, and become eligible soon after. However, according to the program regulation, if a business was denied third-party certification and the owner believed the business eligible, the owner would have to request that SBA conduct an examination to verify its eligibility in order to represent the business as a WOSB. According to SBA officials, the agency was unaware of these businesses or their certifications. And, as discussed previously, one certifier regularly identified potential fraud among businesses to which it had denied certification. SBA officials told us that they had not identified or investigated this certifier’s concerns about potential fraud. When we asked SBA officials how the agency addressed such concerns, an official responded that fraudulently entering into a set-aside contract is illegal and the business would be subject to prosecution. However, without SBA following up on these types of issues, it is unclear how businesses committing fraud in the program would be prosecuted. According to an SBA official, the agency has been developing written procedures to review the monthly reports but has not yet estimated when the procedures would be completed. The procedures will be included in SBA’s SOP for the program, which also remains under development. As noted earlier, SBA could not estimate when it would complete the SOP.
Without procedures in place to consistently review monthly reports and respond to problems identified in those reports, SBA lacks information about the activities and performance of third-party certifiers and leaves concerns raised by certifiers unaddressed. SBA’s methods to verify the eligibility of businesses in its WOSB program repository include annual examinations of businesses that received set-aside contracts. SBA’s program responsibilities include conducting eligibility examinations of WOSBs and EDWOSBs, according to SBA’s compliance guide for the WOSB program and its regulation. Section 8(m) of the Small Business Act sets forth eligibility criteria businesses must meet to receive a contract under the WOSB program set-aside. SBA examines a sample of businesses that have a current attestation in SAM and that received a contract during SBA’s examination year. SBA does not include in its sample businesses that had not yet obtained a WOSB program contract. According to SBA officials, staff conducting the eligibility examination review the documents each business owner uploaded to the WOSB program repository to support the representation in SAM of eligibility for WOSB or EDWOSB status. For example, agency officials said that reviewers ensure that all required documents have been uploaded and review the contents of the documents to ensure that a business is eligible. SBA said staff conducting the examination then determine either that the business has met the requirements to document its status as a WOSB, or that information is missing or not consistent with the program requirements and the business is not eligible at the time of SBA’s review to certify itself as a WOSB. SBA officials said the agency also uses the same process to investigate the eligibility of businesses on an ad hoc basis in response to referrals from contracting agencies or other parties, such as other businesses, that question the eligibility of a business.
If a business has not sufficiently documented its eligibility representation, SBA sends a letter directing the business to enter required information or documents into the repository or remove its attestation of program eligibility in SAM within 15 days. If SBA receives no response after 15 days, it sends a second letter instructing the business to remove its WOSB attestation in SAM within 5 days. If the business does not do so, it may be subject to enforcement actions including suspension or debarment from federal contracting or criminal penalties, according to SBA officials. In 2012 and 2013, SBA sent final 5-day letters to 44 businesses identified through annual examinations or examinations following a referral. An SBA official said that the agency is unaware of any such enforcement actions as part of the WOSB program. SBA also decides protests from contracting agency staff or any other interested parties relating to a business’s eligibility. SBA considers protests if there is sufficient, credible evidence to show that the business may not be at least 51 percent owned and controlled by one or more women, or if the business has failed to provide documents required to establish eligibility for the program. Once SBA has received a protest, it examines documents submitted in the case, makes a determination of program eligibility based on the content of these documents, and notifies relevant parties—typically, the contracting officer, the protester (if not the same), and the business—of the determination. If the business is eligible for the set-aside, the contracting officer may make an award to it. Otherwise, the contracting officer may not award the contract to the business in question. From program implementation in April 2011 through July 2, 2014, SBA responded to 27 protests, and in 7 protests the businesses involved were found to be ineligible for the WOSB program.
In the remaining protests, the businesses were found eligible, the party that filed the protest withdrew it, or SBA dismissed the protest. As described earlier in the report, contracting officers check for the presence of documents in the repository when making a WOSB program award. This could be considered part of SBA’s framework to oversee certifications, but the requirement for contracting officers to review documents is limited to ensuring that businesses have uploaded the documents listed in the regulation. Representatives from some of the contracting offices we interviewed believed that they had to assess the validity of the documents or did not think they had the necessary qualifications to do so. However, program guidance does not require contracting officers to assess the validity of these documents, and SBA officials told us contracting officers are not expected to evaluate the eligibility of businesses. SBA activities relating to eligibility verification, particularly examinations, have several weaknesses. For instance, SBA has not yet developed procedures to conduct annual eligibility examinations, although such efforts are in process, according to officials; has not evaluated the results of the eligibility examinations in the context of how the actions of businesses, contracting agencies, and third-party certifiers may have contributed to the high levels of incomplete and inaccurate documentation found in examinations; and has not assessed its internal controls or made procedural changes in response to the findings of its eligibility examinations. According to federal standards for internal control, agencies should have documented procedures, conduct monitoring, and ensure that any review findings and deficiencies are brought to the attention of management and are resolved promptly. Corrective action is to be taken or improvements made within established time frames to resolve the matters brought to management’s attention.
Also, management needs to comprehensively identify risks the agency faces from both internal and external sources, and management should consider all significant interactions between the agency and all other parties. SBA conducted annual eligibility examinations in 2012 and 2013 on a sample of businesses that received contracts under the WOSB program and found that 42 percent of businesses in the 2012 sample were ineligible for WOSB program contract awards on the date of its review, as were 43 percent in the 2013 sample. According to SBA officials, both self- and third-party certified businesses were found ineligible at the time of review. SBA staff reviewed the documents that each business in its sample had posted to the program repository to ensure the businesses had sufficiently supported their attestations as required in program regulations. However, SBA could not provide documentation of a consistent procedure to examine each business. SBA staff reviewing documentation in the repository did not have guidelines describing how to conduct each review. SBA officials told us that they have been developing written procedures for conducting annual eligibility examinations, that the agency missed its estimated completion date, and that it has no new estimate for completion. SBA officials explained that they determined the eligibility of businesses as of a given date after the business received a contract. According to SBA officials, a finding of ineligibility does not mean the business was ineligible at the time of contract award because the status of the business might have changed. Although SBA officials did not know whether businesses examined were eligible at the time of award, the high rate of ineligibility found raises questions about whether contracts may have been awarded to ineligible businesses.
According to SBA officials, information in its repository constantly changes and SBA has yet to determine how, or if, a business was eligible when it received a WOSB set-aside contract. SBA officials told us that they believe they may be able to make such a determination but could not describe exactly how they would conduct the review or confirm that a business was an eligible WOSB or EDWOSB at the time of award. As part of its annual examination, SBA only examines businesses at some point after the business received a contract; therefore, SBA’s examination is limited in its ability to identify potentially ineligible businesses prior to a contract award. SBA officials said that after the annual examinations they did not institute new controls to guard against ineligible businesses receiving program contracts because they described the examinations and their results as a method to gain insight about the program—specifically, that WOSBs may lack understanding of program eligibility requirements—and not a basis for changing oversight procedures. According to SBA officials, the levels of ineligibility found during the examinations were similar to those found in examinations of its other socioeconomic programs. SBA officials said businesses were deemed ineligible because they did not understand the documentation requirements for establishing eligibility, and also attributed the ineligibility of third-party certified businesses to improper uploading of documents by the businesses themselves. SBA officials said they needed to make additional efforts to train businesses to properly document their eligibility. However, SBA officials could not explain how they had determined that lack of understanding was the cause of ineligibility among businesses and have not made efforts to confirm that this was the cause. As a result, they have missed opportunities to obtain meaningful insights into the program. SBA regarded the bid protest as a means of identifying ineligibility.
SBA officials referred to the program as a self-policing program, because of the bid protest function through which competing businesses, contracting officers, or SBA can protest a business’s claim to be a WOSB or EDWOSB and eligible for contract awards under the program. In addition, an SBA official stated that business owners affirm their status when awarded a contract and are subject to prosecution if they had done so and later were found to have been ineligible at the time of contract award—which the official considered a program safeguard. However, without (1) developing program eligibility controls that include procedures for conducting annual eligibility examinations; (2) analyzing the results of the examinations to understand the underlying causes of ineligibility; (3) developing new procedures for examinations, including expanding the sample of businesses to be examined to include those that did not receive contracts; and (4) investigating businesses based on examination results, SBA may continue to find high rates of ineligibility among businesses registered in the WOSB program repository. In turn, this would continue to expose the program to the risk that ineligible businesses may receive set-aside contracts. Also, by reviewing the eligibility of businesses that have not received program contracts, SBA may improve the quality of the pool of potential program award recipients. Set-asides under the WOSB program to date have had a minimal effect on overall contracting obligations to WOSBs and attainment of WOSB contracting goals. WOSB program set-aside obligations increased from fiscal year 2012 to fiscal year 2013. The Department of Defense (DOD), the Department of Homeland Security (DHS), and the General Services Administration (GSA) accounted for the majority of these obligations. The WOSB program set-asides represented less than 1 percent of total federal awards to women-owned small businesses. 
Contracting officers, WOSBs, and others with whom we spoke suggested a number of program changes that might increase use of the WOSB program, including increasing awareness, allowing for sole-source awards, and expanding the list of eligible industries for the set-aside program. WOSB program set-aside obligations increased from fiscal year 2012 to fiscal year 2013. Obligations to WOSBs under the WOSB set-aside program increased from $33.3 million in 2012 to $39.9 million in 2013, and obligations to EDWOSBs increased from $39.2 million in 2012 to $60.0 million in 2013. The National Defense Authorization Act for Fiscal Year 2013 removed the dollar cap on contract awards eligible under the WOSB set-aside program, which may account for some of the increase in obligations from 2012 to 2013. SBA officials told us that they expect increased use of the program in the future as a result of this change. As shown in table 2, three federal agencies—DOD, DHS, and GSA—collectively accounted for the majority of the obligations awarded under the set-aside program. DOD (Air Force, Army, Navy, and all other defense agencies) accounted for 62.2 percent of obligations, DHS for 10.7 percent, and GSA for 4.0 percent. No other individual agency accounted for more than 3.4 percent of obligations awarded under the program. From April 2011 through May 2014, WOSB program set-asides constituted a very small percentage (0.44 percent) of all the contracting obligations awarded to WOSBs (see fig. 1). The majority of obligations awarded to WOSBs were made under other, longer-established set-aside programs. For example, if eligible, a WOSB could receive a contracting award under the 8(a), HUBZone, or SDVOSBC programs, or through a general small business set-aside. WOSBs also can obtain federal contracts without set-asides (through open competition).
Based on our analysis of FPDS-NG data of federal contracting agencies, contract obligations awarded through the WOSB set-aside totaled $228.9 million, or 0.44 percent, of the $52.6 billion in contract obligations awarded to WOSBs from April 2011 through May 2014. Additionally, the WOSB set-aside has had relatively little impact on federal agency achievement of goals for contracting to WOSBs, because the program set-asides represent a very small percentage of all contracting awards to WOSBs. Since 2011, the overall percentage of contracting obligations awarded to WOSBs (through any program or open competition) has remained below the government-wide goal of 5 percent (see table 3). Goal achievement by the three contracting agencies with the highest amount of obligations through the set-aside program varied. For example, DOD did not meet its 5 percent goal for contracting obligations to WOSBs in any of the 3 years. DHS and GSA met their goals in all 3 years. Excluding obligations made by DOD, about 5.7 percent of total federal contracting obligations to small businesses included in SBA’s fiscal year 2013 Small Business Goaling Report were awarded to WOSBs. For the 24 agencies subject to the Chief Financial Officers Act listed in SBA’s scorecards, 19 met their WOSB contracting goal in fiscal year 2012 and 20 met their goal in fiscal year 2013. One agency missed its goal in fiscal year 2012 but met its goal in fiscal year 2013. Four agencies (the same four each year) did not meet their goal for either year. Selected federal contracting officials, businesses that received a WOSB or EDWOSB set-aside, third-party certifiers, and a WOSB advocacy organization with which we spoke gave their perspectives on existing challenges and possible changes to increase program usage. Complexity and burdensome requirements. Contracting officers described challenges to using the WOSB set-aside. 
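The 0.44 percent figure follows from simple division of the cited dollar amounts. A minimal sketch of that arithmetic, using the report's figures (the helper function name is ours, not SBA's or FPDS-NG's):

```python
# Reproduce the set-aside share reported in the text: $228.9 million in
# WOSB set-aside obligations out of $52.6 billion awarded to WOSBs,
# April 2011 through May 2014. Both amounts expressed in millions.

def share_of_obligations(set_aside_millions: float, total_millions: float) -> float:
    """Return set-aside obligations as a percentage of total obligations."""
    return 100 * set_aside_millions / total_millions

wosb_share = share_of_obligations(228.9, 52_600)
print(f"{wosb_share:.2f}%")  # prints "0.44%", matching the figure in the report
```

The same calculation applied to agency-level obligations would yield the per-agency percentages shown in table 2.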
Some contracting officers noted that, generally, all contracts awarded to WOSBs count for the purposes of meeting agencies’ 5 percent goal, and that from their perspective it does not matter whether a contract is awarded to a WOSB using the WOSB program, another set-aside program, or open competition. Some contracting officers said that WOSB program requirements were burdensome or complex relative to other SBA programs with set-asides. Unlike the other programs, the WOSB program requires the use of a separate electronic repository, maintained by SBA, to collect and store certification documents. One contracting officer noted that the contracting process slowed when officials had to seek information from the repository. Another contracting officer told us the role of the contracting officer includes confirming that businesses have uploaded to the SBA repository the documents listed as required in the program regulation—but noted this task is not required under other contracting programs. Lack of awareness and agency commitment. Representatives from advocacy groups also identified awareness of and commitment to the program as another area for improvement. An advocacy group representative told us that some of their member WOSBs had encountered confusion and reluctance on the part of contracting officers to use the program. Another advocacy group said that SBA should engender more commitment to the program among contracting officers and agencies. Another representative noted that there are no consequences for agency leaders who fail to meet contracting goals for WOSBs or do not use the set-aside program. SBA officials described to us consequences that included a low rating in the publicly available SBA contracting scorecard, which may draw negative attention to the agency.
Also, the National Defense Authorization Act for Fiscal Year 2013 includes the extent to which agencies meet contracting goals as a competency by which members of the Senior Executive Service are rated. All of the businesses we interviewed that received WOSB program contracts cited the need for increased agency outreach or awareness of the program. For example, one participant advocated increasing contracting officer awareness and understanding of how an agency could benefit from using the WOSB set-aside program. Changes to increase use of program. Contracting officers also identified changes they believe could increase use of the WOSB set-aside. For example, some noted that allowing sole-source contracts could increase program use. Currently, contracting officers can establish a set-aside only if there is a reasonable expectation that at least two eligible WOSBs will submit a bid for the contract. Some contracting officers suggested expanding the list of North American Industry Classification System (NAICS) codes eligible for use under the WOSB set-aside. For example, one contracting office said that the NAICS codes designated for the set-aside program did not meet its procurement needs. One representative pointed out that SBA had designated some NAICS codes just for EDWOSBs and others for WOSBs. SBA officials told us the agency does not have the authority to change the list of industry sectors eligible for program set-asides without conducting a study of industries in which WOSBs are underrepresented or substantially underrepresented. Representatives from all of the WOSB advocacy groups, three of which are also third-party certifiers, said that expanding the NAICS codes would improve the program. For example, one advocacy group said that certain WOSBs would like to obtain WOSB or EDWOSB set-asides but did not have NAICS codes that were listed as eligible. Another said that they would not limit the number of eligible industries under the program.
Finally, the businesses we interviewed also believed that allowing sole-source awards or adding more NAICS codes would increase program use. Six participants commented on the limitations on awarding sole-source contracts through the WOSB set-aside. Five participants felt that the NAICS codes under the program were limited. One program participant felt that limiting set-asides for the WOSB program to certain NAICS codes was inconsistent with other SBA programs with set-asides, such as 8(a), HUBZone, and SDVOSBC. She gave an example of an agency that issued a draft solicitation seeking to award two contracts each to WOSB set-aside, HUBZone, and SDVOSBC businesses. However, when it became clear that the contract was not in an eligible NAICS code for the WOSB program, the agency converted the two contracts intended for the WOSB set-aside to a general small business category. Some program participants also mentioned positive aspects of the program. Five participants believed that the program provided greater opportunities for their businesses and for WOSBs in general. Furthermore, five of the six businesses with whom we spoke that received only one or two contracts felt that the program improved their ability to compete for a federal contract. For example, one participant noted that while she has not seen many set-aside solicitations for the NAICS code under which her business primarily operates, the existence of the program prompted her to bid on set-asides under other NAICS codes. As the only federal procurement set-aside specifically for women-owned businesses, the WOSB program could play an important role in limiting competition for certain federal contracts to WOSBs and EDWOSBs that are underrepresented in their industries. However, weaknesses in multiple areas of SBA’s management of the program hinder effective oversight of the WOSB program.
Specifically, SBA has limited information about the performance of its certifiers and does not use what information is available to help ensure certifiers adhere to program requirements, a deficiency exacerbated by the highest-volume certifier (which accounts for about 76 percent of third-party certifications) delegating duties to 14 partner organizations. An incomplete response to SBA’s request for information on WBENC’s certification process demonstrates the need for an oversight framework to ensure that certifiers adhere to agreements with SBA. SBA did not follow up on the incomplete response from WBENC, which raises questions about SBA’s commitment to oversight of the certifiers. Furthermore, the lack of procedures for review and analysis of monthly certifier reports means that SBA has forgone opportunities to oversee certifiers and pursue concerns about fraud involving individual businesses identified by one certifier. According to federal standards for internal control, agencies should conduct control activities such as performance reviews and clearly document internal controls. Formalizing existing ad hoc processes (by developing procedures) will help SBA obtain the information necessary to better ensure that third-party certifiers fulfill the requirements of their agreements with SBA—an effort SBA said it plans to undertake, although it has not estimated a completion date. Additionally, SBA could use results and insights from reviews of certifier reports—which are to include concerns about businesses—to inform its processes for eligibility verification, particularly examinations. Weaknesses related to SBA’s examination of program participants and approach to enforcement mean that the agency cannot offer reasonable assurance that only eligible businesses participate in the program.
Although the agency’s examinations found high rates of ineligibility, SBA has not yet formalized examination guidance for staff or followed up on examination results to determine the status of ineligible businesses at the time of contract award. SBA also has not focused on identifying factors that may be causing businesses to be found ineligible; rather, the agency appears to have determined that more training for businesses about eligibility requirements could address the issue. However, training alone would be a limited response to examination results, and SBA officials could not say what analysis determined training to be the relevant response. Additionally, the sample of businesses that SBA examines includes only those businesses that received WOSB set-aside contracts. All these factors limit SBA’s ability to better understand the eligibility of businesses before they apply for and are awarded contracts. Rather than gather and regularly analyze information related to program eligibility, SBA relies on other parties to identify potential misrepresentation of WOSB status (through bid-protest filings and less formal mechanisms)—a reactive and limited approach to oversight. Federal standards for internal control state that agencies should have documented procedures, conduct monitoring, and ensure that any review findings and deficiencies are brought to the attention of management and are resolved promptly. Additionally, the standards state that management needs to comprehensively identify risks the agency faces from both internal and external sources. By expanding its examination of firms and analyzing and following up on the results, SBA could advance the key program goal of restricting competition for set-aside contracts to WOSBs and EDWOSBs. We make the following recommendations to improve management and oversight of the WOSB program.
To help ensure the effective oversight of third-party certifiers, the Administrator of SBA should establish and implement comprehensive procedures to monitor and assess performance of certifiers in accord with the requirements of the third-party certifier agreement and program regulations. To provide reasonable assurance that only eligible businesses obtain WOSB set-aside contracts, the Administrator of SBA should enhance examination of businesses that register to participate in the WOSB program, including actions such as: promptly completing the development of procedures to conduct annual eligibility examinations and implementing such procedures; analyzing examination results and individual businesses found to be ineligible to better understand the cause of the high rate of ineligibility in annual reviews, and determining what actions are needed to address the causes; and implementing ongoing reviews of a sample of all businesses that have represented their eligibility to participate in the program. We provided a draft of this report to SBA, DHS, DOD, and GSA for review and comment. SBA provided written comments that are described below and reprinted in appendix II. The other agencies—DHS, DOD, and GSA—did not provide comments on this report. SBA generally agreed with our recommendations and said that the agency is already in the process of implementing many of our recommendations. While SBA generally agreed with our recommendations, the agency stated that the report could be clearer about the program examination process. Specifically, SBA stated that the agency has authority to conduct eligibility examinations at any time for any firm asserting eligibility to receive WOSB program contracts. We have added information to the draft to clarify this point. The draft report we sent to SBA for comment discussed the agency’s process of conducting annual eligibility examinations and provided a description of SBA’s current process.
SBA also stated that “the report recommends that SBA conduct ongoing annual eligibility examinations and implement such procedures.” However, our report recommends that SBA complete the development of procedures to conduct annual eligibility examinations (which SBA has conducted for the past 2 years) and implement such procedures. We separately recommend implementing ongoing reviews of a sample of all businesses that have represented their eligibility to participate in the program. We do not specify that these eligibility reviews, which are eligibility examinations, should be annual. SBA could choose to conduct these reviews more frequently if deemed appropriate. Whether SBA conducts eligibility examinations annually or more frequently, examinations should be consistently conducted by following written procedures and the results assessed to determine the causes of ineligibility. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees and members, the Secretary of DOD, the Secretary of DHS, the Administrator of GSA, the Administrator of SBA, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report examines the Women-Owned Small Business (WOSB) program of the Small Business Administration (SBA). 
More specifically, the report (1) describes how WOSBs and economically disadvantaged WOSBs (EDWOSBs) are certified as eligible for the program, (2) examines the extent to which SBA has implemented internal control and oversight procedures of WOSB program certifications, and (3) discusses the effect the program has had on federal contracting opportunities available to WOSBs or EDWOSBs. To describe how businesses are certified as eligible for the program, we reviewed SBA policies and procedures to establish program eligibility including the responsibilities of businesses, third-party certifiers, contracting officers, and SBA. We interviewed SBA officials from the Office of Government Contracting. To evaluate how certification procedures may affect program participation, we obtained from SBA monthly reports (from September 2011 through May 2014) from each of the four third-party certifiers. We took steps to develop a dataset we could use for our analyses, including creating and merging monthly spreadsheets, identifying missing business names, and clearing the list of duplicate entries. We compared this dataset with Federal Procurement Data System-Next Generation (FPDS-NG) data for businesses that received a WOSB program set-aside contract. We determined that the data on how many third-party certified businesses received contracts as part of the WOSB program were sufficiently reliable for our purposes by corroborating a sample of businesses we identified as third-party certified with documentation for the businesses in the WOSB program repository. We were not able to determine how many self-certified businesses obtained contracts under the program, because the format of the documentation maintained in the SBA repository does not include a record of documents that were present at the time of contract award. 
We also interviewed a sample of contracting officers from selected components in the Department of Defense (DOD), Department of Homeland Security (DHS), and the General Services Administration (GSA). We selected these three agencies to represent a range of program participation based on the number and total obligation amounts of active set-aside contracts awarded in 2011 through 2013. Within DOD and DHS, we selected two components from each that demonstrated high- and mid-level program participation (based on number of contracts and obligation amounts). For DOD, we selected the U.S. Army and Defense Logistics Agency. For DHS, we selected the U.S. Coast Guard and Customs and Border Protection. Within each of the components and GSA, we compared FPDS-NG data on program activity by obligation amount, contract number, and North American Industry Classification System (NAICS) codes for 2011 through 2013. For each, we selected two contracting offices using the same criteria we used to select agencies, which included identifying a high- and mid-level program obligation amount and offices with multiple contracts under multiple NAICS codes. We excluded one Customs and Border Protection office because only one office awarded multiple contracts under multiple NAICS codes. We also interviewed three of the four SBA-approved third-party certifiers (the El Paso Hispanic Chamber of Commerce, the National Women Business Owners Corporation, and the U.S. Women’s Chamber of Commerce). We were unable to interview the Women’s Business Enterprise National Council (WBENC). SBA requested documentation of WBENC’s oversight procedures for the certification activity and fee structures of its regional partner organizations. WBENC provided a written response to SBA, which was not fully responsive to the request, as discussed in the report. We conducted semi-structured interviews with a sample of 10 businesses that were certified for the program, 9 of which had received a set-aside contract.
To evaluate SBA’s oversight of certification, we reviewed the program regulation and program documents, agreements with third-party certifiers, 135 monthly reports submitted by all four third-party certifiers, and letters SBA sends to inform businesses when their WOSB or EDWOSB status is in question, among other documents. We discussed the agency’s procedures to monitor certifiers and ensure participant eligibility with SBA officials from the Office of Government Contracting. We compared officials’ descriptions of their oversight activities with federal internal control standards. We inquired about documentation and eligibility examinations conducted in 2012 and 2013, and a planned examination for 2014, and reviewed reports of the 2012 and 2013 examination results. We also inquired about ongoing plans to develop a standard operating procedure, and future plans to evaluate the program. To determine what effect, if any, the WOSB program has had on federal contracting opportunities available to WOSBs, we analyzed set-aside contract obligations in FPDS-NG from April 2011 through May 2014 to identify trends in program participation by contracting agencies included in both FPDS-NG and SBA goaling reports. Using a review of FPDS-NG documentation and electronic edit checks, we deemed these data sufficiently reliable for our purposes. We also analyzed SBA goaling reports from 2011 through 2013 to describe progress made towards meeting the 5 percent goal for federal contracting to WOSBs. We conducted semi-structured interviews with a sample of 10 businesses that were certified for the program, 9 of which had received a set-aside contract. We selected this nongeneralizable sample of businesses to reflect whether they had been certified by a third-party entity or had self-certified. While the results of these interviews could not be generalized to all WOSB program participants, they provided insight into the benefits and challenges of the program.
We interviewed SBA officials and contracting agency officials about the extent to which the program has met its statutory purpose of increasing contracting opportunities for WOSBs. Finally, we interviewed industry advocates, including three of the four third-party certifiers (the El Paso Hispanic Chamber of Commerce, the National Women Business Owners Corporation, and U.S. Women’s Chamber of Commerce) and one other industry advocate (Women Impacting Public Policy) actively involved in promoting the program with WOSBs. We conducted this performance audit from August 2013 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Andrew Pauline (Assistant Director), Julie Trinder-Clements (analyst-in-charge), Pamela Davidson, Daniel Kaneshiro, Julia Kennon, Barbara Roesmann, Jessica Sandler, and Jena Sinkfield made key contributions to this report.

In 2000, Congress authorized the WOSB program to increase contracting opportunities for WOSBs by allowing contracting officers to set aside procurements to such businesses. SBA, which administers the program, issued implementing regulations that became effective in 2011. GAO was asked to review the WOSB program. This report examines (1) how businesses are certified as eligible for the WOSB program, (2) SBA's oversight of certifications, and (3) the effect the program has had on federal contracting opportunities available to WOSBs or EDWOSBs.
GAO reviewed relevant laws, regulations, and program documents; analyzed federal contracting data from April 2011 through May 2014; and interviewed SBA, officials from contracting agencies selected to obtain a range of experience with the WOSB program, third-party certifiers, WOSBs, and organizations that represent their interests. Businesses have two options to certify their eligibility for the women-owned small business (WOSB) program. Whether self-certifying at no cost or using the fee-based services of an approved third-party certifier, businesses must attest that they are a WOSB or an economically disadvantaged WOSB (EDWOSB). Businesses also must submit documents supporting their attestation to a repository the Small Business Administration (SBA) maintains (required documents vary depending on certification type), and, if they obtain a third-party certification, to the certifier. SBA performs minimal oversight of third-party certifiers and has yet to develop procedures that provide reasonable assurance that only eligible businesses obtain WOSB set-aside contracts. For example, SBA generally has not reviewed certifier performance or developed or implemented procedures for such reviews, including determining whether certifiers inform businesses of the no-cost self-certification option, a requirement in the agency's agreement with certifiers. SBA also has not completed or implemented procedures to review the monthly reports that third-party certifiers must submit. Without ongoing monitoring and oversight of the activities and performance of third-party certifiers, SBA cannot reasonably assure that certifiers fulfill the requirements of the agreement. Moreover, in 2012 and 2013, SBA found that more than 40 percent of businesses (that previously received contracts) it examined for program eligibility should not have attested they were WOSBs or EDWOSBs at the time of SBA's review. 
SBA officials speculated about possible reasons for the results, including businesses not providing adequate documentation or becoming ineligible after contracts were awarded, but SBA has not assessed the results of the examinations to determine the actual reasons for the high numbers of businesses found ineligible. SBA also has not completed or implemented procedures to conduct eligibility examinations. According to federal standards for internal control, agencies should have documented procedures, conduct monitoring, and ensure that any review findings and deficiencies are resolved promptly. As a result of inadequate monitoring and controls, potentially ineligible businesses may continue to incorrectly certify themselves as WOSBs, increasing the risk that they may receive contracts for which they are not eligible. The WOSB program has had a limited effect on federal contracting opportunities available to WOSBs. Set-aside contracts under the program represent less than 1 percent of all federal contract obligations to women-owned small businesses. The Departments of Defense and Homeland Security and the General Services Administration collectively accounted for the majority of the $228.9 million in set-aside obligations awarded under the program between April 2011 and May 2014. Contracting officers, business owners, and industry advocates with whom GAO spoke identified challenges to program use and suggested potential changes that might increase program use, including allowing sole-source contracts rather than requiring at least two businesses to compete and expanding the list of 330 industries in which WOSBs and EDWOSBs were eligible for a set-aside. GAO recommends that SBA, among other things, establish and implement procedures to monitor certifiers and improve annual eligibility examinations, including by analyzing examination results. SBA generally agreed with GAO's recommendations. |
Since December 5, 1989, DOE has not produced War Reserve pits for the nuclear stockpile. On that date, the production of pits at Rocky Flats, which was DOE’s only large-scale pit-manufacturing facility, was suspended because of environmental and regulatory concerns. At that time, it was envisioned that production operations would eventually resume at the plant, but this never occurred. In 1992, DOE closed its pit-manufacturing operations at Rocky Flats without establishing a replacement location. In 1995, DOE began work on its Stockpile Stewardship and Management Programmatic Environmental Impact Statement, which analyzed alternatives for future DOE nuclear weapons work, including the production of pits. In December 1996, Los Alamos was designated as the site for reestablishing the manufacturing of pits. DOE is now reestablishing its capability to produce War Reserve pits there so that pits removed from the existing stockpile for testing or other reasons can be replaced with new ones. Reestablishing the manufacturing of pits will be very challenging because DOE’s current efforts face new constraints that did not exist previously. For example, engineering and physics tests were used in the past for pits produced at Rocky Flats to ensure that those pits met the required specifications. Nuclear tests were used to ensure that those pits and other components would perform as required. While engineering and physics tests will still be utilized for Los Alamos’s pits, the safety and reliability of today’s nuclear stockpile, including newly manufactured pits, must be maintained without the benefit of underground nuclear testing. The United States declared a moratorium on such testing in 1992. President Clinton extended this moratorium in 1996 by signing the Comprehensive Test Ban Treaty, through which the United States forwent underground testing indefinitely. 
In addition, to meet regulatory and environmental standards that did not exist when pits were produced at Rocky Flats, new pit-production processes are being developed at Los Alamos. DOD is responsible for implementing the U.S. nuclear deterrent strategy, which includes establishing the military requirements associated with planning for the stockpile. The Nuclear Weapons Council is responsible for preparing the annual Nuclear Weapons Stockpile Memorandum, which specifies how many warheads of each type will be in the stockpile. Those weapons types expected to be retained in the stockpile for the foreseeable future are referred to as the enduring stockpile. DOE is responsible for managing the nation’s stockpile of nuclear weapons. Accordingly, DOE certifies the safety and reliability of the stockpile and determines the requirements for the number of weapons components, including pits, needed to support the stockpile. DOE has made important changes in the plans for its pit-manufacturing mission. Additionally, some specific goals associated with these plans are still evolving. In December 1996, DOE’s goals for the mission were to (1) reestablish the Department’s capability to produce War Reserve pits for one weapons system by fiscal year 2001 and to demonstrate the capability to produce all pit types for the enduring stockpile, (2) establish a manufacturing capacity of 10 pits per year by fiscal year 2001 and expand to a capacity of up to 50 pits per year by fiscal 2005, and (3) develop a contingency plan for the large-scale manufacturing of pits at some other DOE site or sites. In regard to the first goal, DOE and Los Alamos produced a pit prototype in early 1998 and believe they are on target to produce a War Reserve pit for one weapons system by fiscal year 2001. In regard to the second goal, DOE has made important changes. Most notably, DOE’s capacity plans have changed from a goal of 50 pits per year in fiscal year 2005 to 20 pits per year in fiscal 2007. 
What the final production capacity at Los Alamos will be is uncertain. Finally, DOE’s efforts to develop a contingency plan for large-scale production have been limited and when such a plan will be in place is not clear. To meet the first goal of reestablishing its capability to produce a War Reserve pit for a particular weapons system by fiscal year 2001, DOE has an ambitious schedule. This schedule is ambitious because several technical, human resource, and regulatory challenges must be overcome. Approximately 100 distinct steps or processes are utilized in fabricating a pit suitable for use in the stockpile. Some of the steps in manufacturing pits at Los Alamos will be new and were not used at Rocky Flats. Each of these manufacturing processes must be tested and approved to ensure that War Reserve quality requirements are achieved. The end result of achieving this first goal is the ability to produce pits that meet precise War Reserve specifications necessary for certification as acceptable for use in the stockpile. Skilled technicians must also be trained in the techniques associated with the pit-manufacturing processes. Currently, according to DOE and Los Alamos officials, several key areas remain understaffed. According to a Los Alamos official, the laboratory is actively seeking individuals to fill these positions; however, the number of qualified personnel who can perform this type of work and have the appropriate security clearances is limited. Finally, according to DOE and Los Alamos officials, the production of pits at Los Alamos will be taking place in a regulatory environment that is more stringent than that which existed previously at Rocky Flats. As a result, new processes are being developed, and different materials are being utilized so that the amount and types of waste can be reduced. Los Alamos achieved a major milestone related to its first goal when it produced a pit prototype on schedule in early 1998. 
DOE and Los Alamos officials believe they are on schedule to produce a War Reserve pit for one weapons system by fiscal year 2001. DOE plans to demonstrate the capability to produce pits for other weapons systems but does not plan to produce War Reserve pits for these systems until sometime after fiscal year 2007. Furthermore, DOE’s Record of Decision stated that Los Alamos would reestablish the capability to manufacture pits for all of the weapons found in the enduring stockpile. Currently, however, according to DOE officials, DOE does not plan to reestablish the capability to produce pits for one of the weapons in the enduring stockpile until such time as the need for this type of pit becomes apparent. Once Los Alamos demonstrates the capability to produce War Reserve pits, it plans on establishing a limited manufacturing capacity. Originally, in late 1996, DOE wanted to have a manufacturing capacity of 10 pits per year by fiscal year 2001 and planned to expand this capacity to 50 pits per year by fiscal 2005. In order to achieve a 10-pits-per-year manufacturing capacity by fiscal year 2001, DOE was going to supplement existing equipment and staff in the PF-4 building at Los Alamos. To achieve a capacity of 50 pits per year by fiscal year 2005, DOE planned a 3-year suspension of production in PF-4 starting in fiscal year 2002. During this time, PF-4 would be reconfigured to accommodate the larger capacity. Also, some activities would be permanently moved to other buildings at Los Alamos to make room for the 50-pits-per-year production capacity. For example, a number of activities from the PF-4 facility would be transferred to the Chemistry and Metallurgy Research building. Once PF-4 was upgraded, it would be brought back on-line with a production capacity of 50 pits per year. In December 1997, DOE’s new plan changed the Department’s goal for implementing the limited manufacturing capacity. DOE still plans to have a 10-pits-per-year capacity by fiscal year 2001. 
However, DOE now plans to increase the capacity to 20 pits per year by fiscal year 2007. If DOE decides to increase production to 50 pits per year, it would be achieved sometime after fiscal year 2007. As with the original plan, in order to achieve a 50-pits-per-year capacity, space for manufacturing pits in PF-4, which is now shared with other activities, would have to be completely dedicated to the manufacturing of pits. DOE officials gave us a number of reasons for these changes. First, because the original plan required a 3-year shutdown of production in PF-4, DOE was concerned that there would not be enough pits during the shutdown to support the stockpile requirement, considering that pits would have been destructively examined under the stockpile surveillance program. Under the new plan, annual production will continue except for 3- or 4-month work stoppages during some years to allow for facility improvements and maintenance. Second, DOE was concerned that pits produced after the originally planned 3-year shutdown might need to be recertified. Third, DOE wanted to decouple the construction activities at the Chemistry and Metallurgy Research building from planned construction at PF-4 because linking construction projects at these two facilities might adversely affect the pit-manufacturing mission’s schedule. DOE’s 1996 plan called for developing a contingency plan to establish a large-scale (150-500 pits per year) pit-manufacturing capacity within 5 years, if a major problem were found in the stockpile. DOE has done little to pursue this goal. It has performed only a preliminary evaluation of possible sites. DOE has not developed a detailed contingency plan, selected a site, or established a time frame by which a plan should be completed. According to DOE officials, they will not pursue contingency planning for large-scale manufacturing until fiscal year 2000 or later.
The purpose for the contingency plan was to lay out a framework by which DOE could establish a production capacity of 150 to 500 pits per year within a 5-year time frame. Such a capacity would be necessary if a systemwide problem were identified with pits in the stockpile. This issue may become more important in the future, as existing nuclear weapons and their pits are retained in the stockpile beyond their originally planned lifetime. Research is being conducted on the specific effects of aging on plutonium in pits. A DOE study found that Los Alamos is not an option for large-scale pit manufacturing because of space limitations that exist at PF-4. As a result, large-scale operations would most likely be established at some other DOE nuclear site(s) where space is adequate and where some of the necessary nuclear infrastructure exists. DOE has not specified a date by which the plan will be completed, and, according to DOE officials, the contingency plan has not been a high priority within DOE for fiscal years 1998-99. According to DOE officials, they may fund approximately $100,000 for a study of manufacturing and assembly processes for large-scale manufacturing in fiscal year 1999. In addition, according to DOE officials, DOE has not pursued contingency planning for large-scale manufacturing more aggressively because the Department would like more work to be done at PF-4 prior to initiating this effort. In this regard, the officials stated that the development of a contingency plan requires more complete knowledge of the processes, tooling, and technical skills still being put in place at Los Alamos. This knowledge will serve as a template for large-scale manufacturing. DOE believes that this knowledge should be well defined by fiscal year 2000. According to information from DOE, the total cost for establishing and operating the pit-manufacturing mission under its new plan will be over $1.1 billion from fiscal year 1996 through fiscal 2007. 
This estimate includes funds for numerous mission elements needed to achieve DOE’s goals. This estimate does not include over $490 million in costs for other activities that are not directly attributable to pit production but are needed to support a wide variety of activities, including the pit-manufacturing mission. Some key controls related to the mission are either in the formative stages of development or do not cover the mission in its entirety. DOE provided us with data reflecting the total estimated costs of its new plans and schedules. These data were developed for the first time during our audit. DOE emphasized that these costs should be treated as draft estimates instead of approved numbers. On the basis of this information, the costs for establishing and operating the pit-manufacturing mission were estimated to total over $1.1 billion from fiscal year 1996 through fiscal 2007. Table 1 shows the total estimated costs related to the various elements of the mission. At the time of our review, DOE estimated that by the end of fiscal year 1998, it would have spent $69 million on the mission. Other activities are needed to support a wide variety of efforts, including the pit-manufacturing mission but are not directly attributable to pit production. These include construction-related activities at various Los Alamos nuclear facilities. For example, one activity is the construction upgrades at the Chemistry and Metallurgy Research building. DOE and Los Alamos officials stated that the costs of these activities would have been incurred whether or not Los Alamos was selected for the pit-manufacturing mission. However, unless these activities are carried out, DOE and Los Alamos officials believe that it will be difficult for them to achieve the mission’s goals. Table 2 shows the total estimated costs of these other supporting activities. 
The success of DOE’s pit-manufacturing mission at Los Alamos requires the use of effective cost and managerial controls for ensuring that the mission’s goals are achieved within cost and on time. An effective cost and managerial control system should have (1) an integrated cost and schedule control system, (2) independent cost estimates, and (3) periodic technical/management reviews. DOE and Los Alamos have taken actions to institute these cost and managerial controls related to the pit mission. However, some of these controls are either in the formative stages of development or are limited to addressing only certain elements of the mission instead of the entire mission. An integrated cost and schedule control system would allow managers to measure costs against stages of completion for the pit-manufacturing mission’s overall plan. For example, at any given time, the plan might identify a certain percentage of the mission’s resources that were to be spent within established limits. If variances from the plan were to exceed those limits, corrective actions could be taken. DOE and Los Alamos have in place, or are in the process of developing, (1) an integrated planning and scheduling system for the pit-manufacturing mission and (2) a separate financial management information system for monitoring costs. Los Alamos’s planning and scheduling system for the pit-manufacturing mission will eventually track, in an integrated fashion, all key planning and scheduling milestones. This system will enable managers to have timely and integrated information regarding the mission’s progress. Currently, individual managers are tracking their own progress toward important milestones but do not have integrated mission information. If their individual milestones slip, managers can take corrective actions. The integrated planning and scheduling system will enable managers to have information regarding the mission’s progress as a whole. 
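The variance-monitoring idea described above can be sketched in a few lines. All milestone names, dollar figures, and the tolerance limit below are hypothetical illustrations, not DOE data:

```python
# Illustrative sketch of an integrated cost-and-schedule variance check.
# Milestone names, budgets, and the tolerance are assumptions, not DOE data.

TOLERANCE = 0.10  # flag variances larger than +/-10% of planned cost to date

# (milestone, planned cost to date in $M, actual cost to date in $M)
milestones = [
    ("Equipment installation", 12.0, 11.4),
    ("Process qualification", 8.0, 9.6),
    ("Facility upgrades", 20.0, 19.5),
]

def variance_report(items, tolerance=TOLERANCE):
    """Return (milestone, fractional variance) pairs exceeding the tolerance."""
    flagged = []
    for name, planned, actual in items:
        variance = (actual - planned) / planned
        if abs(variance) > tolerance:
            flagged.append((name, round(variance, 3)))
    return flagged

print(variance_report(milestones))  # -> [('Process qualification', 0.2)]
```

Only the second milestone exceeds the assumed 10-percent limit, so it alone would prompt corrective action; the point is simply that an integrated plan gives managers a baseline against which such variances can be computed at all.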
According to a Los Alamos official, the planning and scheduling system will be completed in December 1998. Los Alamos’s financial management information system, through which mission-related costs can be monitored, provides managers with information that enables them to track expenditures and available funds. Eventually, this system will be interfaced with the pit-manufacturing mission’s integrated planning and scheduling system. However, according to a Los Alamos official, this may take several years. Independent cost estimates are important, according to DOE, because they serve as analytical tools to validate, cross-check, or analyze estimates developed by proponents of a project. DOE’s guidance states that accurate and timely cost estimates are integral to the effective and efficient management of DOE’s projects and programs. According to DOE and Los Alamos officials, independent cost estimates are required by DOE’s guidance for individual construction projects but are not required for other elements of the pit-manufacturing mission. DOE has two construction projects directly related to the pit mission and five others that indirectly support it. The Capability Maintenance and Improvements Project and the Transition Manufacturing and Safety Equipment project are directly related to the pit-manufacturing mission. The Nuclear Materials Storage Facility Renovation, the Chemistry and Metallurgy Research Building Upgrades Project, the Nuclear Materials Safeguards and Security Upgrades Project, the Nonnuclear Reconfiguration Project, and the Fire Water Loop Replacement Project indirectly support the mission as well as other activities at Los Alamos. DOE plans to eventually make an independent cost estimate for most of these construction projects. According to a DOE official, independent cost estimates have been completed for the Nuclear Materials Storage Facility Renovation, the Nonnuclear Reconfiguration Project, and the Fire Water Loop Project. 
Independent cost estimates have been performed for portions of the Chemistry and Metallurgy Research Building Upgrades Project. Additionally, a preliminary independent cost estimate was performed for the Capability Maintenance and Improvements Project prior to major changes in the project. DOE officials plan to complete independent cost estimates for the Nuclear Materials Safeguards and Security Upgrades Project, the revised Capability Maintenance and Improvements Project, and portions of the Transition Manufacturing and Safety Equipment project, depending upon their complexity. Because the bulk of mission-related costs are not construction costs, these other funds will not have the benefit of independent cost estimates. The mission’s elements associated with these funds include activities concerning War Reserve pit-manufacturing capability, pit-manufacturing operations, and certification. Moreover, according to DOE and Los Alamos officials, no independent cost estimate has been prepared for the mission as a whole, and none is planned. According to these officials, this effort is not planned because of the complexity of the mission and because it is difficult to identify an external party with the requisite knowledge to accomplish this task. It is important to note, however, that these types of studies have been done by DOE. In fact, DOE has developed its own independent cost-estimating capability, which is separate and distinct from DOE’s program offices, to perform such estimates. Technical/management reviews can be useful in identifying early problems that could result in cost overruns or delay the pit-manufacturing mission. DOE and Los Alamos have taken a number of actions to review particular cost and management issues. 
These include (1) a “Change Control Board” for the entire mission, (2) a technical advisory group on the management and technical issues related to the production of pits, (3) peer reviews by Lawrence Livermore National Laboratory on pit-certification issues, and (4) annual mission reviews. The Change Control Board consists of 14 DOE, Los Alamos, and Lawrence Livermore staff who worked on the development of the mission’s integrated plan. The Board was formed in March 1998 to act as a reviewing body for costs and management issues related to the mission. This group will meet quarterly or more regularly, as needed, to resolve cost or schedule problems. The group’s initial efforts have focused on addressing unresolved issues in the integrated plan. For example, the group has merged data from Lawrence Livermore National Laboratory and Los Alamos into the integrated plan and is updating a key document associated with the mission’s master schedule. Since July 1997, Los Alamos has been using a technical advisory group composed of nuclear experts external to Los Alamos and DOE. This group, paid by Los Alamos, provides independent advice and consultation on management and technical issues related to pit manufacturing and other related construction projects. The specific issues for assessment are selected either by the group or upon the request of Los Alamos’s management. According to the group’s chairman, Los Alamos has historically had problems with project management, and the group’s work has focused on efforts to strengthen this aspect of the pit-manufacturing mission. For example, the group has identified the need for and provided advice on the development of key planning documents. This group meets at Los Alamos on a monthly basis. Los Alamos plans specific peer reviews by Lawrence Livermore to independently assess the processes and tests related to the certification of pits. 
Los Alamos’s use of these peer reviews is an effort to provide an independent reviewing authority because Los Alamos is responsible for both manufacturing the pits and approving their certification. An initial planning session for this effort is scheduled for the fall of 1998. DOE and Los Alamos officials conducted a review of the pit-manufacturing mission in September 1997. The purpose of this review was to brief DOE management on the progress and status of various elements associated with the mission. As a result of the 1997 review, DOE and Los Alamos began developing an integrated plan that brings together the various elements of the mission. According to Los Alamos officials, such reviews will be held annually. DOD is responsible for implementing the U.S. nuclear deterrent strategy. According to officials from various DOD organizations, DOE’s pit-manufacturing mission is critical in supporting DOD’s needs. As a result, representatives from both Departments have conferred on and continue to discuss plans for the mission. Two important issues remain unresolved. First, officials from various DOD organizations have concerns about changes in the manufacturing processes that will be used to produce pits at Los Alamos. Second, on the basis of preliminary analyses by various DOD organizations, some representatives of these organizations are not satisfied that DOE’s planned capacity will meet the anticipated stockpile needs. DOE is responsible for ensuring that the stockpile is safe and reliable. The safety and reliability of the pits produced at Rocky Flats were proven through nuclear test detonations. Officials from various DOD organizations are concerned that Los Alamos’s pits will be fabricated by some processes that are different from those employed previously at Rocky Flats. Furthermore, pits made with these new processes will not have the benefit of being tested in a nuclear detonation to ensure that they perform as desired. 
As a result, officials from various DOD organizations want assurance that Los Alamos’s pits are equivalent to those produced at Rocky Flats in all engineering and physics specifications. To accomplish this, DOE and Los Alamos plan to have Lawrence Livermore conduct peer reviews. These peer reviews will focus on the certification activities related to the first type of pit to be produced. This will help verify that the necessary standards have been met. According to representatives from both Departments, they will continue to actively consult on these issues. The other unresolved issue between DOD and DOE is DOE’s planned pit-manufacturing capacity. Several efforts are currently under way within various DOD organizations to determine the stockpile’s needs and the associated requirements for pits. DOD has not established a date for providing DOE with this information. Nevertheless, on the basis of the preliminary analyses performed by various DOD organizations, many DOD officials believe that DOE’s capacity plans will not meet their stockpile needs. According to these officials, their requirements will be higher than the production capacity planned at Los Alamos. As a result, these officials do not support DOE’s stated goal of developing a contingency plan for a large-scale manufacturing capacity sometime in the future. Rather, these officials told us that they want DOE to establish a large-scale manufacturing capacity as part of its current efforts. However, DOD officials said that they will be unable to give detailed pit-manufacturing requirements until the lifetime of pits is specified more clearly through DOE’s ongoing research on how long a pit can be expected to function after its initial manufacture. According to DOE officials, they believe that the planned capacity is sufficient to support the current needs of the nuclear weapons stockpile. 
Furthermore, no requirement has been established for a larger manufacturing capacity beyond that which is planned for Los Alamos. DOE officials told us that they are discussing capacity issues with DOD and are seeking to have joint agreement on the required capacity. However, no date has been established for reaching an agreement on this issue. DOE plans to spend over $1.1 billion through fiscal year 2007 to establish a 20-pits-per-year capacity. This capacity may be expanded to 50 pits per year sometime after fiscal year 2007. Various DOD organizations have performed preliminary analyses of the capacity needed to support the stockpile. These analyses indicate that neither the 20-pits-per-year capacity nor the 50-pits-per-year capacity will be sufficient to meet the needs of the stockpile. As a result, officials from organizations within DOD oppose DOE’s plan for not developing a large-scale manufacturing capacity now but rather planning for it as a future contingency. Once the various DOD organizations have completed their stockpile capacity analyses, DOD can then let DOE know its position on the needs of the nuclear stockpile. DOE will then be faced with the challenge of deciding how it should respond. A decision to pursue a production capacity larger than that planned by DOE at Los Alamos will be a major undertaking. Because of the cost and critical nature of the pit-manufacturing mission, DOE needs to ensure that effective cost and managerial controls are in place and operating. DOE and Los Alamos have not fully developed some of the cost and managerial control measures that could help keep them within budget and on schedule. An integrated cost and schedule control system is not in place even though millions of dollars have been spent on the mission. Furthermore, only a small portion of the costs associated with the mission has had the benefit of independent cost estimates. 
Without fully developed effective cost and managerial controls, the mission could be prone to cost overruns and delays. In order for DOE to have the necessary information for making pit-production capacity decisions, we recommend that the Secretary of Defense do the following:
- Provide DOE with DOD’s views on the pit-manufacturing capacity needed to maintain the stockpile. This should be done so that DOE can use this information as part of its reevaluation of the stockpile’s long-term capacity needs. While we understand that DOD cannot yet provide detailed requirements, DOE can be provided with the findings of the preliminary analyses of various DOD organizations.
In order to ensure that the pit-manufacturing mission at Los Alamos supports the nuclear stockpile in a cost-effective and timely manner, we recommend that the Secretary of Energy take the following measures:
- Reevaluate existing plans for the pit-manufacturing mission in light of the issues raised by DOD officials regarding the capacity planned by DOE.
- Expedite the development of the integrated cost and schedule control system at Los Alamos. This needs to be done as soon as possible to help ensure that the mission is achieved within cost and on time.
- Conduct independent cost estimates for the entire pit-manufacturing mission. This can be done either for the mission as a whole or for those individual mission elements that have not had independent estimates.
We provided DOE and DOD with a draft of this report for review and comment. DOE concurred with all but one recommendation in the report.
That recommendation was that the Secretary of Energy “establish a separate line item budget category for the pit-manufacturing mission at Los Alamos.” In its comments, DOE emphasized that its current budgeting and accounting practices related to pit production are consistent with appropriation guidelines, are consistent with budgeting and accounting standards, and are responsive to the Government Performance and Results Act. DOE also stated that it plans to keep congressional staff informed of the mission’s progress through quarterly updates. These updates will be initiated following the approval of the budget for fiscal year 1999. In a subsequent discussion, DOE’s Laboratory Team Leader in the Office of Site Operation said that these updates will include information on the mission’s cost and milestones. He noted that the cost information provided could be as detailed as congressional staff require. Our recommendation was aimed at getting DOE to identify the total estimated costs associated with the pit-manufacturing mission in a clear and comprehensive manner to the Congress. The clear identification of total estimated costs is important because the pit-manufacturing mission is critical to national security interests and represents a significant financial investment for the future. Since DOE prepared a cost estimate covering the total pit mission during our audit, a baseline has been established. We believe that DOE’s planned quarterly updates will be an appropriate means of updating this cost information for the Congress. As a result, we have deleted this recommendation from our final report. DOE also provided several clarifications to the report, and the report has been revised where appropriate. DOE’s comments are provided in appendix II. DOD agreed with the information presented in our draft report and provided us with technical clarifications, which we incorporated as appropriate.
DOD did not agree with our recommendation that the Secretary of Defense clearly articulate DOD’s views on the pit-manufacturing capacity needed to maintain the stockpile. DOD was concerned that the aging of pits was not clearly identified in our report as a driving force of pit-production requirements. DOD said that it could not give detailed pit-manufacturing requirements until the lifetime of pits is specified more clearly by DOE. We have modified our report and the recommendation to recognize that DOD believes that it cannot provide DOE with detailed pit-manufacturing capacity requirements until more is known about the aging of pits. However, we believe that there is merit in DOD’s sharing with DOE the information from the preliminary analyses of various DOD organizations. This information would be useful for DOE in its long-term planning efforts, especially those related to contingency planning. DOD’s comments are included in appendix III. To address our objectives, we interviewed officials and obtained documents from DOD, DOE, Los Alamos, and the Nuclear Weapons Council. We did not independently verify the reliability of the estimated cost data that DOE provided to us. According to DOE, these data represent its best estimates of future mission costs but are likely to change as the mission progresses and should not be viewed as final. Our scope and methodology are discussed in detail in appendix I. We performed our review from October 1997 through August 1998 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to the Secretary of Energy; the Secretary of Defense; the Director, Office of Management and Budget; and appropriate congressional committees. We will also make copies available to others on request.
To obtain information about the Department of Energy’s (DOE) plans and schedules for reestablishing the manufacturing of pits, we gathered and analyzed various documents, including DOE’s (1) Record of Decision for the Stockpile Stewardship and Management Programmatic Environmental Impact Statement, (2) guidance for stockpile management and the pit-manufacturing mission, and (3) the draft Integrated Plan for pit manufacturing and certification. We discussed with DOE and Los Alamos National Laboratory officials the basis for the mission’s plans and schedules. These officials also discussed why changes were made to these plans and schedules in December 1997. DOE and Los Alamos officials discussed with us their progress in meeting milestones, which we compared with the established major milestones for the mission. In order to have a better understanding of the efforts taking place at Los Alamos, we also met with DOE and contractor employees at Rocky Flats who were formerly involved with the production of pits at that site. These individuals discussed the pit production issues and challenges that they faced at Rocky Flats. Cost information associated with the pit-manufacturing mission was obtained primarily from DOE’s Albuquerque Operations Office. This information was compiled by DOE with the assistance of Los Alamos officials. These costs were only recently prepared by DOE and Los Alamos. According to a DOE official, this effort took several months partly because of changes in DOE’s mission plans. These costs were provided for us in current-year dollars. As such, we did not adjust them to constant-year dollars. Additionally, we did not independently verify the accuracy of the cost data. These data were in draft form during our review and not considered approved by DOE. We interviewed both DOE and Los Alamos officials regarding the methodology that was used to develop the cost data. 
In addition, we also discussed with DOE and Los Alamos officials cost and managerial controls related to the mission and reviewed pertinent documents on this subject. To understand unresolved issues between the Department of Defense (DOD) and DOE regarding the manufacturing of pits, we spoke with representatives from DOD, DOE, and Los Alamos. DOD officials with whom we spoke included representatives from the Joint Chiefs of Staff, Nuclear and Chemical and Biological Defense Programs, Army, Air Force, Navy, and Strategic Command. We also met with a representative of the Nuclear Weapons Council. Our work was conducted in Golden, Colorado; Germantown, Maryland; Albuquerque, New Mexico; Los Alamos, New Mexico; Alexandria, Virginia; and Washington, D.C., from October 1997 through August 1998 in accordance with generally accepted government auditing standards.
| Pursuant to a congressional request, GAO provided information on the Department of Energy's (DOE) efforts to manufacture war reserve nuclear weapon triggers, or pits, at its Los Alamos National Laboratory, focusing on: (1) DOE's plans and schedules for reestablishing the manufacturing of pits at Los Alamos; (2) the costs associated with these efforts; and (3) unresolved issues regarding the manufacturing of pits between the Department of Defense (DOD) and DOE. GAO noted that: (1) DOE's plans for reestablishing the production of pits at Los Alamos National Laboratory have changed and are still evolving; (2) DOE expects to have only a limited capacity online by fiscal year (FY) 2007; (3) specifically, DOE plans to reestablish its capability to produce war reserve pits for one weapons system by FY 2001 and plans to have an interim capacity of 20 pits per year online by FY 2007; (4) this planned capacity differs from the goal that DOE established in FY 1996 to produce up to 50 pits per year by fiscal 2005; (5) DOE has not decided what the final production capacity at Los Alamos will be; (6) DOE has done little to develop a contingency plan for the large-scale manufacturing of pits (150-500 pits per year); (7) large-scale manufacturing would be necessary if a systemwide problem were identified with pits in the stockpile; (8) the current estimated costs for establishing and operating DOE's pit-manufacturing mission total over $1.1 billion from FY 1996 through fiscal 2007; (9) this estimate does not include over $490 million in costs for other activities that are not directly attributable to the mission but are needed to support a wide variety of defense-related activities; (10) GAO also noted that some key cost and managerial controls related to DOE's pit-manufacturing mission are either in the formative stages of development or do not cover the mission in its entirety; (11) DOD and DOE have discussed, but not resolved, important issues regarding: (a) changes in the 
manufacturing processes that will be used to produce pits at Los Alamos; and (b) the pit-manufacturing capacity planned by DOE; (12) officials from various DOD organizations have expressed concerns about the equivalence of Los Alamos's pits to the pits previously manufactured at Rocky Flats because some manufacturing processes will be new at Los Alamos and are different from those previously used by Rocky Flats; (13) also, officials from various DOD organizations are not satisfied that DOE's current or future capacity plans will be sufficient to meet the stockpile's needs; (14) various DOD organizations have performed preliminary analyses of the capacity needed to support the stockpile; (15) on the basis of these analyses, some of these officials believe that the stockpile's needs exceed the 20-pits-per-year capacity that DOE may establish in the future; (16) however, DOD officials said that they will be unable to give detailed pit-manufacturing requirements until the lifetime of pits is more clearly specified by DOE; and (17) DOE is currently studying this issue. |
Illicit activities, such as drug trafficking, robbery, fraud, or racketeering, produce cash. Money laundering is the process used to transform the monetary proceeds derived from such criminal activities into funds and assets that appear to have come from legitimate sources. Money laundering generally occurs in three stages. As shown in figure 1, in the placement stage, cash is converted into monetary instruments, such as money orders or traveler’s checks, or deposited into financial institution accounts. In the layering stage, these funds are transferred or moved into other accounts or other financial institutions to further obscure their illicit origin. In the integration stage, the funds are used to purchase assets in the legitimate economy or to fund further activities. There is no way to determine the actual amount of money that is being laundered in general, let alone through a single industry such as the securities industry. However, experts have estimated that money laundering in the global financial system is between 2 and 5 percent of the world’s gross domestic product. Estimates of the amount of money laundered in the United States have been as high as $100 billion. Money launderers can target any of the various types of businesses that participate in the U.S. securities industry. Broker-dealers, for instance, provide a variety of products and services to retail (usually individual) and institutional investors—buying and selling stocks, bonds, and mutual fund shares. As shown in figure 2, two types of broker-dealers—introducing brokers and clearing brokers—perform different roles that can affect the extent of their anti-money laundering responsibilities. Introducing brokers provide brokerage services and offer financial advice to customers, while clearing brokers process and settle transactions, including those of introducing brokers that do not clear their own trades. Some broker-dealers regulated as clearing firms may clear only their own firms’ transactions and not those of other firms. These firms are known as self-clearing firms.
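The estimate above that laundering equals 2 to 5 percent of the world's gross domestic product implies a dollar range that a quick calculation makes concrete. The world GDP figure used here is an assumption for illustration (roughly the late-1990s level), not a number from the report:

```python
# Back-of-the-envelope range implied by the 2-5 percent estimate.
# WORLD_GDP is an assumed figure for illustration only.
WORLD_GDP = 30e12  # ~$30 trillion, assumed

low, high = 0.02 * WORLD_GDP, 0.05 * WORLD_GDP
print(f"${low/1e12:.1f}-{high/1e12:.1f} trillion laundered per year")
# -> $0.6-1.5 trillion laundered per year
```

Under that assumption, global laundering would run to hundreds of billions of dollars annually, which puts the $100 billion U.S. estimate in context.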
Mutual funds are another major participant in the securities markets. Mutual funds are investment companies that pool the money of many investors and use it to purchase diversified portfolios of securities. The administrator of a mutual fund, which in most cases is the fund’s investment adviser, contracts with other entities to provide the various services needed to operate the fund. Figure 3 shows some of these entities, the services they perform, and some of the institutions that usually perform them. Depending on the extent to which these entities interact with the fund’s customers or accept customer payments, their responsibilities for conducting anti-money laundering activities may also vary. [Figure 3: the services shown include selling fund shares and accepting customer payments, and holding fund securities and cash (banks); the entities performing them include broker-dealers, banks, nonfinancial firms, and other financial institutions.] SEC has primary responsibility for overseeing the various participants in the U.S. securities industry, including broker-dealer and mutual fund firms. It promulgates regulations, performs examinations, and initiates enforcement actions against alleged violators of the securities laws. Before conducting business with the public, broker-dealers are required to register with SEC and must also join and submit to oversight by an SRO. These SROs, which include NASDR and NYSE, oversee members’ compliance with their own rules, rules enacted by SEC, and the securities laws. Federal regulators of depository institutions have oversight responsibilities for banks, thrifts, and their holding companies. Prior to the passage of GLBA in 1999, banks conducting securities activities directly were subject to regulation and supervision by their respective banking regulators rather than SEC.
After GLBA is fully implemented, banks and thrifts conducting certain securities activities will have to do so in entities registered as broker-dealers subject to oversight by SEC and securities industry SROs. The role of the depository institution regulators, with regard to the securities activities of the entities that they regulate, now involves sharing information with SEC, although under certain circumstances these regulators may conduct examinations of the subsidiaries. Under current legislation governing money laundering, the Secretary of the Treasury has a variety of responsibilities. These include issuing anti-money laundering regulations applicable to financial institutions and other organizations, such as banks, broker-dealers, casinos, and money transmitters. Within Treasury, the authority to issue and administer these regulations has been delegated to the Director of the Financial Crimes Enforcement Network (FinCEN). FinCEN was established in 1990 to support law enforcement agencies by collecting, analyzing, and coordinating financial intelligence information to combat money laundering. Although the extent to which broker-dealers and mutual funds are being used to launder money is not known, law enforcement officials were concerned that the securities industry would increasingly be a target for potential money launderers. All financial sectors, and even commercial businesses, could be targeted by money launderers. The securities industry has characteristics similar to other financial sectors but also has some significant differences. Criminals seeking to convert their illegal proceeds to legitimate assets have targeted banks, which take cash for deposit, as a means to initially introduce illicit income into the financial system. 
Law enforcement and securities industry officials said that because securities activities generally do not involve cash, broker-dealers and mutual funds are not as vulnerable as banks during the initial placement stage of the money laundering process. However, some structuring schemes used in the placement stage involve monetary instruments such as money orders, and money launderers could attempt to use broker-dealers and mutual funds that accept these forms of payment. According to law enforcement officials, money launderers would more likely attempt to use brokerage or mutual fund accounts in the layering and integration stages of money laundering, rather than for the placement stage. Similar to their use of banks, money launderers could use brokerage or mutual fund accounts to layer their funds by, for example, sending and receiving money and wiring it quickly through several accounts and multiple institutions. The securities industry could also be targeted for integrating illicit income into legitimate assets. In one case, illicit proceeds from food stamp fraud were used to open brokerage accounts and invest in stocks through an ongoing stream of deposits that ranged from less than $1,000 to almost $10,000. Law enforcement officials were concerned that various characteristics of the securities industry and securities transactions were particularly attractive to money launderers. For example, the U.S. national money laundering strategy for 2000, issued by the Secretary of the Treasury and the U.S. Attorney General, notes that the general nature of the securities industry provides criminals with opportunities to move and thus obscure funds. The report suggests that money launderers may target the industry because funds can be efficiently transferred among accounts and to other financial institutions, both domestically and internationally. 
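The food stamp fraud case above, with its stream of deposits ranging from under $1,000 to almost $10,000, illustrates the structuring pattern firms try to screen for. The following is a simplified sketch of such a screen; the 0.9 factor, the minimum-hit count, and the sample data are assumptions for illustration, not drawn from any actual rule:

```python
# Illustrative structuring screen: flag accounts with repeated deposits
# just under the $10,000 currency-reporting threshold. NEAR_FACTOR,
# MIN_HITS, and the sample deposits are assumptions for illustration.
from collections import defaultdict

THRESHOLD = 10_000
NEAR_FACTOR = 0.9   # deposits in [9,000, 10,000) count as "near threshold"
MIN_HITS = 3        # flag an account after this many near-threshold deposits

def flag_structuring(deposits):
    """deposits: iterable of (account_id, amount). Return flagged accounts."""
    near = defaultdict(int)
    for account, amount in deposits:
        if THRESHOLD * NEAR_FACTOR <= amount < THRESHOLD:
            near[account] += 1
    return sorted(acct for acct, hits in near.items() if hits >= MIN_HITS)

sample = [("A1", 9800), ("A1", 9500), ("A1", 9900), ("A2", 400), ("A2", 9700)]
print(flag_structuring(sample))  # -> ['A1']
```

Real monitoring systems would combine many such signals over time windows and across institutions; this sketch only captures the basic sub-threshold pattern the report describes.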
For example, like some banking organizations, several large broker-dealers have offices located throughout the United States and in many foreign countries. Some law enforcement officials noted that wire transfers, specifically those that involve offshore accounts, can be particularly vulnerable to money laundering. The national strategy report also suggests that money launderers may be attracted to the industry because of the high degree of liquidity in securities products, which can be readily bought and sold. Some law enforcement officials pointed to the high volume, large-dollar amounts, and potentially profitable nature of securities transactions. On a typical day, for example, an estimated 3 billion shares of stock worth over $85 billion are traded on the main U.S. markets—a dramatic increase from about $20 billion in 1995. (Appendix III provides additional information on the size and growth of the U.S. securities industry.) Officials noted that the rapid growth of the securities markets and increasing popularity of investing in stocks and mutual funds may also have raised the industry’s profile with money launderers, who are becoming increasingly sophisticated and are attempting to find as many avenues as possible to launder funds. Law enforcement and securities industry officials also identified several specific financial activities that securities firms conduct and that they viewed to be more at risk for potential money laundering. For example, law enforcement officials expressed concern that on-line brokerage accounts were vulnerable to use by money launderers, and such accounts have grown substantially in the last few years, jumping from an estimated 7 million in 1998 to almost 20 million in 2000. On-line brokerage services provide little opportunity for face-to-face contact with customers or for verifying the identity of those logging into accounts—a safeguard that is important to anti-fraud as well as anti-money laundering initiatives. 
Although the industry already conducts much of its customer contact solely by telephone, securities regulators and industry officials acknowledged that on-line activities pose particular challenges from a money laundering perspective. Law enforcement officials also noted that some large broker-dealers are offering private banking services (broadly defined as financial and related services provided to wealthy clients) that are deemed vulnerable to money laundering. These services generally attempt to offer considerable confidentiality as part of the client relationship, routinely involve large-dollar transactions, and sometimes offer the use of offshore accounts. Some law enforcement officials maintained that the securities industry lacks adequate anti-money laundering requirements and thus represents a weak link in the U.S. regulatory regime that can be exploited by money launderers in their search for new ways to hide their funds. These officials described the securities sector as a “money laundering loophole” within the financial services industry that should be closed, particularly as other financial sectors are being required to improve their defenses against money laundering. For example, Treasury issued rules for banks in 1996 and for money services businesses in 2000 requiring these firms to report suspicious activities, including potential money laundering. However, similar requirements do not yet apply to all broker-dealers and mutual fund firms, and law enforcement officials saw this fact as a reason that criminals may seek to use such firms to facilitate money laundering. Some law enforcement officials also suggested that as financial institutions continue to merge in response to GLBA, the need for consistent and adequate anti-money laundering requirements in all financial sectors is becoming even more pronounced. Securities industry officials acknowledged that money launderers could potentially target their industry.
SEC staff have noted that the large volume of money generated by illegal activities creates a risk for broker-dealers as well as other financial institutions. In a May 2001 speech, an SEC official stated that firms in the securities industry face great risks if they allow themselves to be used for money laundering. The official noted that trillions of dollars flow through the industry each year, and criminal activity within the industry could taint important U.S. capital markets. Despite concerns regarding potential money laundering in the securities industry, the extent to which money launderers are actually using broker-dealers and mutual fund firms is not known. According to law enforcement officials, no organization currently collects information in a way that lends itself to readily identifying cases in which funds generated by illegal activity outside of the securities industry were laundered through brokerage or mutual fund accounts. Legal searches of cases primarily identify money laundering cases in which broker-dealers or others committed securities law violations, such as insider trading, market manipulation, or the sale of fraudulent securities, and then laundered the proceeds from their illegal activities through banks or other financial institutions. Law enforcement and securities industry officials acknowledged that a limited number of cases involving money laundering through broker-dealer or mutual fund accounts could be readily identified to date. At our request, the Internal Revenue Service and the Executive Office for U.S. Attorneys collected information from some of their field staff that identified about 15 criminal or civil forfeiture cases since 1997 involving money laundering through brokerage and mutual fund accounts.
The laundered funds in these cases came from a number of activities, including drug trafficking, illegal gambling, and food stamp fraud, and the estimated amounts of laundered funds varied widely, ranging from $25,000 to $25 million per case. In contrast, during 1999 alone, the United States reported having 996 money laundering convictions, most of which involved funds that were laundered through banks or other means. SEC and industry officials also pointed out that the industry has not had a history of money laundering cases. Law enforcement officials suggested that several factors could have contributed to the limited number of known cases involving money laundered through brokerage or mutual fund accounts. These factors include the difficulty of detecting money laundering at the layering and integration stages and the lack of adequate systems to detect money laundering activities in the securities industry. Specifically, they noted that the absence of a SAR rule may be limiting the identification of money laundering through broker-dealer and mutual fund accounts. A few officials also explained that some investigators faced with time constraints and multiple leads may choose to trace illegal funds through bank rather than brokerage or mutual fund accounts because banks are subject to SAR rules and thus are expected to have SAR-related procedures and documentation needed for investigations. Law enforcement officials anticipated that more cases may surface in the future as criminals continue to search for new ways to launder their funds and turn to the securities industry. One U.S. attorney stated that although, historically, money laundering through the securities industry has not been an apparent problem, some pending investigations involving the movement of Russian funds through various types of financial accounts, including brokerage accounts, indicate that activity in the area may be increasing. 
Other law enforcement agencies were also attempting to identify and develop additional cases in which brokerage and mutual fund accounts were used to launder money. For example, staff at one agency was in the process of analyzing whether money orders made payable to broker-dealers, mutual funds, and other financial institutions were being used for money laundering. Broker-dealers and the firms that receive and process customer payments on behalf of mutual fund groups (hereinafter referred to as mutual fund service providers) can be held criminally liable if they are found to be involved in money laundering. They are also subject to certain reporting and recordkeeping requirements. However, unless a broker-dealer is a subsidiary of a depository institution or of a depository institution’s holding company, or a mutual fund service provider is itself a depository institution (as are some transfer agents), it is not subject to regulations requiring it to file SARs for transactions that could involve money laundering. SEC and the SROs monitor the industry’s compliance with the currency and related reporting and recordkeeping requirements during examinations and, according to SEC officials, are planning to conduct more extensive reviews of firms’ anti-money laundering efforts starting in the fall of 2001. Broker-dealers and mutual fund service providers that accept customer funds are subject to the Money Laundering Control Act of 1986, a statute that applies broadly to all U.S. citizens. This act makes knowingly engaging in financial transactions that involve profits from certain illegal activities a criminal offense. As a result, individuals and companies conducting financial transactions on behalf of customers can be prosecuted if they are found to have conducted transactions involving money from illegal activities.
Broker-dealers and mutual fund service providers can also be prosecuted if they knew or were willfully blind to the fact that a transaction involved illegal profits. Penalties under the Money Laundering Control Act include imprisonment, fines, and forfeiture. Like other financial institutions, broker-dealers and those mutual fund service providers that accept customer funds are required to comply with various BSA or similar reporting and recordkeeping requirements. Such requirements are designed to be useful in tax, regulatory, or criminal investigations, including those relating to money laundering. As shown in table 1, firms subject to these requirements are to identify and report currency transactions exceeding $10,000 to FinCEN, file reports on foreign bank and financial institution accounts with FinCEN, and report the transportation of currency or monetary instruments into or out of the United States to the U.S. Customs Service. In addition to imposing reporting requirements, the BSA requires broker-dealers and mutual fund service providers to maintain certain records. For example, broker-dealers and other financial institutions conducting transmittals of funds of $3,000 or more (including wire transfers) are required to obtain and keep information on both the sender and recipient and to record such information on the transmittal order. Broker-dealers also are required to have compliance programs in place for ensuring adherence to the federal securities laws, including the applicable BSA requirements. Regulations under the BSA also require that banks report suspicious transactions of $5,000 or more relating to possible violations of law, but these requirements do not currently apply to all broker-dealers and mutual fund service providers. Amendments to the BSA adopted in 1992 gave Treasury the authority to require financial institutions to report any suspicious transaction relevant to a possible violation of a law.
In 1996, Treasury issued a rule requiring banks to report suspicious activities involving possible money laundering to FinCEN using a SAR form. In 1996, the depository institution regulators promulgated regulations that require broker-dealer subsidiaries of bank holding companies, national banks, and federal thrifts to file SARs if the subsidiaries identify potential money laundering or violations of the BSA involving transactions of $5,000 or more. Until Treasury promulgates SAR rules for broker-dealers, only broker-dealers that are subsidiaries of depository institutions or of their holding companies are subject to SAR requirements. Depository institution regulators have also issued regulations that require banks to have BSA compliance programs in place, including (1) developing internal policies, procedures, and controls; (2) independently testing for compliance; (3) designating an individual responsible for coordinating and monitoring compliance; and (4) conducting training for personnel. Treasury is engaged in renewed efforts to develop a SAR rule for the securities industry and anticipates that a proposed rule will be issued for public comment before the end of 2001. Working with SEC, Treasury initially attempted to develop a SAR rule for the securities industry in 1997. Treasury officials explained that this effort was set aside so that the Department could focus first on cash-intensive businesses, such as the money services businesses and casinos, that are viewed as more vulnerable to money laundering at the placement stage. During 2001, Treasury resumed working with SEC to develop a SAR rule for the securities industry. Key issues being discussed include determining the appropriate threshold for reporting suspicious activities, ensuring that the SAR rule will not interfere with existing procedures for reporting securities law violations that apply to broker-dealers, and providing for compliance program requirements. 
One question being debated is whether the $5,000 threshold for reporting suspicious activities that applies to banks should also apply to the securities industry. Securities industry and regulatory officials explained that this reporting threshold reflects the cash-intensive nature of the banking industry and its vulnerability to money laundering at the placement stage and, as such, should not be applied to securities firms. They also noted that the banking threshold does not reflect the typically high-dollar amount of securities transactions. Instead, these officials have proposed thresholds ranging from $25,000 to $100,000. Officials from a few large firms stated that they currently use thresholds ranging from $250,000 to $1 million in their proprietary systems for monitoring suspicious transactions. They explained that $5,000 transactions would be too difficult to identify in the accounts of several million customers and too burdensome for processing and review purposes. In responding to our survey, five broker-dealer subsidiaries of bank holding companies, which are required by bank regulators to file SARs, suggested that the threshold for the securities SAR rule needed to be raised. A few broker-dealer subsidiaries said that the thresholds should be the same for both the banking and securities industry rules, and the remaining 18 respondents did not offer any comment on tailoring the SAR threshold to the securities industry. Results from our surveys did suggest that the average securities transaction tends to be much larger than $5,000. For example, broker-dealers reported that the average size of an individual transaction processed for retail customers was about $22,000, although the size of these transactions ranged anywhere from $200 to $150,000. Appendix V provides additional survey information on the size of average transaction amounts. 
Securities industry representatives also pointed out that a low SAR threshold could result in an inordinate number of SAR filings from the industry, undermining the ability of law enforcement agencies to use the reports effectively. Federal Reserve officials supported a higher SAR threshold for the securities industry, in part because they thought it could help justify a higher reporting threshold for the banking industry as well. Finally, some law enforcement officials also viewed the reporting threshold as too low for the securities industry but did not propose an alternative amount. Although they acknowledged that the securities industry appears to be engaged in larger dollar transactions than other types of financial institutions, a few officials expressed concerns about having different reporting thresholds for the banking and securities sectors. Another issue being discussed is the scope of suspicious activities that should be reported to FinCEN on the SAR form. Financial regulators, industry, and law enforcement officials agree that any rule requiring the securities industry to report suspicious activities involving money laundering should not replace existing procedures that require broker-dealers to report suspected violations of securities laws. Currently, broker-dealers are to report possible securities law violations to SEC, SROs, or a U.S. attorney’s office. In turn, SEC and the SROs are to refer criminal money laundering offenses that are reported along with suspected securities law violations to the appropriate U.S. attorney’s office. To minimize any potential confusion on the part of the industry, officials emphasized that the language of the SAR rule should be written to ensure that firms understand that they are to continue to report potential securities violations to the appropriate securities regulators.
Both securities industry and law enforcement officials recognize the value of requiring compliance programs for reporting suspicious activities and are discussing whether the SAR rule is the most appropriate mechanism for imposing such requirements. Law enforcement officials said that industry participants cannot fully implement a suspicious activity reporting regime unless they are also required to set up systems to monitor their customers’ activities to prevent and detect transactions involving money laundering. In addition, securities industry officials said that the SAR rule should give broker-dealers a defense against being cited for violating the SAR reporting requirement if they have systems for reasonably detecting suspicious transactions, appropriate procedures for filing SARs, and no basis for believing that these procedures are not being followed. In their view, such a provision would be an effective incentive for broker-dealers to develop and maintain up-to-date programs designed to monitor and report suspicious activities that may involve money laundering. In addition to issues relating to the SAR rule itself, some unique characteristics of the securities industry, including the variety of business structures and processes, product lines, and client bases among broker-dealers and mutual funds, will make implementing the rule more challenging. Not all firms in the industry perform similar activities and thus may have to work with other firms to fulfill their SAR-related responsibilities. For example, determining whether particular transactions are suspicious may require information from an introducing broker on a customer’s identity and business activities or investment patterns and information from a clearing broker on the customer’s payment and transaction histories.
Regulators and others have also noted that addressing anti-money laundering considerations will be more challenging within the securities industry because firms may not collect the same type of information about customers as banks. Broker-dealers are expected to collect enough information about their customers to ensure that any recommended investments are suitable. However, for some accounts this may not include all information, such as the customer’s source of wealth or income, that can be important for assessing whether the customer’s activities are suspicious. Further, with the securities industry, there is a greater need to focus on the layering and integration stages of money laundering. SEC and the securities industry SROs oversee broker-dealers’ compliance with BSA reporting and recordkeeping requirements involving currency and other related transactions. After Treasury granted SEC the authority to examine broker-dealers for compliance with these BSA requirements, SEC adopted Rule 17a-8 under the Exchange Act, incorporating these requirements into its own rules. As a result, SEC and the SROs have the authority to both examine broker-dealers for compliance with these requirements and bring action against firms that violate them. Along with SEC, the SROs are to perform examinations of broker-dealers, including reviews to assess compliance with anti-money laundering reporting and recordkeeping requirements. These examinations do not routinely include assessing compliance with BSA SAR requirements that do not yet apply to the industry. During 2000, NASDR reported that it conducted 1,808 broker-dealer examinations, and NYSE reported that it conducted 319 examinations. Both SROs found that some broker-dealers had deficiencies in supervisory procedures pertaining to the currency reporting and recordkeeping requirements under SEC Rule 17a-8.
Although most broker-dealers are not subject to SAR requirements, National Association of Securities Dealers (NASD) and NYSE representatives noted that they have reviewed broker-dealers’ procedures relating to suspicious activities. In 1989, NASD and NYSE issued guidance advising their members that reporting suspicious activities could prevent firms from being prosecuted under the Money Laundering Control Act. In its issuance, NASD specifically warned its members that failure to report suspicious transactions could be construed as aiding and abetting violations of the act and could subject the broker to civil and criminal charges. In its guidance, NYSE cautioned its members to establish procedures to detect transactions by money launderers and others who seek to hide profits obtained from illegal activity. In conducting reviews of their members’ procedures relative to such guidance, these SROs cited a few firms for deficiencies such as failing to maintain written supervisory procedures to identify and record suspicious transactions. Although the SROs conduct most examinations of broker-dealers, SEC staff also perform broker-dealer examinations, as well as examinations of certain transfer agents. For instance, SEC staff conduct oversight examinations of broker-dealers that are designed to test both the firms’ compliance with securities laws and SEC rules (such as SEC Rule 17a-8) and the quality of SRO examinations. SEC staff also perform “cause examinations” that are initiated in response to special concerns related to a firm. These examinations can sometimes cover compliance with Rule 17a-8, even though BSA compliance may not have been the initial reason for the examination. During 2000, SEC completed 422 oversight examinations and 283 cause examinations but found no violations of anti-money laundering requirements that had not already been identified by the SROs. SEC also conducts examinations of mutual funds and their transfer agents that address some money laundering issues.
Among the firms that act as transfer agents for mutual funds are broker-dealers, banks, and nonfinancial firms that provide other services to mutual funds. Although Rule 17a-8 does not apply to transfer agents that are not broker-dealers, SEC staff explained that the examiners also inquire about these firms’ policies for detecting transactions that may involve money laundering. Most mutual fund shares, however, are sold by broker-dealers or other financial intermediaries that have primary responsibility for complying with the BSA or other currency reporting requirements (such as those contained in the Internal Revenue Code). Recognizing the need to strengthen the securities industry’s efforts to combat money laundering, and anticipating a SAR rule for the industry, SEC and the SROs are in the process of developing a “refocused” approach to anti-money laundering examinations. According to SEC officials, this enhanced approach will result in a broader review of securities firms than the current approach, which focuses on compliance with Rule 17a-8. The new approach is intended to assess firms’ overall anti-money laundering strategies to determine whether they include policies, procedures, and internal control systems for monitoring suspicious activities. SEC officials anticipated that the expanded procedures would be used during examinations starting in the fall of 2001. They also indicated that once Treasury adopts a SAR rule for the securities industry, SEC and the SROs plan to develop additional examination procedures to review firms for compliance with this rule. In responding to our survey, broker-dealers and direct-marketed mutual fund groups reported taking steps to combat money laundering that go beyond the BSA requirements applicable to the securities industry at large. Many firms have gone beyond currency reporting requirements by restricting the acceptance of cash and other forms of payment that may be used to launder money in the placement stage. 
Survey results also showed that some broker-dealers and direct-marketed mutual fund groups had implemented voluntary anti-money laundering measures designed to identify and report suspicious activities that may involve money laundering, but most have yet to take such steps. Clearing brokers were more actively engaged in such voluntary anti-money laundering efforts than introducing brokers. In some cases, introducing brokers relied on their clearing brokers to conduct anti-money laundering activities for them, but not all clearing firms performed such activities or subjected introducing broker transactions to such measures. The largest broker-dealers and direct-marketed mutual fund groups, which represent the majority of assets and accounts in the securities industry, were reportedly much more actively engaged in such voluntary anti-money laundering efforts than small and medium-sized firms, although the latter represent the majority of industry participants. A vast majority of the broker-dealers and direct-marketed mutual fund groups surveyed reported having policies that prohibit the acceptance of cash. By prohibiting cash transactions, firms reduce their vulnerability to money laundering at the placement stage and the number of instances in which they must report certain currency transactions. Our survey showed that 95 percent of a projected 2,979 broker-dealers among our survey population and 92 percent of the 310 mutual fund groups never accept cash in the normal course of business. The remaining firms accept cash only as an exception. For example, these firms might accept small amounts (less than $1,000) or conduct cash transactions approved by a legal or compliance department. Industry officials explained that most securities firms and mutual funds are not set up to handle cash.
Conducting securities business in cash is generally viewed as too burdensome, and many firms have chosen not to develop the needed infrastructure, including policies and procedures, storage facilities, and internal controls. Furthermore, industry officials note that prohibiting the use of cash is a prudent business practice that helps to reduce risks, other than money laundering, commonly associated with handling cash, including theft and embezzlement. Although most broker-dealers and direct-marketed mutual fund groups have reduced their vulnerability to money laundering that involves cash transactions, many may still be vulnerable to money laundering using other forms of payment or deposit, such as traveler’s checks, money orders, and cashier’s checks. As shown in figure 4, over 55 percent of direct-marketed mutual fund groups reported always accepting money orders. According to law enforcement officials, such forms of payment or deposit can be used as part of structuring schemes in which cash is converted into monetary instruments and deposited in increments of less than the $10,000 reporting threshold. In addition, a large portion of mutual fund groups and broker-dealers also reported accepting cashier’s checks, which can also be used in money laundering schemes. A securities industry official pointed out that cashier’s checks are a common form of payment that firms tend to monitor rather than restrict for money laundering purposes. Personal checks are the most widely accepted form of payment but, according to industry officials, are viewed with less concern since they can usually be traced to accounts at depository institutions that have their own anti-money laundering requirements.
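To illustrate the structuring pattern described above, the following is a minimal sketch, not drawn from the report, of the kind of automated check a firm might apply to deposits of monetary instruments. The function name, window size, and "near-threshold" fraction are all hypothetical choices for illustration; only the $10,000 currency-reporting threshold comes from the text.

```python
# Illustrative sketch (hypothetical, not from the report): flag a possible
# structuring pattern -- repeated deposits kept just under the $10,000
# BSA currency transaction reporting threshold.

CTR_THRESHOLD = 10_000  # currency-reporting threshold cited in the report

def flag_possible_structuring(deposits, window=5, near_fraction=0.9):
    """Return True if `window` consecutive deposits each fall just below
    the reporting threshold (between near_fraction * threshold and the
    threshold itself) -- a classic structuring pattern."""
    near_misses = [near_fraction * CTR_THRESHOLD <= d < CTR_THRESHOLD
                   for d in deposits]
    run = 0
    for hit in near_misses:
        run = run + 1 if hit else 0
        if run >= window:
            return True
    return False

# Five $9,500 money-order deposits in a row would be flagged;
# a mix of small and over-threshold deposits would not.
print(flag_possible_structuring([9_500] * 5))         # True
print(flag_possible_structuring([500, 12_000, 800]))  # False
```

An actual monitoring system would of course aggregate across accounts and time windows; this sketch only captures the "increments of less than $10,000" idea in the paragraph above.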
Industry representatives also pointed out that although the survey responses reflect the proportion of firms that accept certain forms of payment, these figures do not likely correspond with the extent to which the cited forms of payment are actually used to deposit funds into broker-dealer or mutual fund accounts. For example, officials from a mutual fund industry association said that considerable amounts of money are deposited into mutual funds through electronic fund transfers from bank accounts or through payroll deposits. Although not subject to SAR requirements, some broker-dealers and direct-marketed mutual fund groups reported having implemented anti-money laundering measures designed to identify and report suspicious activities. According to our survey, 17 percent of broker-dealers, or an estimated 513 of 3,015 firms, reported implementing anti-money laundering measures that go beyond BSA provisions for the securities industry at large. In our survey, we asked firms to identify the type of voluntary anti-money laundering measures, if any, they have implemented. We divided these types of measures into four broad categories: written policies and procedures, such as those requiring staff to learn more about customers and the nature of the customers’ businesses; internal controls, including supervisory reviews to ensure that anti-money laundering policies and procedures are being followed; tools and processes, such as an automated transaction monitoring program to facilitate the detection of potential money laundering; and formal training programs for staff, such as those that provide guidance on how to identify suspicious activities that may involve money laundering. These categories were used to determine the general nature of industry efforts and do not represent a comprehensive list of anti-money laundering efforts. We did not verify the extent to which firms that reported implementing anti-money laundering measures were actually adhering to them.
In addition, the effectiveness of these measures at any firm would depend on various factors, including the level of a firm’s management commitment to detecting and preventing money laundering and the degree to which the employees responsible for following anti-money laundering policies and procedures are being supervised and held accountable. Although 17 percent of broker-dealers overall reported implementing at least one voluntary anti-money laundering measure, broker-dealers that clear trades for themselves and other firms reported being more active in the area. According to our survey analysis, 15 percent of introducing brokers and 63 percent of clearing brokers reported implementing voluntary anti-money laundering measures. As shown in figure 5, the extent to which introducing brokers reported implementing the various voluntary measures identified in our survey ranged from 2 to 10 percent. The extent to which clearing brokers reported implementing the various voluntary measures identified in our survey ranged from 5 to 53 percent. Our survey results also showed that the transactions processed by 40 percent of direct-marketed mutual fund groups were subject to some type of voluntary anti-money laundering measures. Over 30 percent of these groups reported that they or their transfer agents had put in place policies and many of the tools and processes for identifying and monitoring suspicious activities (fig. 6). The extent to which firms had implemented multiple anti-money laundering measures varied. For example, for broker-dealers that reported having implemented voluntary anti-money laundering measures, almost 20 percent indicated they had three or fewer of these measures in place. Almost 30 percent of these broker-dealers reported having implemented more than 10 measures. Even when firms reported implementing the same measures, the scope of their efforts differed. 
For example, officials at one firm explained that its transaction monitoring system, although still in the process of being implemented, was specifically designed for anti-money laundering purposes and focused on the overall financial activities of its customers, including deposits, wire transfers, and transactions involving cash equivalents. This firm’s system will eventually use customer profiling techniques to identify unusual spikes in account activity and will have the ability to make links among related customers to identify any suspicious patterns of activity that may involve money laundering. In contrast, officials at another firm that reported having a transaction monitoring system told us that their system involved the manual review of transactions flagged by a fraud-detection reporting system to determine whether those transactions might also involve money laundering. Similarly, some firms described having ongoing training programs specifically tailored to money laundering issues, including guidance on how to identify suspicious activities. A few firms addressed money laundering issues only as part of the orientation training provided to new employees. Industry officials noted that, in general, a firm’s vulnerability to money laundering will vary, depending upon such factors as its type of business activities, customer base, and company size. They suggested that this variance in vulnerability among firms may account for some of the observed differences in the extent and scope of voluntary anti-money laundering measures implemented by broker-dealers and mutual fund groups. Our survey results also disclosed that a relatively small number of broker-dealers and direct-marketed mutual fund groups filed SARs during calendar year 2000, although they were not legally required to do so. Specifically, 12 of 152 broker-dealer respondents and 6 of 65 mutual fund group respondents indicated that they had filed SARs. Almost all were larger firms.
Most indicated that they had submitted 25 or fewer SARs during 2000, but 1 reported submitting over 200 reports during the year. An industry association official noted that, rather than filing SARs, some firms informally refer suspicious activities that may involve money laundering to appropriate regulatory or law enforcement authorities. Industry officials explained that firms have generally chosen to adopt voluntary anti-money laundering measures to protect themselves from becoming unwitting participants in money laundering activities. The firms hope that implementing such measures will also help to reduce the likelihood of prosecution or civil enforcement actions for violations of money laundering laws and mitigate sanctions in the event that a violation does occur. Industry trade associations encourage voluntary efforts, noting that firms are less likely to be subject to a regulatory penalty (or may have a penalty reduced) if a violation occurs when an effective compliance program is in place. Firms also believe that being associated with criminal elements or activities such as money laundering can threaten their reputation and have a tremendous impact in terms of lost business and costly legal fees. Lastly, firms note that they are taking voluntary actions in anticipation of a SAR rule for broker-dealers. Although a relatively small portion of introducing brokers reported having implemented voluntary anti-money laundering measures, many other introducing brokers reported relying on their clearing brokers to conduct anti-money laundering activities on their behalf. According to our survey, more than half of the introducing brokers indicated that they had not undertaken such efforts, relying instead on their clearing brokers (fig. 7). Almost another third reported that they had no voluntary measures of their own and did not rely on their clearing brokers to undertake such measures for them.
We found that the allocation of anti-money laundering responsibilities between introducing and clearing brokers was not always clear. Of the many introducing brokers that reported relying on clearing brokers to conduct anti-money laundering activities, most did not know exactly what types of anti-money laundering activities the clearing brokers performed. Several introducing brokers indicated that they thought their clearing brokers monitored customer accounts to identify suspicious activities that could involve money laundering and would report such activities to them. Few of the introducing brokers indicated that they received regular transaction reports from their clearing brokers for anti-money laundering purposes. In addition, many of the clearing brokers responding to our survey reported that they either did not engage in voluntary anti-money laundering activities or performed them only for their own firms’ transactions, not for those of introducing brokers. As a result, some introducing brokers may have been mistaken in assuming that their clearing brokers performed anti-money laundering activities on their behalf. We were not able to determine whether any of the introducing brokers in our survey population used the clearing brokers that reported performing anti-money laundering activities. Six of the 29 clearing broker respondents that provided clearing services for other broker-dealers reported that they did not engage in any type of voluntary anti-money laundering measures. While the remaining 23 clearing broker respondents reported having voluntary anti-money laundering measures for their own trades, only about half of these firms indicated they applied the same measures to their introducing brokers’ transactions. Only a few of the clearing brokers reported that they provided other broker-dealers with transaction exception reports for anti-money laundering purposes.
SEC officials explained that existing NYSE and NASD rules, which require introducing and clearing brokers to clearly delineate their respective responsibilities in a written agreement, will require them to include in such agreements any expanded anti-money laundering responsibilities that result from the issuance of a securities SAR rule. Although most broker-dealers and direct-marketed mutual fund groups have yet to implement voluntary anti-money laundering measures, larger firms reported having done so to a greater degree than had medium-sized or small firms. Larger firms also reported having implemented anti-money laundering programs that included a broader range of measures. Specifically, from the results of our survey, we estimated that 66 percent of the 111 large broker-dealers had implemented measures that go beyond those required by applicable BSA regulations, compared with 14 percent of the 1,738 small firms (table 2). An estimated 77 percent of the large direct-marketed mutual fund groups had implemented measures beyond those required, compared with 38 percent of the other mutual fund groups. Appendix VI provides information on the types of voluntary anti-money laundering measures implemented by broker-dealers and mutual fund groups, by size. The largest firms have also been the most active in implementing anti-money laundering measures. For example, 18 firms in our broker-dealer population had assets exceeding $10 billion; together, these firms held about 80 percent of the industry’s total assets as of year-end 1999. We received responses from the nine firms we surveyed in this population. According to their responses, eight of these firms had implemented voluntary anti-money laundering measures, with each reporting nine or more measures in place.
SEC officials told us that having such measures in place at firms like these was particularly important because money launderers would likely attempt to blend their activities with those of the vast numbers of customers and transactions handled by large broker-dealers. However, SEC officials as well as industry officials representing some of the major broker-dealers and mutual fund groups acknowledged that no firms in the industry, including small and medium-sized firms, are immune to money laundering schemes. They suggested that small and medium-sized firms also need to protect themselves from being inadvertently drawn into charges of assisting with money laundering. But the officials stressed that these firms should be allowed to develop anti-money laundering programs that are commensurate with their size, available resources, and, most importantly, any identified risks of vulnerability to money laundering. For example, some small firms with an established and limited client base may know their customers well enough to be able to monitor their business transactions with little need for expensive tracking systems or formal training programs. All 25 respondents to our survey of securities subsidiaries of bank holding companies, along with an additional 14 firms identified as securities subsidiaries of depository institutions during our other industry surveys, reported having implemented anti-money laundering efforts to comply with the SAR rules to which they were subject. For example, at least 85 percent of these bank-affiliated respondents reported having written procedures for identifying and reporting suspicious activities, a formal training program, and internal audit reviews to ensure compliance with anti-money laundering policies and procedures. Most of these firms had also hired compliance staff with knowledge of and expertise in money laundering.
In addition, 12 of 25 respondents that were securities subsidiaries of bank holding companies and 3 of 14 respondents that were subsidiaries of depository institutions reported having filed SARs during 2000. U.S. and foreign officials from law enforcement and financial regulatory agencies have been working together within various international forums to develop anti-money laundering standards. These standards call for participating countries to require their financial institutions, including securities firms, to take steps to prevent money laundering. Among other things, the recommended standards call for firms to identify their customers, report suspicious activities, and implement anti-money laundering programs. Many foreign countries reported having issued most or all of the recommended requirements for their financial institutions, including their securities industry, whereas efforts in the United States are still under way. However, assessing the effectiveness of the measures other countries have taken is difficult because many requirements have only recently been issued. In addition, most countries have also yet to report many cases involving financial institutions, including securities firms. Money laundering issues are the focus of several internationally active forums, including FATF, which is the largest and the most influential intergovernmental body seeking to combat money laundering. Established in 1989, FATF has 31 members, including the United States. Its activities include monitoring members’ progress in implementing anti-money laundering measures, identifying current trends and techniques in money laundering, and promoting the adoption of the organization’s standards. Many of these activities are conducted during plenary meetings attended by delegations from each member country. Smaller international groups that address money laundering issues are also able to attend FATF plenary meetings. 
These groups are often regional, like the Caribbean Financial Action Task Force (CFATF), which includes 25 countries from the Caribbean, Central America, and South America. Other regional forums include the Asia Pacific Group on Money Laundering and the Financial Action Task Force on Money Laundering in South America. Some international bodies have recommended countermeasures against money laundering to their members. These recommendations cover criminal justice and enforcement systems, financial systems, and mechanisms for international cooperation. Some recommendations apply specifically to financial institutions, including securities firms (table 3). Many of the countries participating in international forums reported being in compliance with the FATF recommendations relating to their financial institutions, including the securities industry. For example, 24 of the 26 FATF member countries that participated in a recent self-assessment reported having in place most of the key FATF recommendations that apply to stockbrokers. These included three FATF recommendations suggesting that stockbrokers record customers’ identity, pay attention to unusually large transactions that have no apparent economic purpose, and report suspicious activities to authorities. A fourth recommendation suggested that guidelines be issued to assist stockbrokers in detecting suspicious activities. Canada, one of the two member countries that had not implemented the specific recommendation that stockbrokers be required to report suspicious activities to competent authorities at the time of the self-assessment, has since published suspicious activity reporting regulations that cover the securities industry and are expected to come into force in November 2001. 
In a recent report on the anti-money laundering systems of its members, FATF observed that countries such as Canada and the United States, which have federal systems of government and divide responsibility for overseeing their financial sectors, generally take longer to implement controls for institutions regulated at the state or provincial level. CFATF officials also observed that 8 of the 11 CFATF members with organized securities exchanges had enacted legislation or adopted regulations requiring their securities firms to report suspicious transactions. The United States has applied some of the FATF recommendations to its securities industry. For example, U.S. requirements for currency reporting and funds transfers that apply to the securities industry already comply with international recommendations. According to U.S. officials, many of the existing customer identification requirements for broker-dealers in the United States also are consistent with FATF recommendations. However, the United States has not issued requirements on suspicious activity reporting and related anti-money laundering programs for the securities industry but, as previously discussed, is in the process of developing a SAR rule. Determining how well international anti-money laundering standards have been implemented around the world is difficult because of the limited amount of information available. Some countries have only recently issued anti-money laundering requirements for their financial institutions, including securities firms, and have had little time to fully implement and enforce them. In addition, FATF reports that limited law enforcement tools and resources in some countries may hinder the effective implementation and enforcement of anti-money laundering requirements. Most FATF countries have only a few years of statistics on suspicious activity reporting by banks, and few countries have data on suspicious activity reporting by other financial institutions.
Only six countries provided information to FATF on SARs filed by their securities firms, and all six countries showed limited activity in the area. Specifically, securities firms filed a relatively small portion of the total SARs filed in these countries—from nearly 0 percent to just over 4 percent (table 4). In some countries, suspicious activity reporting requirements for financial institutions are relatively new, and it may be too early to judge the effectiveness of implementing these measures. As previously noted, 8 of 11 CFATF members with organized securities exchanges had implemented legislation or regulations requiring firms to report suspicious activities, but 7 did not enact these laws until 1998 or later. Similarly, FATF’s three newest members issued their anti-money laundering laws covering suspicious activity reporting requirements in 1997, 1998, and 2000. Some countries may not have the necessary enforcement tools and resources to implement anti-money laundering measures properly. FATF reported that while some member countries have sanctions in place for firms that fail to report suspicious activities indicative of money laundering, other countries do not. In the United Kingdom, for example, we were told that officers of firms that do not report suspicious activities can be sentenced to up to 15 years in jail. In general, however, FATF reports that few members have applied such sanctions. In some member countries where the regulatory framework and mechanisms for monitoring suspicious activities are in place, the resources fall short of what is needed to make full use of these systems. FATF identified limited staff resources as a particular problem that has resulted in a backlog of SARs that have not been investigated. However, these countries are planning to allocate more resources to the units responsible for collecting, analyzing, and disseminating suspicious transaction information. 
Most countries had not reported many money laundering cases involving nonbank financial institutions, and data on securities-specific cases are generally not available. Overall, other countries reported having much lower rates of enforcement activity related to money laundering than the United States. FATF reported that law enforcement statistics showed marked differences in the anti-money laundering activities of its member countries and in some cases indicated that members had undertaken few prosecutions or confiscations of funds. Law enforcement statistics for CFATF members also showed limited activity in the area, including few money laundering prosecutions and convictions. In contrast, the United States has reported relatively large numbers of money laundering prosecutions, convictions, confiscations, and seizures. During 1999, for example, the United States had 996 money laundering convictions, the highest number reported by any of the FATF member countries. The extent to which money laundering is occurring in the securities industry is not known, although law enforcement officials believe that various characteristics of the industry may make it a target like other financial industries. An assessment of the industry’s vulnerability must also consider the extent to which the industry is covered by anti-money laundering regulatory requirements and the actions broker-dealers and mutual fund firms themselves have taken to prevent their use by money launderers. Although firms in the securities industry are subject to criminal prosecution for facilitating money laundering and must comply with certain BSA reporting and recordkeeping requirements, not all broker-dealer and mutual fund firms are yet required to report suspicious activities that could be evidence of potential money laundering. As a result, the extent to which firms in the industry have taken steps to detect and prevent money laundering has also varied.
We found that many of the larger firms, which hold the majority of accounts and assets in the industry, had implemented voluntary anti-money laundering measures, but most of the small and medium-sized firms that represent the majority of broker-dealer and mutual fund firms in the industry had not. Although efforts by regulators to develop a SAR rule applicable to the securities industry are under way, they are not yet complete. As a result, regulators, broker-dealers, and mutual fund firms have more to do to further reduce the securities industry’s overall vulnerability to money laundering. We received written comments on a draft of this report from Treasury’s FinCEN and SEC. FinCEN, whose written comments appear in appendix VIII, generally agreed with the draft report. FinCEN noted that the report provides information that will be useful in identifying and evaluating the operational effects of any future anti-money laundering regulatory requirements pertaining to the securities industry. This includes FinCEN’s current efforts to promulgate a draft rule that would require registered broker-dealers to establish programs to identify and report suspicious activities. FinCEN also provided technical comments, which we incorporated in this report as appropriate. SEC, whose written comments appear in appendix IX, similarly agreed with the observations contained in the draft report and noted that the draft provided a helpful overview of issues facing the securities industry, securities regulators, and law enforcement agencies as they continue their efforts to block money laundering. SEC said that our draft report identified two insights that would be particularly helpful to the government’s continued fight against money laundering.
First, because more than 90 percent of broker-dealer and mutual fund firms reported never accepting cash, SEC noted that placement of physical currency into the financial system is not a significant risk for the securities industry. Second, SEC’s letter highlighted that our survey results indicated that the firms responsible for most of the U.S. securities industry’s accounts, transactions, and assets have implemented a broad range of voluntary anti-money laundering measures. We agree that the larger firms were more likely to report having implemented a variety of anti-money laundering measures. We note, however, that we did not attempt to verify the information provided by firms responding to our survey. In addition, the effectiveness of firms’ anti-money laundering programs also depends on such factors as the extent of management support and the level of supervision over employees and customer activity. In its letter, SEC also noted that the implementation of an effective SAR requirement for broker-dealers—one focused on layering and integration—should help all regulators and law enforcement officials address money laundering. SEC also provided technical comments, which we incorporated in this report as appropriate. Justice provided us with informal comments in which it generally concurred with the substance of the draft report and offered a few additional observations. Justice noted, for example, that most of its enforcement efforts have focused on the large broker-dealers, leaving a significant segment of the securities industry unaddressed. It also emphasized that the opportunities for laundering illegal proceeds through on-line brokerage accounts require further scrutiny. As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 14 days from its issuance date. At that time, we will send copies of this report to interested congressional committees and members.
We will also send copies to the Secretary of the Treasury, the U.S. Attorney General, and the Chairman of SEC. Copies will also be made available to others upon request. Key contributors to this report are listed in appendix X. If you have any questions, please call me at (202) 512-5431 or Cody Goebel, Assistant Director, at (202) 512-7329. To develop information on the potential for money laundering in the U.S. securities industry, we obtained the views of securities industry representatives and regulatory officials as well as the perspectives of several law enforcement agencies. At the Department of the Treasury, we spoke with officials from the Financial Crimes Enforcement Network (FinCEN), U.S. Customs Service, Internal Revenue Service, U.S. Secret Service, Office of Foreign Assets Control, and Office of Enforcement. At the Department of Justice, we spoke with officials from the Drug Enforcement Administration, Federal Bureau of Investigation, and Executive Office for U.S. Attorneys. We reviewed relevant reports, including the National Money Laundering Strategy for 2000 issued by Treasury and the U.S. Attorney General, International Narcotics Control Strategy Report issued by the U.S. Department of State, and Report on Money Laundering by the International Organization of Securities Commissions. We also conducted an independent legal search of cases involving money laundering through the securities industry and reviewed indictments, news articles, and other supporting documentation (provided primarily by the Internal Revenue Service and the Executive Office for U.S. Attorneys) to identify relevant cases. To describe the anti-money laundering legal framework applicable to the U.S. 
securities industry and related regulatory oversight, we interviewed officials at the Securities and Exchange Commission (SEC), New York Stock Exchange (NYSE), National Association of Securities Dealers (NASD), Federal Reserve Board, Office of the Comptroller of the Currency, and Office of Thrift Supervision. We also reviewed U.S. anti-money laundering laws, rules, and regulations; accompanying congressional records; SEC and self-regulatory organization (SRO) examination procedures covering compliance with Bank Secrecy Act (BSA) requirements and related anti-money laundering guidance; semiannual reports to Treasury summarizing SEC and SRO examination findings pertaining to BSA; SEC correspondence on anti-money laundering issues; and other relevant documentation. To determine the nature of the anti-money laundering efforts of broker-dealers and mutual funds, we interviewed industry officials at their respective companies and held roundtable discussions with panels of industry officials representing some of the nation’s major broker-dealer and mutual fund firms. We also spoke with representatives of industry trade associations, such as the Securities Industry Association and Investment Company Institute, and reviewed available reports and other documents covering money laundering issues relative to the securities industry. To determine the extent to which firms were undertaking anti-money laundering activities, we also surveyed representative probability samples of broker-dealers and mutual funds. For our survey of broker-dealers, our target population was all broker-dealers conducting a public business, including firms that carry customer accounts, clear trades, or serve as introducing brokers. These firms were selected because their activities may expose them to potential money laundering, unlike brokers who do not conduct transactions for customers.
For our survey of mutual fund firms, our target population was direct-marketed, no-load mutual fund families that sell shares directly to investors and would have some anti-money laundering responsibilities because of their direct contact with customers. The majority of other mutual funds are sold by other financial institutions, such as broker-dealers, banks, and insurance companies, and these entities would have the contact with customers potentially seeking to launder money. Our representative probability samples included three groupings: (1) broker-dealers, (2) securities subsidiaries of bank holding companies and foreign banking organizations, and (3) mutual funds. For each grouping, we used survey data to estimate what types of monetary instruments are accepted and what anti-money laundering activities are conducted, including voluntary measures such as implementing written anti-money laundering procedures to identify noncash suspicious activities, establishing related internal controls, providing personnel training, and filing suspicious activity reports (SAR). Appendix II is an example of one of our survey instruments. Our three statistically valid random samples were drawn so that each sampled firm had a known, nonzero probability of being included in our survey. In the broker-dealer and mutual fund surveys, the samples were allocated across several categories, or strata, defined by the size of the firm, so that proportionally more of the sample was allocated to the strata with larger firms. This makes our estimates of anti-money laundering activity, which tends to vary by size of firm, more precise. To produce the estimates from this survey, answers from each responding firm were weighted in the analysis to account for the different probabilities of selection by stratum and to make our results representative of all the members of the population, including those that were not selected or did not respond to the survey.
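The stratified weighting described above can be illustrated with a small sketch. All stratum sizes, response counts, and answers below are invented for illustration and are not the report's actual figures:

```python
# Hypothetical sketch of stratified survey estimation: each respondent is
# weighted by the inverse of its stratum's sampling/response fraction, so
# that a heavily sampled large-firm stratum does not dominate the estimate.
# All numbers are invented for illustration.

def population_estimate(strata):
    """Estimate the number of firms in the population with some attribute.

    Each stratum is (population_size, usable_responses, responses_with_attribute).
    """
    total = 0.0
    for pop_size, responses, with_attr in strata:
        weight = pop_size / responses   # each respondent "stands for" this many firms
        total += weight * with_attr
    return total

# Hypothetical strata: large firms sampled heavily, small firms lightly
strata = [
    (100, 50, 30),   # large firms: half the stratum responded, 30 said "yes"
    (900, 45, 9),    # small firms: 5 percent responded, 9 said "yes"
]
print(round(population_estimate(strata)))  # -> 240 (60 large + 180 small firms)
```

Without the weights, the raw sample proportion (39 of 95 respondents) would badly overstate how common the attribute is in the full population of 1,000 firms.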
For our survey of broker-dealers, our target population was all broker-dealers conducting a public business, including firms that carry customer accounts, clear trades, or serve as introducing brokers. Using these specifications, we requested year-end 1999 financial and operational reports filed with SEC by 5,460 firms. From this list, we eliminated 1,626 NASD-member firms not conducting a public business and carrying or clearing trades, such as those that act as floor brokers on the various exchanges or that sell mutual funds, direct participation plans, or units of mutual funds, but that nevertheless file broker-dealer reports with SEC. We also removed the 53 section 20 subsidiary firms identifiable in the dataset at that time, resulting in a population of 3,781 broker-dealers for sampling purposes. Table 5 provides additional information on the selected characteristics of this broker-dealer population. From these 3,781 firms, we drew a random probability sample of 231 broker-dealers. We distributed that sample over three size strata defined by total assets of the firms. For our survey of mutual funds, our target population was direct-marketed funds whose shares are sold directly to retail or institutional customers. We developed a list of 363 of these direct-marketed mutual fund groups from lists of no-load mutual fund families (those fund complexes most likely to distribute shares of their funds themselves) from publications widely available at the time of our survey. We drew a random sample of 92 fund families across two strata defined by the year-end 1998 asset size of firms. Some broker-dealers and mutual fund groups have been subject to additional money laundering regulation because of their affiliation with banks or other depository institutions—specifically, as subsidiaries of depository institutions or of their holding companies.
As a result, we attempted to remove this group from our overall survey population of broker-dealers and administered a separate survey for such bank-affiliated brokers. However, we were unable to develop a survey population that included all securities subsidiaries of depository institutions or of their holding companies because comprehensive data on the extent and the identities of all such subsidiaries were not available. The Federal Reserve did maintain data on the securities subsidiaries of the bank holding companies and foreign banking organizations that it oversees. These bank-affiliated broker-dealers were subject to banking SAR requirements. As of December 31, 1999, the Federal Reserve oversaw 53 of these firms, and we randomly selected 37 of them for our survey. We contacted all sampled firms by telephone to determine their eligibility for the survey and to identify who in each firm should receive the questionnaire. We sent questionnaires primarily by fax to firms beginning in December 2000. We made telephone follow-up contacts to some of the firms that did not respond within several weeks, to encourage them to return a completed questionnaire or to answer our questions by telephone. We stopped collecting completed questionnaires in April 2001. We used several variants of the questionnaires tailored to each industry and to whether the firm was subject to the banking SAR requirements. Although we conducted follow-up work with some of the respondents to clarify their responses and obtain additional information, we did not systematically verify the accuracy of survey responses or the extent to which firms were adhering to reported policies and procedures. We received 164 usable broker-dealer responses, 67 mutual fund responses, and 25 responses from securities subsidiaries of bank holding companies.
After adjusting for those sampled firms that were discovered to be ineligible for our survey because they were no longer independent entities in their respective industries, the number of usable responses resulted in final response rates of 87 percent, 83 percent, and 69 percent, respectively (tables 6 to 8). Because not all respondents provided an answer to each question that they were eligible to answer, the item response rate varies and is generally lower than the unit response rate for each industry. During the course of administering the surveys of broker-dealers and mutual fund groups, we identified 12 broker-dealers and 2 mutual fund groups that indicated that they were subject to the banking SAR requirements. Responses of these 14 firms were analyzed in conjunction with responses of the securities subsidiaries of institutions overseen by the Federal Reserve. Point estimates from sample surveys are subject to a number of sources of error, which can be grouped into the following categories: sampling error, coverage error, nonresponse error, measurement error, and processing error. We took a number of steps to limit these errors. Sampling error exists because our random sample is only one of a large number of samples that we might have drawn. Since each sample could have produced a different estimate, we express the precision of our particular sample's results as a 95-percent confidence interval. This is the interval (e.g., ±7 percentage points on either side of the percentage estimate) that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95-percent confident that each of the confidence intervals cited in this report will include the true values in the study population. Surveys may also be subject to coverage error, which occurs when the sampling frame does not fully represent the target population of interest. 
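The confidence-interval arithmetic described above can be sketched in a few lines. The sketch below is illustrative only: the 60-percent estimate is a hypothetical survey response, not a figure from this report. It computes a 95-percent confidence interval for a proportion from a simple random sample, applying the finite population correction that is appropriate when, as here, a sample of 231 firms is drawn from a population of 3,781.

```python
import math

def proportion_ci(p_hat, n, N, z=1.96):
    """95% confidence interval for a proportion from a simple random
    sample of size n drawn from a finite population of size N,
    using the finite population correction (fpc)."""
    fpc = (N - n) / (N - 1)
    se = math.sqrt(fpc * p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Hypothetical: 60% of the 231 sampled broker-dealers report some practice
low, high = proportion_ci(0.60, 231, 3781)
print(f"95% CI: {low:.3f} to {high:.3f}")  # → 95% CI: 0.539 to 0.661
```

Here the half-width is about ±6 percentage points, in line with the ±7-point illustration in the text; a stratified sample such as the one actually used would require stratum-by-stratum weighting, which this sketch omits.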
We used the most up-to-date lists that were available to us, and we attempted to remove firms that were no longer in the industry of interest. For the mutual fund survey, our results are representative only of those mutual fund groups that are direct marketed and that offer predominantly no-load funds, which we believe is closest to the target population of mutual funds that are self-distributing. Also, we discovered a small number of firms in our broker-dealer sample that were affiliated with depository institutions and subject to banking SAR requirements. These firms would have been excluded from our overall broker-dealer sample frame had we known this before conducting our survey; their responses were analyzed with those from our sample frame for bank-affiliated securities subsidiaries, which represented broker-dealers subject to banking SAR requirements. Measurement errors are defined as differences between the reported and true values. Such errors can arise from the way that questions are worded, differences in how questions are interpreted by respondents, deficiencies in the sources of information available to respondents, or intentional misreporting by respondents. To minimize such errors, we asked subject matter experts to review our questionnaires before the survey, and pretested the questionnaires by telephone with respondents at several firms of various sizes and levels of anti-money laundering activity. Nonresponse error arises when surveys are unsuccessful in obtaining any information from eligible sample elements or fail to get valid answers to individual questions on returned questionnaires. To the extent that those not providing information would have provided significantly different information from those that did respond, bias from nonresponse can also result. Because the seriousness of this type of error is often proportional to the level of missing data, response rates are commonly used as indirect measures of nonresponse error and bias. 
We took steps to maximize response rates, such as sending multiple faxes of the questionnaires and making several telephone follow-ups to convert nonrespondents. Finally, surveys may be subject to processing error in data entry, processing, and analysis. We verified the accuracy of a small sample of keypunched records by comparing them with their corresponding questionnaires, and we corrected any errors found. Less than 1 percent of the data items we checked had random keypunch errors that would not have been corrected during data processing. Analysis programs were also independently verified. We conducted follow-up work with many of the respondent firms to obtain additional information on or clarification of their survey responses. We also worked with FinCEN to corroborate survey responses on the extent that securities firms have filed SARs using procedures that attempted to maintain the confidentiality of the identities of our survey respondents. To obtain information on international efforts aimed at addressing money laundering in the securities industry, we interviewed members of the U.S. delegation to the Financial Action Task Force (FATF), officials of the Caribbean Financial Action Task Force (CFATF), and representatives of the U.S. Department of State. We also spoke with foreign officials representing the financial supervising authorities, law enforcement or financial intelligence units, prosecuting offices, and securities industries in Barbados, Germany, Trinidad and Tobago, and the United Kingdom. In addition, we interviewed knowledgeable representatives at the U.S. embassies located in these jurisdictions. We reviewed FATF and CFATF annual reports; summaries of mutual evaluations, self-assessments, and the results of plenary meetings; documents provided by countries we visited on their anti-money laundering oversight and law enforcement efforts; and relevant reports issued by various international working groups and committees. 
Lastly, we researched the Web sites of selected foreign financial regulators and reviewed available documentation on their anti-money laundering regulations, policies, and industry guidelines. Information on foreign anti-money laundering laws or regulations is based on interviews and secondary sources and does not reflect our independent legal analysis. We conducted our work between May 2000 and May 2001 in accordance with generally accepted government auditing standards. One of the reasons that the U.S. securities industry is seen as potentially attractive to money launderers is the rapid growth in securities activities in the United States. As shown in figure 8, the value of securities traded on the NYSE and the Nasdaq markets has grown significantly since 1990. The number of shares traded on these two major U.S. markets has also increased during the 1990s (fig. 9). In addition to the increase in stock trading, mutual funds have also experienced considerable growth in the 1990s. As shown in figure 10, assets in mutual funds exceeded $7 trillion in 2001. Securities markets are now more accessible to investors with the advent of on-line trading accounts that allow investors to open accounts and send transaction instructions to broker-dealers using the Internet. As shown in figure 11, research staff for one broker-dealer reported that the number of on-line brokerage accounts was close to 20 million at the end of 2000. Although many countries have active securities markets, trading on markets in the United States continues to represent the majority of trading on all large markets (fig. 12). To obtain information on the extent to which the U.S. securities industry was used to launder money as well as the vulnerability of the industry to money laundering, we contacted various U.S. law enforcement agencies. 
Within Treasury, we contacted the Customs Service, Internal Revenue Service’s Criminal Investigation Division, Secret Service, FinCEN, Office of Foreign Assets Control, and Office of Enforcement. At Justice, we spoke with officials from the Drug Enforcement Administration, Federal Bureau of Investigation, and Executive Office for U.S. Attorneys. Statistics on the number of cases in which money was laundered through brokerage firms and mutual funds were not readily available, but we compiled a listing of cases in which illegal funds were laundered through brokerage firms or mutual funds from information provided primarily by two of the law enforcement agencies we contacted. At our request, the Internal Revenue Service and Executive Office for U.S. Attorneys collected information from some of their field staff that identified about 15 criminal or civil forfeiture cases since 1997 that involved money laundering through brokerage and mutual fund accounts. Some cases in which money laundering is alleged involved securities fraud or crimes committed by securities industry employees who then moved their illegally earned proceeds to other institutions or used them to purchase other assets, thus violating the anti-money laundering statutes. However, we only included such cases if broker-dealer or mutual fund accounts were alleged to have been used to launder the money. Table 9 provides a list of criminal cases in which proceeds from illegal activities were laundered through brokerage or mutual fund accounts. Table 10 provides a list of forfeiture cases in which property, including assets held in brokerage or mutual fund accounts, that was obtained from proceeds traceable to certain criminal offenses was taken by the United States to be distributed to the victims of such crimes as restitution. 
These lists contain examples of cases that involve charges of money laundering through brokerage or mutual fund accounts and do not represent an exhaustive compilation of all such known cases. For example, law enforcement officials indicated that they were unable to provide information on many relevant pending cases in the area and further emphasized that not all field offices and staff had been formally queried. Specific case information presented in the tables was extracted from public documents provided primarily by the Internal Revenue Service and Executive Office for U.S. Attorneys. The overall average dollar size of individual transactions processed for retail customers among firms responding to our survey was $22,000 for broker-dealers and $11,000 for mutual fund groups, as shown in table 11. The median, or middle value for the full range of responses, was substantially lower than the average for each of the two types of firms. The combined range of the average transactions varied widely—from $200 to $200,000. For those broker-dealers that indicated that they had voluntary anti-money laundering measures in place, large broker-dealers tended to implement more of the anti-money laundering tools and processes than the medium-sized or small firms, as shown in figure 13. Implementation of the other types of voluntary anti-money laundering measures varied. For those mutual fund groups that indicated they had voluntary anti-money laundering measures in place, large mutual funds reported implementing more of the various voluntary anti-money laundering measures, but medium-sized and small funds have also implemented many of the same measures (see fig. 14). CFATF is the Caribbean basin counterpart of FATF that works to assist member governments in implementing anti-money laundering mechanisms. Its 25 members are countries and territories from the Caribbean, Central America, and South America. 
CFATF was created as the result of meetings held in the early 1990s by representatives of the Western Hemisphere countries to develop a common approach to combat the laundering of drug trafficking proceeds. In 1992, CFATF developed 19 recommendations on the basis of this common approach, which have specific relevance to the region and complement the 40 recommendations of FATF. Member governments have signed a memorandum of understanding, known as the Kingston Declaration on Money Laundering, which confirms their agreement to adopt and implement various internationally accepted standards and recommendations and CFATF’s 19 regionally focused recommendations. The CFATF Secretariat monitors members’ implementation of the Kingston Declaration through various mechanisms, including self-assessment questionnaires, mutual evaluations of anti-money laundering regimes, training and technical assistance programs, and plenary and Ministerial meetings. According to CFATF officials, most CFATF jurisdictions that have organized securities exchanges require their securities firms to report suspicious transactions; however, almost all of these requirements were enacted recently. CFATF officials noted that 11 member jurisdictions (i.e., the Bahamas, Barbados, Bermuda, the Cayman Islands, Costa Rica, the Dominican Republic, Jamaica, Nicaragua, Panama, Trinidad and Tobago, and Venezuela) have at least 1 organized securities exchange. Eight of these members have enacted legislation requiring their securities firms to report suspicious transactions to relevant authorities (fig. 15). CFATF officials noted that, with the exception of Panama, which has required its securities firms to report suspicious transactions since 1995, the remaining seven CFATF members with organized securities exchanges enacted such requirements only since 1998. 
The Bahamas and Trinidad and Tobago, for example, enacted anti-money laundering legislation in 2000 requiring, among other things, securities firms to report suspicious transactions to relevant authorities; Jamaica enacted similar legislation in 1999. Although CFATF officials indicated that most CFATF jurisdictions require their financial institutions to report suspicious transactions, U.S. and international anti-money laundering authorities have criticized legislation and implementation in some CFATF jurisdictions. Treasury documents based on the mutual evaluations of CFATF members’ activities indicate that anti-money laundering results in the region are very limited, noting that few money laundering cases have actually been prosecuted or have resulted in convictions. A June 2000 FATF report cited six CFATF jurisdictions as having significant deficiencies in their anti-money laundering systems and labeled them as “non-cooperative countries and territories.” In addition, during 2000, FinCEN issued a series of advisories to U.S. businesses describing deficiencies in the anti-money laundering systems of six CFATF jurisdictions, including three jurisdictions with organized securities exchanges. FinCEN reported in a July 2000 advisory, for example, that the Bahamas did not require its financial institutions to report suspicious activities and that, although the Cayman Islands did have this reporting requirement, it lacked any sanctions for financial institutions that failed to comply. FinCEN also criticized the effectiveness of Panama’s suspicious transaction reporting procedures that allowed the Office of the President of Panama to screen reports before referring them for investigation. However, in 2001, FATF removed the Bahamas, the Cayman Islands, and Panama from its list of noncooperative countries, noting that they had adequately addressed their deficiencies through legislative reforms and implementation efforts. 
FinCEN also retracted its advisories on the basis of the improvements made by these jurisdictions. Securities regulators with whom we spoke in Barbados and Trinidad and Tobago believed that the Caribbean securities markets would likely not be appealing to money launderers and other criminals because of their small size and low trading volumes. For example, regulatory officials stated that, compared with over 7,000 registered broker-dealers in the United States, the Barbados stock exchange has only 17 member-brokers and the local securities market in Trinidad and Tobago has only 5 participating brokers. They also explained that trading activity in these markets is limited to 2 days a week in Barbados and 3 days a week in Trinidad and Tobago. Finally, regulatory officials believed that the small size of these markets would make it relatively easy to detect any unusual or suspicious activities. Law enforcement officials and financial experts in the CFATF jurisdictions we visited considered other sectors of Caribbean economies more vulnerable to money laundering than the securities industry, citing, as an example, the increased use of local commercial businesses to launder money. Trinidad and Tobago law enforcement officials, for example, stated that they were aware of specific cases in which drug dealers invested in legitimate businesses such as supermarkets for the sole purpose of laundering illicit funds. Barbados authorities also stated that they were aware of money laundering through businesses engaged in the import or export of goods, sometimes involving high-volume cash sales. In addition to those named above, Evelyn E. Aquino, Emily R. Chalmers, Bradley D. Dubbs, Tonita W. Gillich, May M. Lee, Christine J. Kuduk, Carl M. Ramirez, and Sindy R. Udell made key contributions to this report. Money Laundering: Oversight of Suspicious Activity Reporting at Bank-Affiliated Broker-Dealers Ceased (GAO-01-474), Mar. 22, 2001. 
Suspicious Banking Activities: Possible Money Laundering by U.S. Corporations for Russian Entities (GAO-01-120), Oct. 31, 2000. Money Laundering: Observations on Private Banking and Related Oversight of Selected Offshore Jurisdictions (GAO-T-GGD-00-32), Nov. 9, 1999. Private Banking: Raul Salinas, Citibank, and Alleged Money Laundering (GAO/T-OSI-00-3), Nov. 9, 1999. Private Banking: Raul Salinas, Citibank, and Alleged Money Laundering (GAO/OSI-99-1), Oct. 30, 1998. Money Laundering: Regulatory Oversight of Offshore Private Banking Activities (GAO/GGD-98-154), June 29, 1998. Money Laundering: FinCEN’s Law Enforcement Support Role Is Evolving (GAO/GGD-98-117), June 19, 1998. Money Laundering: FinCEN Needs to Better Manage Bank Secrecy Act Civil Penalties (GAO/GGD-98-108), June 15, 1998. Money Laundering: FinCEN’s Law Enforcement Support, Regulatory, and International Roles (GAO/GGD-98-83), Apr. 1, 1998. Money Laundering: FinCEN Needs to Better Communicate Regulatory Priorities and Timelines (GAO/GGD-98-18), Feb. 6, 1998. Private Banking: Information on Private Banking and Its Vulnerability to Money Laundering (GAO/GGD-98-19R), Oct. 30, 1997. Money Laundering: A Framework for Understanding U.S. Efforts Overseas (GAO/GGD-96-105), May 24, 1996. Money Laundering: U.S. Efforts to Combat Money Laundering Overseas (GAO/T-GGD-96-84), Feb. 28, 1996. Money Laundering: Stakeholders View Recordkeeping Requirements for Cashier’s Checks as Sufficient (GAO/GGD-95-189), July 25, 1995. | To disguise illegally obtained funds, money launderers have traditionally targeted banks, which accept cash and arrange domestic and international fund transfers. However, criminals seeking to hide illicit funds may also be targeting the U.S. securities markets. Although few documented cases exist of broker-dealer or mutual fund accounts being used to launder money, law enforcement agencies are concerned that criminals may increasingly try to use the securities industry for that purpose. Most broker-dealers or firms that process customer payments for mutual funds are subject to U.S. anti-money laundering requirements. However, unlike banks, most of these firms are not required to report suspicious activities. The Treasury Department is now developing a rule requiring broker-dealers to report suspicious activities. Treasury expects that the rule will be issued for public comment by the end of this year. 
Various intergovernmental groups, such as the Financial Action Task Force, have been working on recommendations that call for member nations to take various steps to combat money laundering through their financial institutions, including requiring securities firms to report suspicious activities. Although many member countries report that they have adopted all or many of these recommendations and have applied them to their securities firms, it is difficult to determine how well the measures are being implemented and enforced. 
You are an expert at summarizing long articles. Proceed to summarize the following text:
With the passage of the Aviation and Transportation Security Act (ATSA) in November 2001, TSA assumed from the Federal Aviation Administration (FAA) the majority of the responsibility for civil aviation security, including the commercial aviation system. ATSA required that TSA screen 100 percent of checked baggage using explosive detection systems by December 31, 2002. As it became apparent that certain airports would not meet the December 2002 deadline, the Homeland Security Act of 2002 in effect extended the deadline to December 31, 2003, for noncompliant airports. Under ATSA, TSA is responsible for the procurement, installation, and maintenance of explosive detection systems used to screen checked baggage for explosives. Airport operators and air carriers continued to be responsible for processing and transporting passengers’ checked baggage from the check-in counter to the airplane. Explosive detection systems include EDS and ETD machines (fig. 1). EDS uses computer-aided tomography X-rays adapted from the medical field to automatically recognize the characteristic signatures of threat explosives. By taking the equivalent of hundreds of X-ray pictures of a bag from different angles, EDS examines the objects inside of the baggage to identify characteristic signatures of threat explosives. TSA has certified, procured, and deployed EDS manufactured by three companies—L-3 Communications Security and Detection Systems (L-3); General Electric InVision, Inc. (GE InVision); and Reveal Imaging Technologies, Inc. (Reveal). In general, EDS is used for checked baggage screening. ETD machines work by detecting vapors and residues of explosives. Human operators collect samples by rubbing bags with swabs, which are then chemically analyzed in the ETD machine to identify any traces of explosive materials. ETD machines are used for both checked baggage and passenger carry-on baggage screening. 
TSA has certified, procured, and deployed ETD machines from three manufacturers, Thermo Electron Corporation, Smiths Detection, and General Electric Company. TSA’s EDS and ETD maintenance contracts provide for preventative and corrective maintenance. Preventative maintenance includes scheduled activities, such as changing filters or cleaning brushes, to increase machine reliability; these activities are performed monthly, quarterly, or yearly based on the contractors’ maintenance schedules. Corrective maintenance includes actions performed to restore machines to operating condition after failure, such as repairing the conveyor belt mechanism after a bag jams the machine. TSA is responsible for EDS and ETD maintenance costs after warranties on the machines expire. From June 2002 through March 2005, Boeing was the prime contractor primarily for the installation and maintenance of EDS and ETD machines at over 400 U.S. airports. TSA officials stated that the Boeing contract was awarded at a time when TSA was a new agency with many demands and extremely tight schedules for meeting numerous congressional mandates related to passenger and checked baggage screening. The cost reimbursement contract entered into with Boeing had been competitively bid and contained renewable options through 2007. Boeing subcontracted for EDS maintenance through firm-fixed-price contracts with the original EDS manufacturers, GE InVision and L-3, which performed the maintenance on their respective EDS. Boeing subcontracted for ETD maintenance through a firm-fixed-price contract with Siemens. Consistent with language in the fiscal year 2005 House Appropriations Committee report and due to TSA’s acknowledgment of Boeing’s failure to control costs, TSA received DHS authorization to negotiate new EDS and ETD maintenance contracts in January 2005. In March 2005, TSA signed firm-fixed-price contracts for EDS and ETD maintenance. 
TSA awarded a competitively bid contract to Siemens to provide maintenance for ETD machines. According to TSA, it negotiated sole source contracts with L-3 and GE InVision for maintaining their respective EDS because they are the original equipment manufacturers and owners of the intellectual property rights of their respective EDS. In September 2005, TSA awarded a competitively bid firm-fixed-price contract to Reveal for both the procurement and maintenance of a reduced size EDS. TSA obligated almost $470 million from fiscal year 2002 through fiscal year 2005 for EDS and ETD maintenance, according to TSA budget documents. In fiscal year 2006, TSA estimates it will spend $199 million and has projected it will spend $234 million in fiscal year 2007. According to TSA officials, in fiscal year 2004, TSA requested and received approval to reprogram about $32 million from another account to EDS/ETD maintenance due to higher levels of maintenance costs than expected. Similarly, in fiscal year 2005, TSA requested and received approval to reprogram $25 million to fund the L-3 contract and to close out the Boeing contract. TSA was not able to provide us with data on the maintenance cost per machine before fiscal year 2005 because, according to TSA officials, TSA’s previous contract with Boeing to maintain EDS and ETD machines was not structured to capture these data. Table 1 identifies the maintenance costs by type of EDS and ETD machine for fiscal years 2005 and 2006. TSA did not provide us with projections of EDS and ETD maintenance costs beyond 2007. TSA officials told us that future costs will be influenced by the number, type, quantity, and locations of machines necessary to support system configurations at airports, such as the extent to which EDS are integrated with airport baggage conveyor systems or are operated in stand-alone modes. 
Further, TSA officials told us that future EDS and ETD maintenance costs are dependent on decisions related to the deployment of new technologies and the refurbishment of existing equipment, among other things. The current contracts set negotiated maintenance prices per machine through March 2009, if TSA decides to exercise option years in the contracts. We identified different factors that have played a role in costs to date and that will influence future maintenance costs for EDS and ETD machines. According to a September 2004 DHS OIG report, TSA did not follow sound contracting practices in administering the Boeing contract, which was primarily for the installation and maintenance of EDS and ETD machines. According to DHS OIG officials, TSA’s failure to control costs under the Boeing contract, including the lack of sound contracting practices, contributed to increases in maintenance costs. Among other things, the DHS OIG report stated that TSA had paid provisional award fees totaling $44 million through December 2003 without any evaluation of Boeing’s performance. In response to the DHS OIG, TSA agreed to recover any excessive award fees paid to Boeing, if TSA determined that such fees were not warranted. In commenting on our draft report in July 2006, DHS stated that TSA has conducted a contract reconciliation process to ensure that no fees would be paid on costs that exceeded the target due to poor contractor performance. Further, DHS stated that TSA and Boeing had reached an agreement in principle on this matter and that the documentation was in the approval process with closure anticipated in July 2006. 
In its report accompanying the DHS Appropriations Bill for fiscal year 2007, the House Appropriations Committee stated its need for a report from TSA on any actions it has taken to collect excessive award fees, how much of the fees have been received to date, and specific plans to obligate these collections and cited TSA’s plans to use any cost recoveries to purchase and install additional EDS. These actions were based on the committee’s long-standing concerns about the increasing costs for EDS and ETD maintenance. In addition to matters related to the Boeing contract, TSA officials stated that another factor contributing to cost increases were the larger than expected number of machines that came out of warranty and their related maintenance costs. According to TSA officials, they were not able to determine the cost impact of these additional machines because the Boeing contract was not structured to provide maintenance costs for individual machines. With regard to future EDS and ETD maintenance costs under firm-fixed- price contracts, maintenance costs per machine will increase primarily by an annual escalation factor in the contracts that takes into account the employment cost index and the consumer price index, if TSA decides to exercise contract options. In addition, future maintenance costs may be affected by a range of factors, including the number of machines deployed and out of warranty, conditions under which machines operate, contractor performance requirements, the emergence of new technologies or improved equipment, and alternative screening strategies. Lastly, life-cycle cost estimates were not developed for the Boeing, Siemens, L-3, and GE InVision contracts before the maintenance contracts were executed, and, as a result, TSA did not have a sound estimate of maintenance costs for all the years the machines are expected to be in operation. In August 2005, TSA hired a contractor to define parameters for a life-cycle cost model, among other things. 
This contract states that TSA and the contractor will work together to ensure that the full scope of work is planned, coordinated, and executed according to approved schedules. In commenting on our draft report in July 2006, DHS stated that the TSA contractor estimated completing a prototype life-cycle cost model by September 2006. Further, DHS stated that TSA’s evaluation of the prototype would begin immediately upon delivery and that full implementation of an EDS life-cycle cost model would be completed within 12 months after the prototype had been approved. According to a TSA official, the life-cycle cost model would be useful in determining machine reliability and maintainability and in informing future contract negotiations, such as when to replace a machine versus continuing to repair it. We identified several actions TSA has taken to control EDS and ETD maintenance costs. First, TSA entered into firm-fixed-price contracts starting in March 2005 with maintenance contractors, which offer TSA certain advantages over cost reimbursement contracts because price certainty is guaranteed for up to 5 years if TSA exercises options to 2009. Also, TSA included several performance requirements in the Siemens, L-3, GE InVision, and Reveal contracts, including the collection of metrics related to machine reliability, maintainability, and availability, as well as specific cost data related to maintenance and repair. TSA officials told us that these data will assist them in monitoring the contractor performance as well as informing future contract negotiations for equipment and maintenance. These contracts also stipulate that maintenance contractors meet monthly with TSA to review all pertinent technical, schedule, and cost aspects of the contracts. TSA also incorporated provisions in the L-3 and GE InVision contracts to specify that the agreed price for maintaining EDS would be paid only if the contractor performs within specified mean downtime (MDT) requirements. 
Contractors submit monthly invoices for 95 percent of the negotiated contract price for the month and then submit a MDT report to justify the additional 5 percent. Consequently, if the contractor fails to fulfill the MDT requirements, it is penalized 5 percent of the negotiated monthly maintenance price. As of February 2006, neither GE InVision nor L-3 had been penalized for missing their MDT requirements. The allowable MDT is lowered from 2005 to subsequent renewable years in the contract, as shown in table 2. With regard to TSA’s oversight of EDS and ETD contractor performance, TSA’s acquisition policies and GAO standards for internal controls call for documenting transactions and other significant events, such as monitoring contractor activities. The failure of TSA to develop internal controls and performance measures has been recognized by other GAO and DHS OIG reviews. TSA has policies and procedures for monitoring its contracts and has included contractor performance requirements in the current EDS and ETD maintenance contracts. However, TSA officials provided no evidence that they are reviewing maintenance cost data provided by the contractor because they are not required to document such activities. For example, even though TSA officials told us that they are reviewing required contractor data, including actual maintenance costs related to labor hours and costs associated with replacing and shipping machine parts, they did not have any documentation to support this. TSA officials told us that they have begun to capture these data to assist them in any future contract negotiations. Further, TSA officials provided no evidence that performance data for corrective and preventative maintenance required under contracts are being reviewed. TSA officials told us that they perform such reviews, but do not document their activities since there are no TSA policies or procedures requiring them to do so. 
Therefore, TSA could not provide assurance that contractors are complying with contract performance requirements. For example, although TSA documents monthly meetings with contractors to discuss performance data, TSA officials did not provide evidence that they independently determine the reliability and validity of data required by the contracts, such as mean time between failures and mean time to repair, which are important to making informed decisions about future purchases of EDS and ETD equipment and their associated maintenance costs. Further, TSA officials provided no evidence that they ensure that contractors are performing scheduled preventative maintenance. TSA officials told us that they review the contractor- submitted data to determine whether contractors are fulfilling their contractual obligations, but do not document their activities because there are no TSA policies or procedures to require such documentation. Additionally, for EDS contracts with possible financial penalties, TSA officials told us that they review contractor-submitted mean downtime data on a monthly basis to determine the reliability and validity of the data and to determine whether contractors are meeting contract provisions or should be penalized. However, TSA officials do not document these activities because there are no TSA policies or procedures requiring them to do so. As a result, without adequate documentation, there is no assurance as to whether or not contractors are meeting contract provisions or that TSA has ensured that it is making appropriate payments for services provided. The cost of maintaining checked baggage-screening equipment has increased as more EDS and ETD machines have been deployed and warranties expire. TSA’s move in March 2005 to firm-fixed-price maintenance contracts for EDS and ETD maintenance was advantageous to the government in that it helps control present and future maintenance costs. 
Firm-fixed-price contracts also help ensure price certainty and therefore are more predictable. However, unresolved issues remain with the past contractor, specifically fees awarded to former contractor Boeing that may have been excessive due to a lack of timely evaluation of the contractor's performance. The House Appropriations Committee has expressed concern about these unresolved issues; specifically, what actions TSA has taken to recover these excessive fees, and the extent to which any collections might impact future TSA obligations. Closing out the Boeing contract is essential to resolving these issues. In responding to our draft report, DHS stated that the completion of an EDS life-cycle cost model is over a year away. Absent such a life-cycle cost model, TSA may not be identifying cost efficiencies and making informed procurement decisions regarding the future purchase of EDS and ETD machines and maintenance contracts. Further, TSA must provide evidence of its reviews and analyses of contractor-submitted data and perform analyses of contractor data to determine the reliability and validity of the data and to provide assurance of compliance with contract performance requirements and internal control standards. Without stronger oversight, TSA will not have reasonable assurance that contractors are performing as required and that full payment is justified based on meeting mean downtime requirements.
To help improve TSA’s management of EDS and ETD maintenance costs and strengthen oversight of contract performance, we recommend that the Secretary of Homeland Security instruct the Assistant Secretary, Transportation Security Administration, to take the following three actions: establish a timeline to complete its evaluation and close out the Boeing contract and report to congressional appropriations committees on its actions, including any necessary analysis, to address the Department of Homeland Security Office of Inspector General’s recommendation to recover any excessive fees awarded to Boeing Service Company; establish a timeline for completing life-cycle cost models for EDS, which TSA recently began; and revise policies and procedures to require documentation of the monitoring of EDS and ETD maintenance contracts to provide reasonable assurance that contractor maintenance cost data and performance data are recorded and reported in accordance with TSA contractual requirements and self-reported contractor mean downtime data are valid, reliable, and justify the full payment of the contract amount. We provided a draft of this report to DHS for its review and comment. On July 24, 2006, we received written comments on the draft report. DHS, in its written comments, concurred with our findings and recommendations, and agreed that efforts to implement these recommendations are essential to a successful explosive detection systems program. DHS stated that it has initiated efforts to improve TSA’s management of EDS and ETD maintenance costs and strengthen oversight of contract performance. 
Regarding our recommendation that TSA establish a timeline to close out the Boeing contract and report to congressional committees on its actions to recover any excessive fees, DHS stated that TSA has conducted a contract reconciliation process to ensure that no fees would be paid on costs that exceeded the target due to poor contractor performance, that Boeing and TSA have reached an agreement in principle on this matter, and that the documentation is in the approval process, with closure anticipated in July 2006. Regarding our recommendation to establish a timeline for completing the EDS life-cycle cost model, DHS stated that TSA expects to complete its prototype evaluation in September 2006 and that the EDS life-cycle cost model will be completed 12 months after the prototype has been approved. Regarding our recommendation to revise TSA policies and procedures to require documentation of its monitoring of EDS and ETD maintenance contracts, DHS stated that a TSA contractor is developing automated tools to perform multiple analyses of contractor-submitted data that DHS said would allow TSA to accurately and efficiently certify the contractors' performance against their contractual requirements and would allow TSA to independently validate and verify maintenance and cost data. The department's comments are reprinted in appendix II. We will send copies of this report to the Secretary of Homeland Security and the Assistant Secretary, Transportation Security Administration, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions or need additional information, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are acknowledged in appendix III. H.R. Conf. Rep. No. 109-241, at 52 (2005). TSA interprets the term explosive detection system to include both explosive detection systems (EDS) and explosive trace detection (ETD) machines. What have TSA's costs been for the maintenance of explosive detection systems (EDS) and explosive trace detection (ETD) machines? What factors played a role in EDS and ETD maintenance costs and what factors could affect future costs? What has TSA done to control EDS and ETD maintenance costs? To what extent does TSA oversee the performance of EDS and ETD maintenance contractors? To determine TSA costs to maintain EDS and ETD machines, we reviewed TSA contract files and budget documents for fiscal years 2003 through 2007, and interviewed TSA headquarters officials, Department of Homeland Security Office of the Inspector General (DHS OIG) officials, and EDS and ETD contractor representatives. For purposes of our review, we focused on the amounts obligated under contracts to maintain the machines. We did not review TSA's negotiations for maintenance services or the process for awarding contracts, nor did we assess other direct or indirect costs related to TSA or DHS employees engaged in contract administration or other related items. To determine what factors played a role in maintenance costs and what TSA has done to control costs, we reviewed TSA contract files, acquisition and strategic plans, budget documents, TSA processes for reviewing contract cost and performance data, and a DHS OIG report; and interviewed TSA headquarters officials, DHS OIG officials, and EDS and ETD contractor representatives.
To determine the extent of TSA contract oversight, we reviewed TSA contract files and processes for reviewing contract performance data, interviewed TSA headquarters officials and EDS and ETD contractor representatives, and reviewed GAO standards for internal controls. We performed our work from January 2006 through June 2006 in accordance with generally accepted government auditing standards. GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: November 1999). According to TSA budget documents, TSA has obligated almost $470 million from fiscal year 2002 through fiscal year 2005 for EDS and ETD maintenance. In fiscal year 2006, TSA estimates it will spend $199 million and has projected it will spend $234 million in fiscal year 2007. TSA was unable to provide us data on maintenance cost per machine prior to fiscal year 2005 because, according to TSA officials, its previous contract with Boeing Service Company (Boeing) to maintain EDS and ETD machines was not structured to capture these data. TSA did not provide us with projections of EDS and ETD maintenance costs beyond fiscal year 2007, although TSA has negotiated maintenance prices per machine through fiscal year 2009. TSA was mandated to screen all checked baggage using explosive detection systems at airports by December 31, 2003. Explosive Detection Systems (EDS) use computer-aided tomography X-rays to recognize the characteristics of explosives. In general, EDS are used for checked baggage screening. Explosive Trace Detection (ETD) machines use chemical analysis to detect traces of explosive material vapors or residues. ETD machines are used for both passenger carry-on baggage and checked baggage screening. According to TSA budget documents, TSA will have deployed over 1,400 EDS and 6,600 ETD machines at baggage screening locations in over 400 airports nationwide by the end of fiscal year 2006. The Aviation and Transportation Security Act, Pub. L. No.
107-71 § 110(b), 115 Stat. 597, 615 (2001) mandated, among other things, that all checked baggage at U.S. airports be screened using explosive detection systems by December 31, 2002. Section 425 of the subsequently enacted Homeland Security Act of 2002, Pub. L. No. 107-296, 116 Stat. 2135, 2185-86, in effect, extended this mandate to December 31, 2003. See 49 U.S.C. § 44901(d). TSA is responsible for the EDS and ETD maintenance costs. EDS and ETD maintenance includes preventative maintenance (scheduled activities to increase machine reliability that are performed monthly, quarterly, and yearly based on the contractors' maintenance schedules) and corrective maintenance (actions performed to restore machines to operating condition after failure). A TSA official told us that typical EDS warranties are for one year and that ETD warranties are for 2 years. From June 2002 through March 2005, Boeing was the prime contractor for the installation and maintenance of EDS and ETD machines at over 400 U.S. airports. TSA officials stated that the Boeing contract was awarded at a time when TSA was a new agency with many demands and extremely tight schedules for meeting numerous congressional mandates related to passenger and checked baggage screening. Boeing had a cost reimbursement contract with TSA, which was competitively bid and contained renewable options to 2007. Firm fixed price contracts provide for a price that is not subject to any adjustment on the basis of the contractor's cost experience in performing the contract. This contract type places upon the contractor maximum risk and full responsibility for all costs and resulting profit and loss. It provides maximum incentive for the contractor to control costs and perform effectively and imposes a minimum administrative burden upon the contracting parties. In March 2005, TSA signed firm fixed price contracts for EDS and ETD maintenance. TSA awarded a competitively bid contract to Siemens to provide maintenance for ETD machines.
TSA negotiated sole source contracts with L-3 and GE InVision because they are the original equipment manufacturers and owners of the intellectual property of their respective EDS. TSA can exercise 4 1-year options on all three contracts through March 2009. In September 2005, TSA awarded a competitively bid firm fixed price contract to Reveal Imaging Technologies, Inc., (Reveal) for both the procurement and maintenance of a reduced-size EDS. According to TSA budget documents, TSA has obligated almost $470 million for EDS and ETD maintenance from fiscal years 2002 through 2005. Maintenance spending grew from $14 million in fiscal year 2002 to an estimated $199 million in fiscal year 2006. In fiscal year 2007, TSA projects it will spend $234 million. [Table: EDS and ETD Machine Maintenance Budget Amounts, Fiscal Years 2002 through 2007 (in millions); Appropriated (as revised): 75, 100, 205, 200, 234.] TSA was unable to provide the maintenance cost per machine prior to fiscal year 2005 because, according to TSA officials, its previous contract with Boeing to maintain EDS and ETD machines was not structured to capture these data. According to TSA officials, in fiscal year 2004, TSA requested and received approval to reprogram about $32 million due to higher levels of maintenance costs than expected. In fiscal year 2005, TSA requested and received approval to reprogram $25 million to fund the L-3 contract ($16.6 million) and to close out the Boeing contract ($8.4 million), which has yet to be closed. TSA officials did not provide us with projections of costs beyond 2007. However, current contracts have negotiated maintenance prices per machine through March 2009, if TSA decides to exercise option years in the contracts. Future EDS and ETD maintenance costs depend on decisions made as outlined in a February 2006 TSA strategic planning framework for screening checked baggage using EDS and ETD. Among other things, the plan discusses options for the deployment of new technologies and refurbishment of existing equipment.
Different factors have played a role in costs to date and will influence future maintenance costs for EDS. According to a September 2004 DHS OIG report, TSA did not follow sound contracting practices in administering the Boeing contract, which was primarily for the installation and maintenance of EDS and ETD machines. Among other things, the DHS OIG found that TSA had paid provisional award fees totaling $44 million through December 2003 without any evaluation of Boeing's performance. GAO has identified similar instances of agencies' failure to properly use incentives in making award fees. See GAO, Defense Acquisitions: DOD Has Paid Billions in Award and Incentive Fees Regardless of Acquisition Outcomes, GAO-06-66 (Washington, D.C.: December 2005). For EDS contracts, future labor and material costs could not be determined, so TSA negotiated an escalation factor to be used to determine pricing for the contract option years. For the ETD contracts, TSA determined, after a review of cost data, that it would apply a 4 percent escalation factor to prices in the contract option years. The employment cost index is a measure of the change in the cost of labor, free from the influence of employment shifts among occupations and industries. The consumer price index is a measure of the average change in prices over time of goods and services purchased by households. Future maintenance costs may also be affected by a range of factors, including the number of machines deployed and out of warranty, conditions under which machines operate, mean downtime requirements, the emergence of new technologies or improved equipment, and alternative screening strategies. TSA's February 2006 strategic plan framework for screening checked baggage over the next 20 years discusses factors that may impact future maintenance costs. For example, the framework discusses the refurbishment of existing machines and the deployment of new technologies, but does not outline the number of machines or specific time frames for implementation.
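The compounding effect of an option-year escalation factor can be sketched in a few lines. This is an illustrative sketch only: the $8,000 base price is a hypothetical figure, while the 4 percent rate is the escalation factor the report says TSA applied to ETD prices in contract option years.

```python
# Illustrative compounding of a contract escalation factor across option years.
# The base price is hypothetical; 4 percent is the ETD escalation factor
# described in the report.

def option_year_price(base_price: float, escalation: float, option_year: int) -> float:
    """Negotiated base-year price compounded once for each option year exercised."""
    return base_price * (1 + escalation) ** option_year

base = 8_000.0  # hypothetical negotiated base-year maintenance price per machine
prices = [round(option_year_price(base, 0.04, year), 2) for year in range(5)]
# base year plus four option years: 8000.0, 8320.0, 8652.8, 8998.91, 9358.87
```

Because the factor compounds, a contract with four option years ends up roughly 17 percent above the base-year price at a 4 percent annual escalation.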
Additionally, the impact of these strategies on future maintenance costs is unknown. If no new equipment or maintenance providers emerge, TSA may pay a premium in future sole source contracts where intellectual property rights are involved. For example, because L-3 and GE InVision had intellectual property rights on their machines, their maintenance contracts were not bid competitively and therefore prices were not subject to the benefits of market forces. TSA issued its strategic plan framework for screening checked baggage using EDS and ETD machines in response to various congressional mandates, congressional committee directives, and GAO recommendations. Life-cycle cost estimates were not developed for the Boeing, Siemens, L-3, and GE contracts before the maintenance contracts were executed and, as a result, TSA did not have a complete picture of all maintenance costs. In August 2005, TSA hired a contractor to define parameters for a lifecycle cost model. A TSA official told us that the contractor began work on a lifecycle cost model for EDS in February 2006 and did not know when the model would be completed. Under the firm fixed price contracts, price certainty is guaranteed for up to five years if TSA exercises options to 2009. TSA did not provide per-unit maintenance costs prior to March 2005 because the Boeing contract was not structured to capture these data. Negotiated ETD maintenance prices per machine were as follows:

ETD machine (quantity / price, first contract period; quantity / price, second contract period):
Smiths Ionscan 400A: 241 / $10,525; 336 / $10,974
Smiths Ionscan 400AE: 5 / $10,525; 6 / $10,974
Smiths Ionscan 400B: 3,038 / $8,580; 3,035 / $8,946
Thermo EGIS 3000: 2 / $12,899; 2 / $13,526
Thermo EGIS II: 425 / $13,134; 545 / $13,695
GE Iontrack Itemiser-W: 2,302 / $7,727; 2,322 / $8,057

NOTE: Maintenance costs represent the negotiated prices in the maintenance contracts for EDS and ETD machines. TSA included several contractor performance requirements in the Siemens, L-3, GE InVision, and Reveal contracts. Metrics related to Reliability, Maintainability, and Availability (RMA) of the machines must be reported to TSA. Specific cost data related to maintenance and repair must be reported to TSA.
Contractors are required to meet monthly with TSA to review all pertinent technical, schedule, and cost aspects of the contract, including an estimate of the work to be accomplished in the next month; performance measurement information; and any current and anticipated problems. These requirements include metrics such as mean time between failures (generally the total time a machine is available to perform its required mission divided by the number of failures over a given period of time) and operational availability (generally the percentage of time, during operational hours, that a machine is available to perform its required mission). Such reliability, maintainability, and availability data are standard and appropriate performance requirements for maintenance contracts. Provisions in the L-3 and GE InVision contracts specify that the agreed price for maintaining EDS will be paid only if the contractor performs within specified mean downtime (MDT) requirements. MDT is calculated by the number of hours a machine is out of service in a month divided by the number of times that machine is out of service per month. Contractors submit monthly invoices for 95 percent of the negotiated contract price for the month and then submit an MDT report to justify the additional 5 percent. Consequently, if the contractor fails to fulfill the MDT requirements, it is penalized 5 percent of the negotiated monthly maintenance price. As of February 2006, neither GE InVision nor L-3 had been penalized for missing their MDT. The allowable MDT is lowered from 2005 to subsequent renewable years in the contract, as shown in the table below. TSA's acquisition policies and GAO's standards for internal controls call for documenting transactions and other significant events, such as monitoring contractor activities. However, TSA officials provided no evidence that they are reviewing maintenance cost data provided by the contractor because they are not required to document such activities.
For example, even though TSA officials told us they are reviewing required contractor data, including actual maintenance costs related to labor hours, costs associated with replacement parts, and the costs of shipping machine parts, they did not have any documentation to support this. TSA officials told us that they have begun to capture these data to assist them in any future contract negotiations. TSA officials provided no evidence that performance data for corrective and preventative maintenance required under the contract are being reviewed. TSA officials told us that they perform such reviews, but do not document their activities since there are no TSA policies or procedures requiring them to do so. Therefore, TSA could not provide assurance that contractors are complying with contract performance requirements. For example, although TSA documents monthly meetings with contractors to discuss performance data, TSA did not provide evidence that it independently determines the reliability and validity of data required by the contracts, such as mean time between failures and mean time to repair, which are important to making informed decisions about future purchases of EDS and ETD equipment and their associated maintenance costs. GAO/AIMD-00-21.3.1. For EDS contracts with possible financial penalties, TSA officials told us that they review contractor-submitted mean downtime data on a monthly basis to determine the reliability and validity of the data and to determine whether contractors are meeting contract provisions or should be penalized. However, TSA officials said they do not document these activities because there are no TSA policies or procedures requiring them to do so. As a result, without adequate documentation, there is no assurance that contractors are meeting contract provisions or that TSA is making appropriate payments for services provided.
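The mean downtime payment mechanism described earlier can be sketched as simple arithmetic. This is an illustrative sketch only: the 8-hour allowable MDT and $10,000 monthly price are hypothetical values, not figures from the L-3 or GE InVision contracts; the MDT formula and the 95/5 percent split follow the report's description.

```python
# Hedged sketch of the mean downtime (MDT) payment mechanism in the EDS
# maintenance contracts. Threshold and price values below are hypothetical.

def mean_downtime(hours_out_of_service: float, outage_count: int) -> float:
    """MDT = hours a machine is out of service in a month divided by the
    number of times it is out of service that month."""
    if outage_count == 0:
        return 0.0
    return hours_out_of_service / outage_count

def monthly_payment(negotiated_price: float, mdt: float, allowable_mdt: float) -> float:
    """The contractor invoices 95 percent up front; the remaining 5 percent
    is paid only if the reported MDT meets the contract requirement."""
    base = 0.95 * negotiated_price
    holdback = 0.05 * negotiated_price
    return base + (holdback if mdt <= allowable_mdt else 0.0)

# Example: 3 outages totaling 18 hours against a hypothetical 8-hour allowable MDT.
mdt = mean_downtime(18.0, 3)               # 6.0 hours, within the threshold
pay = monthly_payment(10_000.0, mdt, 8.0)  # full negotiated price is justified
```

Lowering the allowable MDT in each renewal year, as the contracts do, simply tightens the threshold against which the 5 percent holdback is released.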
TSA’s move to firm fixed price maintenance contracts was advantageous to the government in that it helps control present and future maintenance costs. Firm fixed price contracts also help ensure price certainty and therefore are more predictable. Unresolved issues remain with the past contractor, specifically fees awarded to former contractor Boeing that may have been excessive due to a lack of timely evaluation of the contractor’s performance. Although TSA has begun to develop a lifecycle cost model in order to control costs and negotiate future contracts, TSA has not set a timeframe to complete this model. Without such a time frame, TSA may not be identifying cost efficiencies and making informed procurement decisions. Further, TSA must provide evidence of its reviews and analyses of contractor-submitted data and perform analyses of contractor data to determine the reliability and validity of the data and to provide assurance of contractor compliance with contract performance requirements and internal control standards. Without stronger oversight, TSA will not have reasonable assurance that contractors are performing as required and that full payment is justified based on meeting mean downtime requirements. 
To help improve TSA's management of EDS and ETD maintenance costs and strengthen oversight of contract performance, we recommend that the Secretary of Homeland Security instruct the Assistant Secretary, Transportation Security Administration, to take the following three actions: establish a timeline to close out the Boeing contract and report to the congressional appropriations committees on its actions, including any necessary analysis, to address the DHS OIG recommendation to recover any excessive fees awarded to Boeing; establish a time line for completing a lifecycle cost model for EDS, which TSA recently began; and revise its policies and procedures to require documentation of its monitoring of EDS and ETD maintenance contracts to provide reasonable assurance that contractor maintenance cost data and performance data are recorded and reported in accordance with TSA contractual requirements and self-reported contractor mean downtime data are valid, reliable, and justify the full payment of the contract amount. TSA reviewed these slides in their entirety and provided several technical comments, which we incorporated as appropriate. TSA officials told us that they are not making formal comments on our recommendations. In addition to the individuals named above, Charles Bausell, R. Rochelle Burns, Glenn Davis, Katherine Davis, Michele Fejfar, Richard Hung, Nancy Kawahara, Dawn Locke, Thomas Lombardi, Robert Martin, and William Woods made key contributions to this report. | Mandated to screen all checked baggage by using explosive detection systems at airports by December 31, 2003, the Transportation Security Administration (TSA) has deployed two types of screening equipment: explosive detection systems (EDS), which use computer-aided tomography X-rays to recognize explosives, and explosive trace detection (ETD) systems, which use chemical analysis to detect explosive residues. This report discusses (1) EDS and ETD maintenance costs, (2) factors that played a role in these costs, and (3) the extent to which TSA conducts oversight of maintenance contracts.
GAO reviewed TSA's contract files and processes for reviewing contractor cost and performance data. TSA obligated almost $470 million from fiscal years 2002 through 2005 for EDS and ETD maintenance, according to TSA budget documents. In fiscal year 2006, TSA estimates it will spend $199 million and has projected it will spend $234 million in fiscal year 2007. TSA was not able to provide GAO with data on the maintenance cost per machine before fiscal year 2005 because, according to TSA officials, its previous contract with Boeing to install and maintain EDS and ETD machines was not structured to capture these data. Several factors have played a role in EDS and ETD maintenance costs. According to a September 2004 Department of Homeland Security's Office of Inspector General report, TSA did not follow sound contracting practices in administering the contract with Boeing, and TSA paid provisional award fees totaling $44 million through December 2003 without any evaluation of Boeing's performance. TSA agreed to recover any excessive award fees paid to Boeing if TSA determined that such fees were not warranted. In responding to our draft report, DHS told us that TSA and Boeing had reached an agreement in principle on this matter and that documentation was in the approval process with closure anticipated in July 2006. Moreover, TSA did not develop life-cycle cost models before any of the maintenance contracts were executed and, as a result, TSA does not have a sound estimate of maintenance costs for all the years the machines are expected to be in operation. DHS also stated in its comments on our draft report that a TSA contractor expected to complete a prototype life-cycle cost model by September 2006 and that TSA anticipated that the EDS model would be completed 12 months after the prototype was approved. 
Without such an analysis, TSA may not be identifying cost efficiencies and making informed procurement decisions on future purchases of EDS and ETD machines and maintenance contracts. TSA has taken actions to control costs, such as entering into firm-fixed-price contracts for maintenance starting in March 2005, which have advantages to the government because price certainty is guaranteed. Further, TSA incorporated standard performance requirements in the contracts including metrics related to machine reliability and monthly performance reviews. For EDS contractors, TSA has specified that the full agreed price would be paid only if mean downtime (i.e., the number of hours a machine is out of service in a month divided by the number of times that machine is out of service per month) requirements are met. Although TSA has policies for monitoring contracts, TSA officials provided no evidence that they are reviewing required contractor-submitted performance data, such as mean downtime data. TSA officials told GAO that they perform such reviews, but do not document their activities because there are no TSA policies and procedures requiring them to do so. As a result, without adequate documentation, TSA does not have reasonable assurance that contractors are performing as required and that full payment is justified based on meeting mean downtime requirements. |
The CFO Act of 1990 requires DOD and other agencies covered by the act to improve their financial management and reporting operations. One of its specific requirements is that each agency CFO develop an integrated agency accounting and financial management system, including financial reporting and internal controls. Such systems are required to comply with applicable principles and standards and provide for complete, reliable, consistent, and timely information needed to manage agency operations. Beginning with fiscal year 1991, the CFO Act required agencies, including the Navy, to prepare financial statements for their trust and revolving funds, and for their commercial activities. The CFO Act also established a pilot program under which the Army and Air Force, along with eight other federal agencies or components, were to test whether agencywide audited financial statements would yield additional benefits. The Congress concluded that agencywide financial statements contribute to cost-effective improvements in government operations. Accordingly, the Government Management Reform Act of 1994 made the CFO Act’s requirements for annual audited financial statements permanent and expanded it to include virtually the entire executive branch. Under this legislative mandate, DOD is to annually prepare and have audited DOD-wide and component financial statements beginning with fiscal year 1996. The Office of Management and Budget (OMB) has designated Navy and the other military services as “components” that will be required to prepare financial statements and have them audited. Because the Navy was not one of the pilot agencies, fiscal year 1996 was the first year for which it was required to prepare agencywide financial statements for its general funds. 
In October 1990, the Federal Accounting Standards Advisory Board (FASAB) was established by the Secretary of the Treasury, the Director of OMB, and the Comptroller General to consider and recommend accounting standards to address the financial and budgetary information needs of the Congress, executive agencies, and other users of federal financial information. Using a due process and consensus-building approach, the nine-member Board, which, since its formation has included a member from DOD, recommends accounting standards for the federal government. Once FASAB recommends accounting standards, the Secretary of the Treasury, the Director of OMB, and the Comptroller General decide whether to adopt the recommended standards. If they are adopted, the standards are published as Statements of Federal Financial Accounting Standards (SFFAS) by OMB and by GAO. In addition, the Federal Financial Management Improvement Act of 1996, as well as the Federal Managers’ Financial Integrity Act, requires federal agencies to implement and maintain financial management systems that will permit the preparation of financial statements that substantially comply with applicable federal accounting standards. For fiscal year 1996, the Navy prepared two separate sets of statements: one for its operations financed with general funds and another for operations financed using funds provided through the Defense Business Operations Fund (DBOF). The Defense Finance and Accounting Service-Cleveland Center supported the Navy in preparing the fiscal year 1996 financial statements for activities financed by general funds and DBOF. The Navy’s general fund financial statements encompassed those operations financed through 24 general fund accounts. These general funds included moneys the Congress appropriated to the Navy to pay for related authorized transactions for periods of 1 year, multi-years, or on a “no-year” basis. 
The Navy’s DBOF business activities are financed primarily through transfers from the Navy’s Operations and Maintenance appropriations, based on the costs of goods and services to be provided. The Navy has historically operated many supply and industrial facilities using a working capital fund concept. In fiscal year 1996, the Navy’s business activities comprised the largest segment of DOD’s support operations financed through DBOF. The DOD Inspector General delegated responsibility for auditing Navy’s fiscal year 1996 financial statements to the Naval Audit Service. By agreement with the DOD Inspector General, the Naval Audit Service’s fiscal year 1996 audit encompassed two separate efforts, both limited to the Navy’s Statement of Financial Position and related footnotes. The audit resulted in one set of reports focused on the Navy’s financial statement reporting for its operations financed using general funds and one overall report summarizing the results of its review of the Navy’s DBOF-financed operations. The set of general fund reports included an overall auditor’s opinion report, an overall report on internal controls and compliance with laws and regulations, and eight other more detailed supporting reports. Appendix I shows the status of Navy entities’ financial statement audits in fiscal year 1996. Appendix II provides a complete listing of the Naval Audit Service reports issued as a result of its fiscal year 1996 financial statement audit efforts. 
The objectives of this report were to (1) analyze the extent to which financial deficiencies detailed in the auditors’ reports may adversely impact the ability of Navy and DOD managers and congressional officials to make informed programmatic and budgetary decisions, (2) provide examples of other issues of interest to budget and program decisionmakers that can be identified by reviewing the financial statements, and (3) describe the additional financial data that, if complete and accurate, could be used to support future decision-making when the Navy implements accounting standards that are effective beginning with fiscal years 1997 and 1998. To accomplish these objectives, we obtained and analyzed the Naval Audit Service’s opinion report and other supporting reports resulting from its examination of the Navy’s fiscal year 1996 financial statements to identify data deficiencies and determine their actual or potential impact on Navy programmatic or budgetary decision-making. To do this, we compared the Naval Audit Service’s audit results with the findings and related open recommendations in our previous reports that discuss the implications of Navy’s financial deficiencies. We also obtained additional details on the Naval Audit Service’s findings through discussions with cognizant Naval Audit personnel, and we discussed the status of our previous findings and recommendations with cognizant Navy and DFAS personnel. Further, we independently reviewed Navy’s financial statements to identify other issues of interest to budget and program decisionmakers, particularly those areas that may indicate the need for future budget resources or that may provide the opportunity to reduce resource requirements. Finally, we analyzed recently adopted federal accounting standards to identify areas where Navy program and budget managers will have additional useful information available to support decision-making, if the standards are effectively implemented as required. 
Our work was conducted from December 1997 through February 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Defense or his designee. On March 9, 1998, the Principal Deputy Under Secretary of Defense (Comptroller) provided us with written comments, which are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix III. To an even greater extent than the other military services, the Navy has been plagued for years by troublesome financial management problems involving billions of dollars. For example, our 1989 report on the results of our examination of Navy’s fiscal year 1986 financial reporting detailed numerous problems, such as understating the value of Navy’s assets by $58 billion, that we attributed to carelessness and the failure to perform required rudimentary supervisory reviews and reconciliations. Seven years later, we found that such problems persisted. In our report on the Navy’s fiscal year 1994 financial reporting, we reported that the Navy had not taken advantage of the 5 years that had passed since the enactment of the CFO Act or the experiences of its counterparts, the Army and the Air Force, in preparing financial statements. Our report identified a minimum of $225 billion of errors in the $506 billion in assets, $7 billion in liabilities, and $87 billion in operating expenses reported to the Department of the Treasury in the Navy’s fiscal year 1994 consolidated financial reports. Consequently, we concluded that the Navy and DFAS had to play “catch up” if they were to successfully prepare reliable financial statements on the Navy’s operations. Most recently, the Naval Audit Service’s April 1997 report on the results of its audit of the Navy’s fiscal year 1996 financial reporting disclosed that errors, misstatements, and internal control weaknesses continued.
A number of the financial data and control deficiencies disclosed in the Naval Audit Service’s reports not only adversely affect the reliability and usefulness of the Navy’s financial reporting but also have significant programmatic or budgetary implications. Our analysis of the auditors’ reports, along with additional examples from our own audit work, is provided in the following sections. The Naval Audit Service report on the results of its financial audit of the Navy’s fiscal year 1996 financial statements disclosed numerous problems with inventory data reported by the Navy, including the following. “The Department of the Navy did not report an estimated $7.8 billion in Operating Materials and Supplies items aboard ships or with Marine Corps activities on the FY 1996 Statement of Financial Position.” We previously reported that DOD has spent billions of dollars on inventory that is not needed to support war reserve or current operating requirements and burdened itself with managing and storing the unneeded inventory. The financial reporting error disclosed by the Naval Audit Service has implications for the budget process because the inventory data used both for the financial statements and as the starting point for the Navy’s process to develop budget requests for additional inventory are incomplete. A Stratification Report is used to prepare data on the quantity and value of the Navy’s inventories, such as operating materials and supplies, included in the Navy’s financial statements. It is also used as the starting point to forecast budget requirements for inventories that will be needed in supply warehouses. To determine Navy-wide inventory requirements, responsible managers must also have accurate, reliable information on the quantities of inventories on ships, including any quantities in excess of needs. 
However, the auditors found that information on $7.8 billion in inventories, including those on board ships, was not included in the Navy’s year-end financial statements. This lack of Navy-wide visibility over inventories substantially increased the risk that Navy may have requested funds to obtain additional unnecessary inventories because responsible managers did not receive information that excess inventories were already on hand in other locations. This happened in the past, as discussed in our report on financial audit work we performed to help the Navy prepare for the fiscal year 1996 audit. We found that for fiscal year 1994, the Navy’s inventory item managers did not have adequate visibility over $5.7 billion in operating materials and supplies on board ships and at 17 redistribution sites. Approximately $883 million of these inventories were excess to current operating allowances or needs. For the first half of fiscal year 1995, inventory item managers had ordered or purchased items for some locations that had been identified as excess at other locations and thus were already available. As a result, we identified unnecessary spending of at least $27 million. Further, a review of inventory item managers’ forecasted spending plans for the second half of fiscal year 1995 and fiscal years 1996 and 1997 found that planned purchases of items already available in excess at other locations could result in the Navy incurring approximately $38 million of unnecessary costs. Our recent discussions with Navy officials confirmed that as of December 1997, the process used to accumulate inventory status information still did not provide inventory managers complete information on operating material and supplies inventories, particularly information on the quantities of Navy operating and supply inventories on ships. As a result, the Navy’s budget requests for inventory may continue to inaccurately reflect its needs.
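The cross-location check whose absence the auditors describe can be sketched simply: before funding a new purchase, an item manager with Navy-wide visibility would net the planned quantity against stock already reported as excess elsewhere. The item identifier, locations, and quantities below are invented for illustration.

```python
# Hypothetical sketch of the visibility check described above: reduce a
# planned purchase by quantities already excess at other locations, so
# budget requests reflect true needs. All data below is invented.

def net_purchase(item, planned_qty, excess_by_location):
    """Return how much can be redistributed from excess vs. newly bought."""
    available = sum(excess_by_location.get(item, {}).values())
    redistribute = min(planned_qty, available)
    return {"item": item,
            "redistribute": redistribute,
            "buy": planned_qty - redistribute}

# Excess quantities reported by ships and redistribution sites (invented).
excess = {"pump-valve": {"ship-A": 40, "site-17": 25}}

order = net_purchase("pump-valve", 50, excess)
print(order)  # {'item': 'pump-valve', 'redistribute': 50, 'buy': 0}
```

Without the `excess` data being complete Navy-wide, the `buy` figure is overstated, which is the budgetary risk the report identifies.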
The Naval Audit Service’s fiscal year 1996 audit report stated the following. “The Department of the Navy could not effectively account for the balance in the Fund Balance with Treasury because Defense Finance and Accounting Service - Cleveland Center had not developed an adequate accounting system to do so. Consequently, the Department of the Navy cannot provide reasonable assurance that: (1) the $64.8 billion account balance reported on the FY 1996 Statement of Financial Position presents fairly its financial position, or (2) transactions that could cause Antideficiency Act violations would be detected as required by Department of Defense guidance. Defense Finance and Accounting Service principally used Department of the Treasury data in reporting the Fund Balance with Treasury because the data was considered more reliable than the data provided by the Navy’s accounting systems. Department of Defense guidance requires that the Fund Balance with Treasury be supported by records of the entity.” This situation is similar to an individual not being able to reconcile his or her checkbook register to the monthly statement received from the bank. Just as with an individual’s checkbook, reconciliations are necessary to ensure that any differences are identified, the cause researched, and appropriate corrective action taken. Such reconciliations allow the individual to identify not only clerical errors but potential fraudulent misuse of his or her account. For example, blank checks can be stolen and forged and the amounts on otherwise legitimate checks can be altered. The potential consequences of the lack of regular reconciliations is increased dramatically for the Navy given that the agency reported $63 billion in fiscal year 1996 general fund expenditures and also has had continuing problems in properly recording billions of dollars of transactions. 
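The checkbook analogy above can be made concrete: a minimal reconciliation matches the entity's own transaction records against the Treasury (or bank) statement and surfaces anything appearing on only one side, so the cause can be researched and corrected. The document numbers and amounts below are invented for illustration.

```python
# Minimal sketch of the reconciliation described above: compare an entity's
# own records against the Treasury/bank statement and report unmatched
# items on either side. All transaction data is invented.

from collections import Counter

def reconcile(entity_records, statement):
    """Each input: list of (document_number, amount) tuples."""
    ours, theirs = Counter(entity_records), Counter(statement)
    return {
        "unmatched_in_records": list((ours - theirs).elements()),
        "unmatched_on_statement": list((theirs - ours).elements()),
    }

records = [("V-1001", 250.00), ("V-1002", 75.50)]
statement = [("V-1001", 250.00), ("V-1002", 75.50), ("V-1003", 40.00)]

diff = reconcile(records, statement)
print(diff["unmatched_on_statement"])  # items to research, like a bank error
```

The auditors' point is that no equivalent of this routine comparison was being performed against the $64.8 billion Fund Balance with Treasury.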
The lack of complete records for all disbursements and regular reconciliations can also result in the Navy spending more funds than it has available. Federal agencies are required to record obligations as legal liabilities are incurred and make payments from the associated appropriations within the limitations established by the Congress. To the extent that the Navy does not properly record all its disbursements, its ability to ensure that it will have enough funding available to pay for its expenses will continue to be adversely affected. This is similar to an individual not properly maintaining his or her checkbook register by neglecting to record checks written and, at the end of the month, finding that the account is now overdrawn. As noted by the auditors, the lack of controls over the Fund Balance with Treasury may result in Antideficiency Act violations. In addition, in our March 1996 report, we disclosed that problems in keeping records on Navy’s disbursements resulted in understating by at least $4 billion the federal government’s overall budget deficit reported as of June 30, 1995. In the current environment, such errors could make the difference between the federal government reporting a budget deficit or surplus. The extensive problems identified in the Navy’s disbursement process also resulted in erroneous and duplicate payments to vendors, as stated in the auditors’ report. “Defense Finance and Accounting Service Operating Locations processed 110 duplicate or erroneous vendor payments for the Department of the Navy. 
Of these, 62, valued at $2.5 million, had not been previously identified for collection....The improper payments were the result of input errors, failure to conduct reviews, ambiguous reports, and improper processing of invoices....The $2.5 million in duplicate or erroneous payments we identified and the Operating Locations collected represent funds that can be put to better use.” The auditors’ findings were based on a limited judgmental sample of about 400 payments out of a universe of about 1.2 million payments Navy made during fiscal year 1996. DOD officials informed us that subsequent investigation showed that not all of the $2.5 million were actually duplicate or erroneous payments that could be put to better use. However, the Naval Audit Service has not yet validated these results. Nonetheless, the control weaknesses identified, along with our previous work on DOD’s long-standing problems with overpayments to contractors and vendors, suggest that significant additional, undetected erroneous payments likely exist. Most recently, we reported that for fiscal years 1994 through 1996, contractors returned checks to DFAS totaling about $1 billion a year. These related to payments from the Navy, the other military services, and other Defense agencies. For the first 7 months of fiscal year 1997, DFAS’s Columbus Center received checks returned by contractors totaling about $559 million. DOD’s reliance on contractors to identify these overpayments substantially increases the risk that it is incurring unnecessary and erroneous costs. Because of our continuing concerns with control breakdowns in the contract payment area across the department, we have continued to monitor this area as one of the high-risk federal areas most vulnerable to waste, fraud, abuse, and mismanagement. 
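One basic control over the duplicate vendor payments described above is a screen that flags any invoice paid more than once for the same vendor and amount. The sketch below is a hypothetical illustration of such a screen; the vendor names, invoice numbers, and amounts are invented.

```python
# Hypothetical duplicate-payment screen: flag payments that occur more than
# once for the same vendor, invoice number, and amount. Data is invented.

from collections import Counter

def find_duplicates(payments):
    """payments: list of (vendor, invoice_no, amount) tuples."""
    counts = Counter(payments)
    return [payment for payment, n in counts.items() if n > 1]

payments = [
    ("Acme Supply", "INV-881", 12500.00),
    ("Acme Supply", "INV-881", 12500.00),   # paid twice in error
    ("Harbor Tools", "INV-102", 980.00),
]
print(find_duplicates(payments))  # [('Acme Supply', 'INV-881', 12500.0)]
```

Relying on contractors to return checks, as the report describes, is the absence of a detective control of this kind on the government's side.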
By establishing DBOF in 1991, the Department of Defense intended to focus management attention on the total costs of its businesslike support organizations to help manage these costs more effectively. DBOF was modeled after businesslike operations in that it was to maintain a buyer-seller relationship with its military customers, primarily the Navy and the other military services. DBOF-funded operations were to operate on a break-even basis by recovering the current costs incurred in conducting its operations, primarily from operations and maintenance funding provided by its customers. The Naval Audit Service reported a number of serious financial deficiencies in its fiscal year 1996 review of Navy’s DBOF activities. “[I]nternal controls were not adequate to detect or prevent errors. For example, inventory records were inaccurate; fixed assets were not capitalized or depreciated properly; depreciation on fixed assets at closing activities was not included on financial statements; payables were not always processed accurately or timely; accruals were inaccurate because of lack of reconciliations; liabilities were inaccurate because of untimely processing and bookkeeping errors; and Military Sealift Command financial accounting information was inaccurate due to inadequate general ledger and subsidiary ledger controls and accounting records.” The following examples of data deficiencies, when considered along with the Naval Audit Service’s overall assessment of material weaknesses in the Navy’s DBOF operations, have an adverse effect on the Navy’s ability to reliably determine DBOF’s net operating results. These financial deficiencies adversely affect not only the Navy’s DBOF financial reporting but also its ability to achieve the goal of operating on a break-even basis. Reliable information on the DBOF’s net operating results is a key factor in setting the prices DBOF charges its customers.
As a result of the problems pointed out by the Naval auditors, neither DOD nor congressional officials can be certain (1) of actual DBOF operating results and (2) if the prices DBOF charges its customers are reasonable for the goods and services provided. Our recent reporting demonstrates the Navy’s continuing problems in achieving the goal of operating its businesslike activities on a break-even basis. For example, in March 1997, we reported that DBOF management’s inability to stem continuing losses occurred as a result of, among other factors, inaccurate accounting information concerning the Fund’s overhead costs. More recently, in an October 1997 report, we determined that because one of the Navy’s DBOF business areas did not require its customers to pay for all storage services provided to them—as is the common practice in most businesslike operations—customers had no incentive to either relocate or dispose of unneeded ammunition and thereby reduce their costs. To the extent that the Navy’s DBOF operations incur losses, future appropriations may be required to cover those losses. DOD officials informed us that they used these financial statements and related audit report findings in their efforts to reduce costs and streamline the Navy’s ordnance business area. Specific examples of problems identified by Naval Audit Service auditors in its fiscal year 1996 financial review of the Navy DBOF included the following. A sample comparison of inventory records and on-hand stock revealed that quantities actually in storage differed from inventory records about 22 percent of the time. The auditors reported that management took action to correct the data deficiencies it reported and that action was underway to correct the systemic causes for the discrepancies identified. In discussing the possible implications of its findings, the Navy auditors reported that “Inaccurate inventory records distort financial records and financial reports used by senior managers.
This, in turn, can result in decisions to buy wrong quantities, which could cause excesses or critical shortages of material.” Depreciation expenses associated with fixed assets at one location were understated by a net amount of about $5 million. This occurred primarily because of a misinterpretation of guidance on reporting depreciation expenses incurred during the year on assets that were to be transferred from that location before the end of the fiscal year. While it did not quantify the extent of depreciation expense understatements, the Naval Audit Service also reported that additional reviews revealed that at least eight other locations also misinterpreted the guidance. In reporting on the implications of this deficiency, the Naval Audit Service stated, “Failure to report depreciation at closing activities understates current year costs and prior year losses that could be eligible for recoupment from Operation and Maintenance, Navy funds . . . . Ultimately, costs that are not recouped will have a direct effect on the cash position of the Department of the Navy Defense Business Operations Fund.” This means that to the extent that the Navy was undercharged as a result of the depreciation understatement, the Navy would have more Operation and Maintenance funds available than it should. The Navy’s DBOF maintained over 2,300 flatracks (containers used to transport Army cargo on Navy ships) solely for the benefit of the Army but did not recover the related estimated costs. The auditors reported that the costs to maintain these flatracks “should have been funded by Operation and Maintenance, Army funds. As a result of the failure to collect reimbursement, the Department of Navy used Operation and Maintenance, Navy funds to support the Army requirements. 
The funds used were estimated to be $640,000 for Fiscal Year 1997, and taking corrective action could result in the Department of the Navy putting $4.1 million to better use over a 6-year period.” Although this situation did not affect the federal government’s overall financial position, this means that the Navy augmented Army budgetary resources by paying for a service that should have been paid with Army funds. The Navy’s DBOF accounting records included at least $5.8 million in invalid “Other Non-Federal (Governmental) Liabilities.” The auditors reported that “Invalid liabilities cause funds to be unnecessarily set aside either to pay invoices already paid or to plan for costs not yet incurred. Therefore, this $5,793,496 represents potential funds that can be put to better use.” This means that the Navy’s operation and maintenance appropriation requirements are less than previously recognized because the Navy will not be required to pay these “invalid liabilities.” Despite the shortcomings in the Navy’s financial statements, we were able to identify several financial issues that may be of interest to budget and program managers. Specifically, even with the acknowledged deficiencies in the Navy’s financial data, some areas raise questions about whether future budget resources may be needed or whether there may be opportunities to reduce resource requirements. The following are examples of footnote disclosures and the kind of information that can be gleaned from them. Figure 1 provides excerpts from the note intended to explain how the accounts receivable balance presented on the Statement of Financial Position was calculated. Accounts receivable, which represents amounts owed the Navy, is significant to program managers and budget officials. If the amount is overstated, the Navy may not receive amounts that it intended to use to support its operations and may therefore need to obtain additional funding. 
If the amount is understated, the Navy may lack the visibility necessary to ensure that it is taking appropriate action to collect all amounts due it. For example, the table shows a 14.5 percent allowance for appropriation 1453 (military personnel). This means that nearly 15 percent of the funds Navy personnel owed the Navy were not likely to be collected. In some cases, better and more timely collection of these types of receivables may result in the recovery of amounts that could be used to reduce the Navy’s request for funds to support its military personnel or provide funds to meet other critical resource needs. The note also refers to negative governmental non-entity receivables of $26.7 million. A negative receivable is an unusual disclosure, indicating that the Navy does not know the source of almost $27 million it collected. These funds cannot be used until the source of the collection is determined. If these collections are owed the Navy, recording them improperly and not taking timely action to collect these amounts may have resulted in requests for budgetary resources when these collections could have been used to meet those requirements. Figure 2 shows excerpts from the note that provides information on over $4 billion of cancelled appropriations that the Navy reopened in fiscal year 1996. The note does not clearly indicate how much or for what purpose the cancelled accounts were used. The Congress has long-standing concerns with agencies’ use of funds after their expiration. In 1990, the Congress determined DOD was expending funds from expired accounts without sufficient assurance that authority for such expenditures existed or in ways that the Congress did not intend. To end these abuses, the Congress enacted account closing provisions in the fiscal year 1991 National Defense Authorization Act. The act closes appropriations 5 years after the expiration of their availability for obligation. 
Once closed, the appropriations are not available for obligation or expenditure for any purpose. In a series of decisions, the Comptroller General has stated, however, that agencies may adjust their accounting records for closed appropriations to record transactions that occurred but were not recorded before closure and to correct obvious clerical mistakes within a reasonable period of time after closure. For example, if an agency discovers, after an appropriation closes, that it had failed to record a disbursement that it had properly made from an appropriation before closure, the agency is expected to adjust its accounting records to reflect that disbursement. Further details would be necessary to assess the implications of the Navy’s note regarding the “reopening” of $4 billion in cancelled appropriations. This information may be related to the Navy’s continuing problems in accounting for its disbursements and may indicate a weakening in the mechanism put in place by the Congress to ensure control over cancelled appropriations. Navy’s fiscal year 1996 Statement of Financial Position includes about $61 billion in “Unexpended Appropriations.” Note 1R of the financial statements defines unexpended appropriations as “amounts of authority which are unobligated and have not been rescinded or withdrawn and amounts obligated but for which neither legal liabilities for payments have been incurred nor actual payments made.” Note 20, as shown in figure 3, disclosed that at the end of fiscal year 1996, Navy had an unobligated balance available of about $13 billion and about $45 billion in undelivered orders, which represent amounts obligated but not expensed. These amounts, along with the $3 billion in unavailable unobligated appropriations included in the note, tie back to the $61 billion reported in the financial statements. 
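The tie-back described above is a simple arithmetic check: the unobligated balance available, undelivered orders, and unavailable unobligated appropriations should sum to the unexpended appropriations reported on the Statement of Financial Position. The sketch below uses the rounded billions given in the text, so it allows a tolerance for rounding.

```python
# Arithmetic tie-out described above, using the rounded figures from the
# text ($ billions). A tolerance allows for the rounding in the source.

unobligated_available = 13.0     # unobligated balance available
undelivered_orders = 45.0        # obligated but not expensed
unobligated_unavailable = 3.0    # unavailable unobligated appropriations
reported_unexpended = 61.0       # "Unexpended Appropriations" on the statement

computed = unobligated_available + undelivered_orders + unobligated_unavailable
assert abs(computed - reported_unexpended) <= 1.0, "components do not tie out"
print(f"Components sum to ${computed:.0f}B against ${reported_unexpended:.0f}B reported")
```

A tie-out of this kind is the sort of routine cross-check that auditable statements make possible for budget reviewers.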
This type of information, along with other required disclosures, could serve as a key indicator of how well the Navy is managing the funds provided by the Congress. A portion of the amounts identified as unexpended appropriations relate to funding provided through procurement or other appropriations that are available for obligation for more than 1 year to fund Navy activities. However, this information, along with other required disclosures, can be used to monitor the Navy’s long-standing problems in fully utilizing its resources. For example, OMB requires that agencies disclose the amount of unexpended cancelled appropriations in the note on contingent liabilities. Although the Navy’s fiscal year 1996 financial statement reporting did not include this information, the Navy’s year-end reports to the Treasury state that the Navy cancelled $1.8 billion and $1.5 billion in unexpended appropriations for fiscal years 1996 and 1997, respectively. Also, the Naval Audit Service has issued several reports that highlighted the Navy’s ongoing problems in promptly deobligating unneeded funds that could be better utilized for critical Navy mission needs. In addition, beginning in fiscal year 1998, the Navy will be required to prepare a Statement of Budgetary Resources, which will provide decisionmakers with added information on the status of the Navy’s use of its resources. Although Navy officials represented their fiscal year 1996 financial statements—the first-ever attempt to prepare comprehensive financial statements for the Navy—to be based on the best information available, the usefulness of Navy’s financial statement disclosures is limited at best due to the previously discussed problems with accuracy, reliability, and completeness. The footnotes to the Navy’s financial statements, which should serve as an excellent source of relevant, detailed information on its operations, are lacking in detail and present abnormal information. 
For example, the statements included a number of footnotes that provided only summary charts or tables or grossly abnormal balances, such as large negative balances in what would normally be expected to be accounts with positive balances, without any accompanying detail or explanation. In addition, because fiscal year 1996 was a first-year effort, the Navy’s general fund financial statements do not offer the benefit of comparative data on the prior year, which can provide useful analysis on trends and changes from year to year. As the Navy and DFAS improve on their first-year efforts to develop reliable financial statements for the Navy, and when the problems identified in the auditors’ reports are corrected, knowledgeable users of the Navy’s financial statements will be better able to identify key issues that may be of interest to budget and program managers. Recently adopted federal accounting standards are intended to enhance federal financial statements by requiring that government agencies show the complete financial results of their operations and provide relevant information on agencies’ true financial status. In addition to the new requirement for the Statement of Budgetary Resources previously mentioned, two other recently adopted accounting standards are particularly significant in terms of the additional information that could be made available to Navy budget and program managers in the future, if the standards are implemented effectively. Specifically, the standards call for reporting on the Navy’s costs associated with (1) the disposal of various types of assets, including environmental clean-up costs, and (2) deferred maintenance. Issued in December 1995 and effective beginning with fiscal year 1997, Statement of Federal Financial Accounting Standard (SFFAS) No. 5, Accounting for Liabilities of the Federal Government, requires the recognition of a liability for any probable and measurable future outflow of resources arising from past transactions. 
The statement defines probable as that which is likely to occur based on current facts and circumstances. It also states that a future outflow is measurable if it can be reasonably estimated. Because disposal costs are both probable and measurable, they are to be reported under SFFAS No. 5. The Congress has recognized the importance of accumulating and considering disposal cost information. In the National Defense Authorization Act for Fiscal Year 1995, the Congress required DOD to develop life-cycle environmental costs, including demilitarization and disposal costs, for major defense acquisition programs. This means that the Navy is required to estimate and report, as part of the information presented in its financial statements, the estimated cost to dispose of its major weapon systems and the cost to clean up the environmental hazards found on its land and facilities. In our recent report on DOD’s efforts to implement the new reporting requirements as they relate to the disposal of nuclear submarines and ships, we stated that this reported liability could be made more meaningful to decisionmakers if it was presented by approximate time periods when the disposals are expected to occur. Such information could provide important context for congressional and other budget decisionmakers on the total liability by showing the annual impact of disposals that have already occurred or are expected to occur during the budget period. Furthermore, if the time periods used to present these data were consistent with the timing of when funding was being requested for disposal costs as reflected in budget justification documents, such as DOD’s Future Years Defense Program, this type of disclosure would provide a link between budgetary and accounting information, one of the key objectives of the CFO Act. In addition, SFFAS No. 
6, Accounting for Property, Plant, and Equipment, issued November 30, 1995, and effective beginning with fiscal year 1998, requires recognition of deferred maintenance amounts by major class of asset along with disclosure of the method used to measure the extent of deferred maintenance needed for each asset class. In our recent report on DOD’s efforts to implement this standard as it relates to Navy ships, we stated that accurate reporting of deferred maintenance is important for key decisionmakers such as the Congress, DOD, and Navy managers and can be an important performance indicator of mission asset condition, which is a key readiness factor. While the existence of deferred maintenance may indicate a need for additional resources for maintenance, such resources may already be available within the current funding of the military services. As the Navy and DFAS move to put in place the systems and procedures required to comply with these new accounting standards, they will not only be better able to prepare a more useful set of Navy financial statements but also to better support more informed programmatic and budgetary decision-making in these areas. Currently, the Navy is unable to produce accurate financial information needed to support either its financial statements or operations and budgetary decision-making. However, through the impetus provided by the CFO Act, it has an opportunity to better integrate financial information into budget and operational management decisions. To seize this opportunity, the Navy and DFAS must establish a greater linkage between financial statement preparation and reporting processes, and resource allocation and oversight decisions. However, such a linkage will yield the benefits envisioned by the CFO Act only if the Navy’s financial information is dramatically improved to the point where it is generated by a systematic process and its accuracy can be verified.
Auditable financial statements produced by this type of disciplined process provide the Congress and managers with assurance that the information being used to support the statements is accurate and can therefore be used with confidence for day-to-day decision-making. In this context, efforts to produce auditable financial statements on an annual basis should be viewed not as an end in itself but as the capstone of a vigorous financial management program supported by effective information systems that produce accurate, complete, and timely information for decisionmakers throughout the year. Achieving the far-reaching financial management goals established by the CFO Act, particularly in light of the serious and widespread nature of the Navy’s long-standing financial problems, will only be possible with the sustained, demonstrated commitment of top leaders in DOD, the Navy, and DFAS. In commenting on a draft of this report, DOD stated that it is firmly committed to providing taxpayers and the Congress with accurate financial statements that can pass rigorous audit tests. DOD also said that for some time it has acknowledged that significant improvements are required in its financial management systems and reporting, and that many of the problems found during the audits of the Navy’s fiscal year 1996 financial statements remain. It also stated that financial management is a high priority in DOD and that it is working to improve the basic financial procedures and systems used to collect, categorize, and report financial transactions. DOD expressed concern with what it termed the report’s implication that the Navy’s budget is overstated or could be reduced because its financial statements omitted a line, excluded a footnote, or were otherwise deficient. DOD stated that such an implication is grossly misleading and undermines the rigorous planning, programming, and budgeting process within both DOD and the Navy. 
In addition, DOD maintained that the report leaves the erroneous impression that there have been no significant improvements in the Navy’s financial operations since our review of the Navy’s fiscal year 1986 financial reports. Furthermore, DOD stated that the report makes broad assertions that deficiencies in the Navy’s financial statements adversely impact the ability to make informed programmatic and budgetary decisions. In this regard, DOD contended that the report did not acknowledge that many of the deficiencies cited, including those from audit reports, are reviewed as part of the Navy’s day-to-day management and internal budget review processes, and again by the Office of the Secretary of Defense. We disagree that our report implies that the Navy’s budget is overstated or could be reduced merely because data were omitted from the Navy’s financial statements or because the statements were deficient in some other way. Our report focuses on deficiencies in the management systems and processes that are used to support not only the Navy’s financial statement preparation, but its budgetary and program decision-making. As a result, the deficiencies discussed in our report focus on those errors or omissions in the Navy’s financial reporting that also raise serious questions about whether decisionmakers had sufficiently reliable information available to make informed budgetary resource allocation decisions. With respect to DOD’s assertion that our report provides a misleading impression that there have been no significant improvements in Navy’s financial operations, our finding that the Navy has been plagued with troublesome financial management problems for many years is warranted. We have not seen the level of expected improvement in the years that have passed since our report on the Navy’s fiscal year 1986 financial reporting. 
While we are encouraged with DOD’s stated high priority commitment to reforming its financial operations, significant errors, omissions, and misstatements remain uncorrected, as evidenced by the extent and nature of the deficiencies pointed out in auditors’ reports on their examination of the Navy’s fiscal year 1996 financial statements. Efforts to reform DOD’s financial operations, however well-intentioned, have not as yet resulted in the level of improvements needed to put in place a disciplined financial operation that will not only yield accurate, reliable information for the Navy’s financial statements, but also support its program and budget decision-making. It is for this reason that DOD financial management is on our list of high-risk government programs. Lastly, we are encouraged that the Navy auditors’ findings have been used and that the Navy has found them helpful in developing budget estimates. In addition, while the Navy’s planning, programming, and budgeting process was not the focus of the review requested for this report, we recognize that it has been in place for many years and is intended to provide a thorough review of all pertinent information, including the implications of auditors’ findings, in determining Navy budget estimates. However, the Navy should not be forced to rely on such alternative data development and validation procedures as a proxy for a systematic, disciplined financial management and reporting process. Such a process would provide accurate and reliable financial data to support the development of the Navy’s financial statements, as well as day-to-day program and budget decision-making. We are sending copies of this report to the Ranking Minority Member of the House Committee on the Budget, the Director of the Office of Management and Budget, the Secretary of Defense, the Secretary of the Navy, and the Director of the Defense Finance and Accounting Service. We will also send copies to other interested parties upon request. 
Please contact me at (202) 512-9095 if you or your staff have any questions concerning this report. Major contributors are listed in appendix IV.

[Appendix table omitted in extraction: status of financial statement preparation by Navy reporting entity (e.g., Depot Maintenance - Naval Shipyards), with three status categories: financial statements prepared and opinion report issued; financial statements prepared and reviewed, but no opinion report issued; financial statements prepared but not reviewed.]

Department of the Navy Fiscal Year 1996 Annual Financial Report: Report on Auditor’s Opinion (Report No. 022-97, March 1, 1997).
Department of the Navy Fiscal Year 1996 Annual Financial Report: Report on Internal Controls and Compliance with Laws and Regulations (Report No. 029-97, April 15, 1997).
Department of the Navy Fiscal Year 1996 Annual Financial Report: Fund Balance with Treasury and Cash and Other Monetary Assets (Report No. 004-98, October 31, 1997).
Department of the Navy Fiscal Year 1996 Annual Financial Report: Property, Plant, and Equipment, Net (Report No. 051-97, September 25, 1997).
Department of the Navy Fiscal Year 1996 Annual Financial Report: Government Property Held by Contractors (Report No. 046-97, August 14, 1997).
Department of the Navy Fiscal Year 1996 Annual Financial Report: Ammunition and Ashore Inventory (Report No. 048-97, September 25, 1997).
Department of the Navy Fiscal Year 1996 Annual Financial Report: Advances and Prepayments, Non-Federal (Report No. 049-97, September 19, 1997).
Department of the Navy Fiscal Year 1996 Annual Financial Report: Accounts Receivable, Net (Report No. 045-97, August 12, 1997).
Department of the Navy Fiscal Year 1996 Annual Financial Report: Accounts Payable and Accrued Payroll and Benefits (Report No. 006-98, November 14, 1997).
Department of the Navy Fiscal Year 1996 Annual Financial Report: Department of Defense Issues (Report No. 015-98, December 19, 1997).
Fiscal Year 1996 Consolidating Financial Statements of the Department of the Navy Defense Business Operations Fund (Report No. 040-97, June 16, 1997).
The following are GAO’s comments on the Department of Defense’s letter dated March 9, 1998.

1. See the “Agency Comments and Our Evaluation” section of this report.

2. Our analysis of the Naval Audit Service reports considered the Under Secretary of Defense (Comptroller) and Defense Finance and Accounting Service comments that were included in the reports.

3. As stated in the report, the Navy, like all other federal entities, has been required to prepare and submit a prescribed set of financial information to the Treasury since 1950. In addition, the federal financial accounting standards to which DOD refers were, for the most part, not required or implemented in the fiscal year 1996 statements. We refer to these standards only in the report’s discussion of financial data that will be available when DOD fully implements these provisions.

4. The report was revised to indicate that the checks returned to DFAS applied not only to the Navy, but also to the other military services and Defense agencies.

5. To ensure proper payment, financial management personnel are dependent upon obtaining accurate and complete contract information. To the extent that the financial systems do not contain accurate and complete information from feeder systems or the feeder systems provide erroneous information on, for example, contract modifications, overpayments can occur.

6. As discussed in our August 1996 report, we disagree that operating materials and supplies held on board ships are considered to be in the hands of end users. These items should be reported on the Navy’s financial statements as operating materials and supplies. In addition, we agree that decisions on inventory purchases are not based on amounts reported in the Navy’s financial statements (or, as in the case of the $7.8 billion in operating materials and supplies, amounts excluded from the statements). However, as discussed in our report, the Navy auditors and we have found deficiencies in the management systems and processes which are used not only to support the inventory values included in the Navy’s financial statements, but also to support the Navy’s budgetary and program decision-making concerning needed inventories. As a result, the deficiencies discussed in our report concern not just errors or omissions in the Navy’s financial reporting, but also raise questions about whether decisionmakers had sufficiently reliable information available on which to make informed budgetary resource allocation decisions.

7. Undistributed collections and disbursements represent amounts reflected in Treasury’s records but not recorded by the Navy. The Navy then recorded these amounts in its department-level accounting records without having corroborating support in the form of transaction detail needed to verify that these amounts accurately represent Navy activities. As a result, the Navy does not know whether its records are accurate.

8. While DOD has efforts underway that are intended to match disbursements against valid obligations before payment, this is not currently required for all payments. Consequently, until DOD can establish controls to ensure that all disbursements can be related to a valid obligation at the time of payment, DOD cannot rely on its obligation records for funds control purposes and will continue to lack assurance that it will have sufficient funding available to pay its expenses.

9. DOD’s comment concerning an adequate accounting system at the DFAS Cleveland Center relates to a quote from a Naval Audit Service report and has no impact on the point being made in our report.

10. We disagree that simply recording obligations ensures that fund balances are not exceeded. DOD, under law, must maintain accurate and reliable obligation and disbursement records. The Antideficiency Act prohibits not only overobligations but overexpenditures as well. Obligated balances forecast expenditures and, in that regard, offer some measure of funds control by, in effect, “setting aside” funds for these projected amounts. However, even if all obligations have been recorded, actual expenditures can be more (or less), making it necessary to adjust obligated amounts when payment occurs. By not matching payments to obligations at the time of disbursement, the Navy has undermined this control feature.

11. The report was revised to omit reference to the specific Antideficiency Act violations previously reported by the Navy.

12. The report was revised to indicate that DOD officials stated that the entire $2.5 million discussed in the Naval Audit Service report may not represent erroneous or duplicate payments.

13. After an appropriation cancels, Public Law 101-510 permits agencies to liquidate obligations that had been properly charged to the appropriation during its period of availability. However, the liquidation must be from current funds available for the same purpose, and the agency may not charge expenditures against such accounts in excess of the lesser of 1 percent of that appropriation or the unexpended balance of the cancelled appropriation. To track compliance with these limitations, agencies need to maintain in their records for the cancelled appropriation memorandum account entries to track transaction amounts. We do not agree that maintaining memorandum account balances requires the reopening of cancelled accounts, as implied by DOD’s comments. Public Law 101-510 prohibits agencies from using cancelled appropriations for any purpose whatsoever. As indicated in our report, reopening cancelled accounts provides an opportunity for an agency to inappropriately charge current disbursements against reopened cancelled appropriations, thereby weakening the controls the Congress established in Public Law 101-510.

14.
While information on the status of the Navy’s use of its resources is currently available, it has not been audited. Only when this information is compiled through a disciplined process that can withstand the rigors of a financial audit test will congressional and Navy decisionmakers have assurance that this information is accurate and reliable.

15. We agree that OMB is responsible for providing minimum guidance for all agencies to follow in preparing their financial statements. However, it remains the responsibility of each agency to expand on these minimum requirements, as appropriate, so that its financial statements (1) provide sufficiently detailed information on the unique circumstances and operations of that agency and (2) are most relevant and informative for oversight officials and other users.

16. While the Navy was required to record a liability for certain environmental cleanup costs based on existing accounting standards at the date of the financial statements, this report addresses audited information that will be available upon full implementation of the federal financial accounting standards. As a result, the report was revised to delete reference to a Naval Audit Service finding concerning reporting a projected environmental cleanup cost liability.

William Cordrey, Senior Auditor
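The funding limitation described in comment 13 reduces to a simple minimum rule. A minimal sketch, with hypothetical dollar amounts (the statute itself specifies only the 1-percent and unexpended-balance caps):

```python
# Sketch of the Public Law 101-510 cap on liquidating obligations of a
# cancelled appropriation from current funds: charges may not exceed
# the lesser of 1 percent of the current appropriation or the
# unexpended balance of the cancelled appropriation. The dollar
# amounts below are hypothetical, chosen only for illustration.

def liquidation_cap(current_appropriation, cancelled_unexpended_balance):
    """Maximum chargeable to current funds for old, properly incurred
    obligations of a cancelled account."""
    return min(0.01 * current_appropriation, cancelled_unexpended_balance)

# Hypothetical: $500 million current appropriation, $3 million left
# unexpended in the cancelled account. One percent of the current
# appropriation is $5 million, so the unexpended balance is the
# binding cap.
print(liquidation_cap(500_000_000, 3_000_000))  # prints 3000000
```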
Pursuant to a congressional request, GAO reported on the programmatic and budgetary implications of the financial data deficiencies enumerated by auditors’ examination of the Department of the Navy’s fiscal year 1996 financial statements. GAO noted that: (1) the extent and nature of the Navy’s financial deficiencies identified by auditors, including those that relate to supporting management systems, increase the risk of waste, fraud, and misappropriation of Navy funds and can drain resources needed for defense mission priorities; (2) critical weaknesses identified include the following: (a) information on $7.8 billion in inventories on-board ships was not included in Navy’s year-end financial statements; (b) failure to follow prescribed procedures for controlling Navy’s cash account with Treasury contributes to continuing disbursement accounting problems; (c) until duplicate and erroneous vendor payments were identified and collected as a result of financial audit, the Navy not only paid too much for goods and services but, more importantly, was unable to use these funds to meet other critical programmatic needs; and (d) breakdowns in the controls relied on to prevent or detect material financial errors mean that the Navy cannot tell if its business-type support operations are operating on a break-even basis as intended; (3) although the Navy’s 1996 financial statements--its first effort to prepare comprehensive financial statements--did not include all required information and were not verifiable, they still provided data GAO could use to identify several financial issues that may be of interest to budget and program managers; (4) for example, footnote disclosures on the Navy’s accounts receivable and unexpended appropriations raise questions about whether future budget resources may be needed or whether there may be opportunities to reduce resource requirements; (5) when the findings presented in the auditors’ reports are corrected, the financial statements themselves and related notes can become an excellent source of information on the financial condition and operations of the Navy; and (6) also, if properly implemented, new accounting standards that require information such as data on asset disposal costs and deferred maintenance will provide the Navy and the Defense Finance and Accounting Service with an opportunity to improve the extent and usefulness of information that is currently available to support program decision-making and accountability in these areas.
The current State IG was created by a 1986 amendment to the Inspector General Act of 1978 (IG Act) to prevent and detect fraud, waste, abuse, and mismanagement in the department’s programs and operations; conduct and supervise independent audits and investigations; and recommend policies to promote economy, efficiency, and effectiveness. Unique to the State IG is a requirement to provide inspections of the department’s Foreign Service posts, bureaus, and operating units. The State Department has had inspection functions in various forms since 1906. The function has changed and evolved over the years in response to numerous statutory changes. Since the terrorist attacks of September 11, 2001, the State Department has become involved in expanded reconstruction and stabilization roles and manages a global presence that includes mobilizing some 180 countries and territories in the war on terrorism. To manage this expanded role, the State Department’s budget has increased over fiscal years 2001 through 2006 from $13.7 billion to about $24 billion, an increase of about 75 percent (55 percent in constant dollars adjusted for inflation). At the same time, the State IG’s budget has been inadequate and its workforce has declined by approximately 20 percent. For example, from 2001 to 2006, the State IG’s budget for oversight has increased from $29 million to $31 million, which, when considered relative to inflation, is a budget decrease of approximately 6 percent over 6 years in constant dollars. During that same period, the State IG’s staffing level has declined from 227 to 182. Of the 318 authorized staff in the State IG’s fiscal year 2006 budget, the actual onboard staff averaged 182, or about 57 percent of the authorized level. (See fig. 1.)
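As a rough cross-check of these figures, the nominal and constant-dollar growth rates can be reproduced with a short sketch. The 13 percent cumulative-inflation factor below is an assumption back-solved from the report’s own numbers (75 percent nominal versus 55 percent real); the actual deflator used is not stated in the text.

```python
# Rough reproduction of the budget and staffing figures cited above.
# The 0.13 cumulative-inflation factor is an assumption, not a figure
# from the report.

def pct_change(start, end):
    """Percentage change from start to end."""
    return (end / start - 1) * 100

def real_pct_change(start, end, cumulative_inflation):
    """Constant-dollar change: deflate the end-year amount first."""
    return pct_change(start, end / (1 + cumulative_inflation))

# State Department budget, FY2001 -> FY2006, in billions of dollars.
nominal = pct_change(13.7, 24.0)
real = real_pct_change(13.7, 24.0, 0.13)  # assumed ~13% cumulative inflation

# State IG staffing: 182 on board versus 318 authorized in FY2006.
onboard_share = 182 / 318 * 100

print(f"nominal budget growth: {nominal:.0f}%")                    # prints 75%
print(f"constant-dollar growth: {real:.0f}%")                      # prints 55%
print(f"onboard share of authorized staff: {onboard_share:.0f}%")  # prints 57%
```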
In the State Department’s Performance and Accountability Report for fiscal year 2006, the State IG reported the need for expanded oversight to encompass new department initiatives in transformational diplomacy, global repositioning, and public diplomacy, as well as substantial increases in programs for Iraq and Afghanistan, counternarcotics, counterterrorism, embassy construction, and information technology. In addition, the IG has noted significant growth in the number of programs and grants with mandated IG oversight, congressional and management requests for special reviews and investigations, and opportunities for joint activities with other departments. The 1986 amendment that created the current IG office was a reaction to concerns expressed in prior GAO reports in 1978 and 1982. In those reports, we raised concerns about the independence of the previous IG offices established administratively by the department and through statutes prior to 1986. At the same time, our concerns about the State IG’s independence were based in part on the IG’s use of temporarily assigned Foreign Service officers to staff the IG office for performing inspections. We continue to be concerned about the independence of the State IG, an issue that we first reported on almost three decades ago. Independence is the cornerstone of professional auditing. Without independence, an audit organization cannot conduct independent audits in compliance with generally accepted government auditing standards (Government Auditing Standards). Likewise, an IG who lacks independence cannot effectively fulfill the full range of requirements for the office. Lacking this critical attribute, an audit organization’s work might be classified as studies, research reports, consulting reports, or reviews, rather than independent audits. Independence is one of the most important elements of an effective IG function. 
In fact, much of the IG Act provides specific protections to IG independence that are unprecedented for an audit and investigative function located within the organization being reviewed. These protections are necessary in large part because of the unusual reporting requirements of the IGs, who are both subject to the general supervision and budget processes of the agencies they audit, while at the same time being expected to provide independent reports of their work externally to the Congress. Government Auditing Standards states, “in all matters relating to the audit work, the audit organization and the individual auditor, whether government or public, must be free from personal, external, and organizational impairments to independence, and must avoid the appearance of such impairments to independence. Auditors and audit organizations must maintain independence so that their opinions, findings, conclusions, judgments, and recommendations will be impartial and viewed as impartial by objective third parties with knowledge of the relevant information.” Personal independence applies to individual auditors at all levels of the audit organization, including the head of the organization. Personal independence refers to the auditor’s ability to remain objective and maintain an independent attitude in all matters relating to the audit, as well as the auditor’s ability to be recognized by others as independent. The auditor needs an independent and objective state of mind that does not allow personal bias or the undue influence of others to override the auditor’s professional judgments. This attitude is also referred to as intellectual honesty. The auditor must also be free from direct financial or managerial involvement with the audited entity or other potential conflicts of interest that might create the perception that the auditor is not independent. 
External independence refers to both the auditor’s and the audit organization’s freedom to make independent and objective judgments free from external influences or pressures. Examples of impairments to external independence include restrictions on access to records, government officials, or other individuals needed to conduct the audit; external interference over the assignment, appointment, compensation, or promotion of audit personnel; restrictions on funds or other resources provided to the audit organization that adversely affect the audit organization’s ability to carry out its responsibilities; or external authority to overrule or to inappropriately influence the auditors’ judgment as to appropriate reporting content. Organizational independence refers to the audit organization’s placement in relation to the activities being audited. Professional auditing standards have different criteria for organizational independence for external and internal audit organizations. The IGs, in their statutory role of providing oversight of their agencies’ operations, represent a unique hybrid of external and internal reporting responsibilities. The IG Act requires IGs to perform audits in compliance with Government Auditing Standards. In addition, much of the act provides specific protections to IG independence for all the work of the IGs. Protections to IG independence include the requirement that IGs report only to their agency heads and not to lower-level management, and a prohibition on the ability of the agency head to prevent or prohibit the IG from initiating, carrying out, or completing any audit or investigation. This prohibition is meant to protect the IG office from external forces that could compromise an IG’s independence. The IG’s personal independence and the need to appear independent to knowledgeable third parties is also critical when the IG makes decisions related to the nature and scope of audit and investigative work performed by the IG office. 
The IG must determine how to utilize the IG Act’s protection of independence in conducting and pursuing the audit and investigative work. The IG’s personal independence is necessary to make the proper decisions in such cases. The IG Act also provides the IG with protections to external independence by providing access to all agency documents and records, prompt access to the agency head, the ability to select and appoint IG staff, the authority to obtain services of experts, and the authority to enter into contracts. The IG may choose whether to exercise the act’s specific authority to obtain access to information that is denied by agency officials. Again, each IG must make decisions regarding the use of the IG Act’s provisions for access to information, and the IG’s personal independence becomes key in making these decisions. The IGs’ external reporting requirements in the IG Act include reporting the results of their work in semiannual reports to the Congress. Under the IG Act, the IGs are to report their findings without alteration by their respective agencies, and these reports are to be made available to the general public. The IG Act also directs the IGs to keep their agency heads and the Congress fully and currently informed, which they do through these semiannual reports and otherwise, of any problems, deficiencies, abuses, fraud, or other serious problems relating to the administration of programs and operations of their agencies. Also, the IGs are required to report particularly serious or flagrant problems, abuses, or deficiencies immediately to their agency heads, who are required to transmit the IG’s report to the Congress within 7 calendar days. 
With the growing complexity of the federal government, the severity of the problems it faces, and the fiscal constraints under which it operates, it is important that an independent, objective, and reliable IG structure be in place at federal agencies to ensure adequate audit and investigative coverage of federal programs and operations. The IG Act provides each IG with the ability to exercise judgment in the use of protections to independence specified in the act. While the IG Act provides for IG independence, the ultimate success or failure of an IG office is largely determined by the individual IG placed in that office and that person’s ability to maintain personal, external, and organizational independence both in fact and appearance while reporting the results of the office’s work to both the agency head and to the Congress. An IG who lacks independence cannot effectively fulfill the full range of requirements for the office. Two continuing areas of concern that we have with the independence of the office of the State IG involve (1) the appointment of management officials to head the State IG in an acting capacity for extended periods of time and (2) the use of Foreign Service staff to lead State IG inspections. These concerns are similar to those independence issues we reported in our 1978 and 1982 reports. In 1978, GAO reviewed the operations of the office of the IG of Foreign Service and questioned the independence of Foreign Service officers who were temporarily detailed from program offices to the IG’s office. In 1982, we reviewed the operations of the IG and again expressed our concerns about the independence of inspection staff reassigned to and from management offices within the department. In these reports we stated that the desire of State IG staff to receive favorable assignments after their State IG tours could influence their objectivity. 
Reacting to concerns similar to those in our 1982 report, the Congress established an IG for the Department of State through amendments to the IG Act in both 1985 and 1986. The 1986 amendment requires the State IG continue to perform inspections of the department’s bureaus and posts, but also prohibits a career member of the Foreign Service from being appointed as the State IG. After almost three decades, we continue to have similar concerns regarding the independence of the State IG’s operations. In our March 2007 report we stated that during a period of approximately 27 months— from January 24, 2003, until May 2, 2005—four management officials from the State Department were acting in an IG capacity. All four of these officials served in the Foreign Service in prior line management positions, including political appointments as U.S. ambassadors to foreign countries. In addition, three of these officials returned to significant management positions within the State Department after heading the IG office. Therefore, over more than a 2-year period, oversight of the State Department was being provided by the department’s own management officials. The 1986 amendment to the IG Act that created the current IG office prohibits a career Foreign Service official from becoming an IG of the State Department due to concerns about personal impairments to independence that could result. That same concern exists when Foreign Service officials head the State IG in an acting capacity, resulting in limitations to the independence and effectiveness of the office. The second continuing concern discussed in our March 2007 report regarding State IG independence deals with the use of Foreign Service officers to lead inspections of the department’s bureaus and posts. This practice creates the mistaken impression that because these inspections are products of an IG office, they are performed with the appropriate IG independence. 
However, State IG policy is for inspections to be led by Foreign Service officers at the ambassador level who are expected to help formulate, implement, and defend government policy. The resulting conflict of interest for career Foreign Service staff and others at the ambassador level who lead inspections that may criticize the department’s policies provides an appearance of impaired independence to the State IG’s inspection results. To address these concerns about the independence of the State IG Office, we recommended in our March 2007 report that the IG work with the Secretary of State to develop a succession planning policy that would prohibit career Foreign Service officers and other department managers from heading the State IG office in an acting capacity and to develop options to ensure that State IG inspections are not led by career Foreign Service officials or other staff who rotate to assignments within State Department management. In formal comments to a draft of our March 2007 report, the State IG agreed with our concerns about having career Foreign Service officers serving in an acting IG capacity and acknowledged that the temporary nature of such arrangements can have a debilitating effect on the office particularly over a lengthy period of time. However, the State IG disagreed with our recommendation that personnel with State Department management careers also not be considered for acting IG positions due to the need to obtain prompt and capable personnel to fill these positions. Also, the State IG agreed that use of Foreign Service personnel at the ambassador level to lead inspections does create an appearance of impaired independence; however, the IG plans to continue this practice in order to utilize the diplomatic expertise of these Foreign Service officers, which the IG believes is necessary for inspections. We disagree with the State IG’s comments. 
Independence is a critical element for IG effectiveness and success and is at the heart of auditing standards and the IG Act. The State IG’s reluctance to take steps that would preclude career management officials from leading the office in an acting IG capacity and to stop the practice of having Foreign Service officers at the ambassador level lead inspections weakens the credibility of the entire office. For example, appointing career department managers as acting State IGs could have the practical effect of subjecting the State IG to supervision by management officials other than the Secretary or Deputy Secretary. As noted above, the IG Act limits supervision of the IG to the head of the department or the principal deputy rather than lower- level managers as an important protection to the IG’s independence. In addition, the State IG’s decision to accept impairments to the appearance of independence for all inspections performed at the department limits the usefulness of these results for both the department and the Congress in taking appropriate actions. We agree that Foreign Service expertise could be a part of the inspection team, but we disagree with placing independence second to experience and expertise. The State IG can achieve both objectives with the proper staffing and structuring of its inspections. To illustrate, our position remains that the State IG’s inspection teams should not be led by career Foreign Service officers and ambassadors, but could include experienced ambassadors and staff at the ambassador level as team members, consultants, or advisors to help mitigate concerns about the appearance of independence caused by the State IG’s current practice. In addition to the specific requirements for independent audits and investigations, the State IG has a unique statutory requirement to inspect each post at least every 5 years. However, since 1996, the Congress, through the department’s appropriations acts, annually waives the 5-year requirement. 
Nevertheless, the State IG completed inspections at 223 of the department’s 260 bureaus and posts over the 5-year period of fiscal years 2001 through 2005. Consequently, the State IG relies on inspections rather than audits to provide the primary oversight of the State Department. As a comparison, in fiscal year 2005, the statutory IGs issued a total of 443 inspection reports compared to 4,354 audit reports, a ratio of inspections to audits of about 1 to 10. During the same year, the State IG issued 99 inspection reports and 44 audit reports, a ratio of inspections to audits of over 2 to 1. A troubling outcome of the State IG’s heavy emphasis on inspections is the resulting gaps in audit coverage for high-risk areas we have identified and the management challenges reported annually by the State IG in the department’s performance and accountability reports. In our reports of the government’s high-risk areas issued in January 2003 and January 2005, we identified seven such areas at the State Department, which were also included in management challenges identified by the State IG. These critical areas are (1) the physical security and protection of people and facilities, (2) information security, (3) financial management, (4) human resources, (5) counterterrorism and border security, (6) public diplomacy, and (7) postconflict stabilization and reconstruction. To illustrate the State IG’s reliance on inspections for oversight of these areas during fiscal years 2004 and 2005 combined, the State IG covered human resource issues with 1 audit and 103 inspections, counterterrorism and border security with 2 audits and 190 inspections, public diplomacy with 2 audits and 103 inspections, and information security with 1 audit and 13 inspections. (See table 1.) 
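The stated ratios follow directly from the fiscal year 2005 report counts given above (443 inspections versus 4,354 audits community-wide; 99 versus 44 for the State IG). A quick arithmetic check:

```python
# Inspection-to-audit ratios computed from the FY 2005 counts in the text.
statutory_ratio = 443 / 4354   # all statutory IGs: ~0.10, i.e., about 1 to 10
state_ig_ratio = 99 / 44       # State IG: 2.25, i.e., over 2 to 1
```

The State IG's mix is thus more than twenty times as inspection-heavy as the statutory IG community as a whole.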
The high-risk areas of physical security and protection of people and facilities had limited audit coverage that addressed specific contracts and procurements, whereas financial management was covered by the State IG’s financial audits. Postconflict stabilization and reconstruction was covered by both audits and inspections. Because of State IG’s heavy reliance on inspections, it is important to note that there are fundamental differences between inspections and audits. Audits performed under Government Auditing Standards are subject to more in-depth requirements in the areas of sufficient, appropriate, relevant, and complete evidence and documentation supporting the findings than are inspections performed under the Quality Standards for Inspections. Also, auditing standards require independent external quality reviews of audits, or peer reviews, on a 3-year cycle, while inspection standards do not call for any such external quality reviews. We reviewed the documentation for 10 State IG inspections to gain an understanding of the extent of documented evidence to support each report’s findings and recommendations. We found that the inspectors relied heavily on questionnaires completed by management at each bureau or post that was inspected, official department documents, correspondence and electronic mail, internal department memorandums, interviews, and the inspection review summaries. We did not find any examples of additional testing of evidence or sampling of agency responses to questionnaires and interviews to test for the accuracy, relevance, validity, and reliability of the information as would be required by auditing standards. In other words, for the inspections we reviewed, the State IG’s results relied on the responses of department management through questionnaires, interviews, and agency documents without further verification. 
We also found that for 43 of the 183 recommendations contained in the 10 inspections we reviewed, the inspection files did not contain documented support of any kind beyond the written summaries of the findings and recommendations contained in the final inspection reports. While the State IG’s inspection policies require that supporting documentation be attached to the written summaries, the summaries indicated that there was no additional supporting documentation. Due to the significance of the high-risk areas covered largely by inspections, the limited nature of inspections, and the appearance of impaired independence, the State IG would benefit by reassessing the mix of audit and inspection coverage for those areas. In our March 2007 report, we recommended that in order to provide the appropriate breadth and depth of oversight coverage at the department, especially in high-risk areas and management challenges, the State IG reassess the proper mix of audit and inspection coverage. This assessment should include an analysis of an appropriate level of resources needed to address the increasing growth of the department’s risks and responsibilities. In formal comments on our report, the State IG disagreed with our recommendation to reassess the mix of audit and inspection coverage while agreeing that inspections are much more subjective than audits and have a different level of requirements for evidence. The State IG explained that the use of inspections is due to the congressional mandate for IG inspections, which has been waived annually late in the IG’s planning cycle, and the limited resources to hire more auditors. Therefore, things that could be done in an audit have to be done through inspections. We remain concerned that the State IG’s current mix of audits and inspections does not provide adequate independent oversight. 
In addition, the State IG’s use of inspections can create an “expectation gap” that inspections will have the same credibility and independence as the IG’s audits. By ultimately placing the results of inspections in the IG’s semiannual reports without clarifying that they are a substitute for audit coverage and are fundamentally limited in their results, the IG may be creating a misleading image of oversight coverage of the department and its high-risk areas. The IG Act, as amended, established the State IG to conduct and supervise independent investigations, in addition to audits, in order to prevent and detect fraud, waste, abuse, and mismanagement in the State Department. In addition, the department’s Bureau of Diplomatic Security (DS), as part of its worldwide responsibilities for law enforcement and security operations, also performs investigations that include passport and visa fraud both externally and within the department. While both the State IG and DS pursue allegations of passport and visa fraud by State Department employees, DS reports organizationally to the State Department Undersecretary for Management and is performing investigations as a function of management. Therefore, DS investigations of department employees, especially when management officials are the subjects of allegations, can result in management investigating itself. In contrast, the State IG is required by the IG Act to be independent of the offices and functions it investigates. However, State IG officials stated that they were aware of DS investigations in these areas that were not coordinated with the State IG. Our March 2007 report noted that DS and the State IG had no functional written agreement or other formal mechanism in place to coordinate their investigative activities. 
Without a formal agreement to outline the responsibilities of both DS and the State IG regarding these investigations, there is inadequate assurance that this work will be coordinated to avoid duplication or that independent investigations of department personnel will be performed. Moreover, we also reported that in fiscal year 2005, DS entailed a global force of approximately 32,000 special agents, security specialists, and other professionals who make up the security and law enforcement arm of the State Department. In contrast, the State IG, which also has global responsibilities for independent investigations of the State Department, had a total of 21 positions in its investigative office with 10 investigators onboard at the time of our review. In other federal agencies where significant law enforcement functions like those of DS exist alongside their IGs, the division of investigative functions between the agency and the IG is established through written agreement. Our March report provides examples of formal written agreements between (1) the U.S. Postal Service IG and the Chief Postal Inspector who heads the U.S. Postal Inspection Service and (2) the Treasury Inspector General for Tax Administration and the Internal Revenue Service’s Criminal Investigation. These signed memorandums can serve as models for a formal agreement between DS and the State IG for delineating jurisdiction in investigative matters to help ensure that the independence requirements of the IG Act are implemented. In order to provide for independent investigations of State Department management and to prevent duplicative investigations, we recommended in our March 2007 report that the State IG work with DS and the Secretary of State to develop a formal, written agreement that delineates the areas of responsibility for State Department investigations. In comments on our report, the State IG agreed with this recommendation. 
The mission of the State IG is critical to providing independent and objective oversight of the State Department and identifying any mismanagement of scarce taxpayer dollars. However, the effectiveness of the IG’s oversight is limited by the lack of resources, the lack of an appearance of independence, gaps in audit coverage of high-risk areas, and the lack of assurance that investigations of internal department operations are performed by independent IG investigators. We made recommendations to address each of these areas in our related report (GAO-07-138). Overall, our recommendations are intended to assist in strengthening the IG office and the independence and effectiveness of oversight of the State Department. We remain concerned about the weaknesses identified especially in light of the State IG’s response to our March 2007 report. The State IG’s comments to our report defend the status quo, and indicate an inadequate concern and regard for the independence necessary to provide effective and credible oversight of the department. Consequently, we reiterated the importance of our recommendations because of our continuing concerns about the adequacy of independent oversight provided by the State IG. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee might have at this time. If you have any additional questions on matters discussed in this testimony, please contact Jeanette Franzel at (202) 512-9471 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this testimony include Jackson Hufnagle (Assistant Director) and Clarence Whitt. 
[Appendix table: budgetary resources of statutory IGs, including those for the Corporation for National and Community Service, the Treasury Inspector General for Tax Administration, the Tennessee Valley Authority (TVA), the National Aeronautics and Space Administration, the Department of Housing and Urban Development, the Department of Health and Human Services, and the Department of Defense – Military. Table footnotes note that budgetary resource figures appear in the Agency for International Development’s FY 2006 Performance and Accountability Report; that the Treasury Inspector General for Tax Administration is the IG for the Internal Revenue Service (IRS); that the amount for the TVA IG is from PCIE; that the State Department budget does not include amounts for the Broadcasting Board of Governors; that one amount includes budget authority to combat Medicare and Medicaid fraud; that the Department of the Treasury’s budgetary resources exclude the IRS; and that some information is not available.] This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | GAO was asked to provide testimony about the effectiveness and reliability of the State Department's Office of Inspector General (State IG). We focused on the independence of the State IG, the use of inspections instead of audits to provide oversight of the department, and the effectiveness of the IG's investigative function. The testimony is based primarily on our March 2007 report, Inspectors General: Activities of the Department of State Office of Inspector General (GAO-07-138). The effectiveness of the oversight provided by the State IG is limited by (1) a lack of resources, (2) structural independence issues, (3) gaps in audit coverage, and (4) the lack of assurance that the department obtains independent IG investigations. These limitations serve to reduce the credibility and oversight provided by the State IG. 
From fiscal years 2001 through 2006, the State Department's budgets have increased from $13.7 billion to about $24 billion, an increase of almost 75 percent (or 55 percent in constant dollars adjusted for inflation) in order to manage an expanding role in the global war on terrorism. During this same period, the State IG's budget increased from $29 million to $31 million, which when adjusted for inflation is a decrease of about 6 percent in constant dollars. In addition, of the 318 authorized staff in the State IG's fiscal year 2006 budget, the actual onboard staff averaged 182, or about 57 percent of the authorized level and about 20 percent less than in fiscal year 2001. We continue to identify concerns regarding the independence of the State IG that are similar to concerns we reported almost three decades ago. Independence is critical to the quality and credibility of all the work of the State IG and is one of the most important elements of the overall effectiveness of the IG function. Our concerns include (1) the appointment of line management officials to head the State IG in an acting capacity for extended periods, and (2) the use of ambassador-level Foreign Service staff to lead inspections of the department's bureaus and posts even though they may have conflicts of interest resulting from their roles in the Foreign Service. In addition, because the State IG provides oversight coverage of high-risk areas and management challenges primarily through inspections rather than audits, the department has significant gaps in audit oversight. Compared to audits, oversight provided by inspections is fundamentally limited. To illustrate, the Inspector General Act requires the State IG to follow Government Auditing Standards, while use of inspection standards are voluntary. In addition, unlike auditing standards, inspection standards do not require an external peer review of quality. 
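The constant-dollar changes cited above follow from deflating the end-year amounts by cumulative inflation. A minimal sketch, assuming a cumulative 2001–2006 price-level ratio of about 1.13 (an assumption chosen for illustration, not a figure taken from the report):

```python
def real_change_pct(start, end, price_level_ratio):
    """Percent change measured in constant start-year dollars.

    price_level_ratio: end-year price level divided by start-year price level
    (assumed ~1.13 here for illustration).
    """
    deflated_end = end / price_level_ratio
    return (deflated_end - start) / start * 100

# Department budget: $13.7 billion -> $24 billion nominal (~75 percent increase)
nominal_pct = (24.0 - 13.7) / 13.7 * 100
# Real (constant-dollar) increase comes out near the reported ~55 percent
dept_real = real_change_pct(13.7, 24.0, 1.13)
# State IG budget: $29 million -> $31 million nominal, but a real decrease
ig_real = real_change_pct(29.0, 31.0, 1.13)
```

With these inputs, the department's real increase is roughly 55 percent while the IG's real change is negative, close to the reported decrease of about 6 percent.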
The State IG's ratio of inspections to audits in fiscal year 2005 was 2 to 1 while the ratio for the statutory federal IG community was about 1 to 10. We reviewed 10 of the State IG's inspections performed over fiscal years 2004 and 2005 and found that they relied heavily on questionnaires completed by management at each bureau or post being inspected without verification or testing for accuracy. We also found that investigations of the State Department lack a formal written agreement between the State IG and DS. Such an agreement is critical to help ensure that investigations of internal department operations are performed by the IG and not by bureau investigators who report to department management. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
FDA is responsible for overseeing the safety and effectiveness of medical devices that are marketed in the United States, whether manufactured in domestic or foreign establishments. All establishments that manufacture medical devices for marketing in the United States are required to register annually with FDA. As part of its efforts to ensure the safety, effectiveness, and quality of medical devices, FDA is responsible for inspecting certain foreign and domestic establishments to ensure that, among other things, they meet manufacturing standards established in FDA’s quality system regulation. Within FDA, CDRH is responsible for assuring the safety and effectiveness of medical devices. Among other things, CDRH works with ORA, which conducts inspections of foreign establishments. FDA may conduct inspections before and after medical devices are approved or otherwise cleared to be marketed in the United States. Premarket inspections are conducted before FDA approves U.S. marketing of a new medical device that is not substantially equivalent to one that is already on the market. Premarket inspections primarily assess manufacturing facilities, methods, and controls and may verify pertinent records. Postmarket inspections are conducted after a medical device has been approved or otherwise cleared to be marketed in the United States and include several types of inspections: (1) Quality system inspections are conducted to assess compliance with applicable FDA regulations, including the quality system regulation to ensure good manufacturing practices and the regulation requiring reporting of adverse events. These inspections may be comprehensive or abbreviated, which differ in the scope of inspectional activity. Comprehensive postmarket inspections assess multiple aspects of the manufacturer’s quality system, including management controls, design controls, corrective and preventative actions, and production and process controls. 
Abbreviated postmarket inspections assess only some of these aspects, but always assess corrective and preventative actions. (2) For-cause and compliance follow-up inspections are initiated in response to specific information that raises questions or problems associated with a particular establishment. (3) Postmarket audit inspections are conducted within 8 to 12 months of a premarket application’s approval to examine any changes in the design, manufacturing process, or quality assurance systems. Requirements governing foreign and domestic inspections differ. Specifically, FDA is required to inspect domestic establishments that manufacture class II or III medical devices every 2 years. There is no comparable requirement to inspect foreign establishments. FDA does not have authority to require foreign establishments to allow the agency to inspect their facilities. However, if an FDA request to inspect is denied, FDA may prevent the importation of medical devices from that foreign establishment into the United States. In addition, FDA has the authority to conduct physical examinations of products offered for import and, if there is sufficient evidence of a violation, prevent their entry at the border. Unlike food, for which FDA primarily relies on inspections at the border, physical inspection of manufacturing establishments is a critical mechanism in FDA’s process to ensure that medical devices are safe and effective and that manufacturers adhere to good manufacturing practices. FDA determines which establishments to inspect using a risk-based strategy. High priority inspections include premarket approval inspections for class III devices, for-cause inspections, inspections of establishments that have had a high frequency of device recalls, and other devices and manufacturers FDA considers high risk. The establishment’s inspection history may also be considered. 
A provision in FDAAA may assist FDA in making decisions about which establishments to inspect because this law authorizes the agency to accept voluntary submissions of audit reports addressing manufacturers’ conformance with internationally established standards for the purpose of setting risk-based inspectional priorities. FDA’s programs for foreign and domestic inspections by accredited third parties provide an alternative to the traditional FDA-conducted comprehensive postmarket quality system inspection for eligible manufacturers of class II and III medical devices. MDUFMA required FDA to accredit third persons—which are organizations—to conduct inspections of certain establishments. In describing this requirement, the House of Representatives Committee on Energy and Commerce noted that some manufacturers have faced an increase in the number of inspections required by foreign countries and that the number of inspections could be reduced if the manufacturers could contract with a third-party organization to conduct a single inspection that would satisfy the requirements of both FDA and foreign countries. Manufacturers that meet eligibility requirements may request a postmarket inspection by an FDA-accredited organization. The eligibility criteria for requesting an inspection of an establishment by an accredited organization include that the manufacturer markets a medical device in the United States and markets (or intends to market) a medical device in at least one other country and that the establishment to be inspected must not have received warnings for significant deviations from compliance requirements on its last inspection. MDUFMA also established minimum requirements for organizations to be accredited to conduct third-party inspections, including protections against financial conflicts of interest and assurances of the competence of the organization to conduct inspections. 
FDA developed a training program for inspectors from accredited organizations that involves both formal classroom training and completion of three joint training inspections with FDA. Each individual inspector from an accredited organization must complete all training requirements successfully before being cleared to conduct independent inspections. FDA relies on manufacturers to volunteer to host these joint inspections, which count as FDA postmarket quality system inspections. A manufacturer that is cleared to have an inspection by an accredited third party enters an agreement with the approved accredited organization and schedules an inspection. Once the accredited organization completes its inspection, it prepares a report and submits it to FDA, which makes the final assessment of compliance with applicable requirements. FDAAA added a requirement that accredited organizations notify FDA of any withdrawal, suspension, restriction, or expiration of certificate of conformance with quality systems standards (such as those established by the International Organization for Standardization) for establishments they inspected for FDA. In addition to the Accredited Persons Inspection Program, FDA has a second program for accredited third-party inspections of medical device establishments. On September 7, 2006, FDA and Health Canada announced the establishment of PMAP. This pilot program was designed to allow qualified third-party organizations to perform a single inspection that would meet the regulatory requirements of both the United States and Canada. The third-party organizations eligible to conduct inspections through PMAP are those that FDA accredited for its Accredited Persons Inspection Program (and that completed all required training for that program) and that are also authorized to conduct inspections of medical device establishments for Health Canada. 
To be eligible to have a third-party inspection through PMAP, manufacturers must meet all criteria established for the Accredited Persons Inspection Program. As with the Accredited Persons Inspection Program, manufacturers must apply to participate and be willing to pay an accredited organization to conduct the inspection. FDA relies on multiple databases to manage its program for inspecting medical device manufacturing establishments. FDA’s medical device registration and listing database contains information on domestic and foreign medical device establishments that have registered with FDA. Establishments that are involved in the manufacture of medical devices intended for commercial distribution in the United States are required to register annually with FDA. These establishments provide information to FDA, such as an establishment’s name and its address and the medical devices it manufactures. Prior to October 1, 2007, this information was maintained in DRLS. As of October 1, 2007, establishments are required to register electronically through FDA’s Unified Registration and Listing System and certain medical device establishments pay an annual establishment registration fee, which in fiscal year 2008 is $1,706. OASIS contains information on medical devices and other FDA-regulated products imported into the United States, including information on the establishment that manufactured the medical device. The information in OASIS is automatically generated from data managed by Customs and Border Protection (CBP). These data are originally entered by customs brokers based on the information available from the importer. CBP specifies an algorithm by which customs brokers generate a manufacturer identification number from information about an establishment’s name, address, and location. FACTS contains information on FDA’s inspections, including those of domestic and foreign medical device establishments. 
FDA investigators enter information into FACTS following completion of an inspection. According to FDA data, there are more registered establishments in China and Germany reporting that they manufacture class II or III medical devices than in any other foreign countries. Canada and the United Kingdom also have a large number of registered establishments. FDA faces challenges in its program to inspect foreign establishments manufacturing medical devices. The databases that provide FDA with data about the number of foreign establishments manufacturing medical devices for the U.S. market have not provided it with an accurate count of foreign establishments for inspection. In addition, FDA conducted relatively few inspections of foreign establishments. Moreover, inspections of foreign medical device manufacturing establishments pose unique challenges to FDA—both in human resources and logistics. FDA’s databases on registration and imported medical devices have not provided an accurate count of establishments subject to inspection, although recent improvements to FDA’s medical device registration database may address some weaknesses. In January 2008, we testified that DRLS provided FDA with information about foreign medical device establishments and the products they manufacture for the U.S. market. According to DRLS, as of September 2007, 4,983 foreign establishments that reported manufacturing a class II or III medical device for the U.S. market had registered with FDA. However, these data contained inaccuracies because establishments may register with FDA but not actually manufacture a medical device or may manufacture a medical device that is not marketed in the United States. In addition, FDA did not routinely verify the data within this database. Recent changes to FDA’s medical device establishment registration process could improve the accuracy of its database. 
In fiscal year 2008, FDA implemented, in addition to its annual user fee, electronic registration and an active re-registration process for medical device establishments. According to FDA, about half of previously registered establishments had reregistered using the new system as of April 11, 2008. While FDA officials expect that additional establishments will reregister, they expect that the final result will be the elimination of establishments that do not manufacture medical devices for the U.S. market and thus a smaller, more accurate database of medical device establishments. FDA officials indicated that implementation of electronic registration and the annual user fee seemed to have improved the data so FDA can more accurately identify the type of establishment registered, the devices manufactured at an establishment, and whether or not an establishment should be registered. According to FDA officials, the revenue from device registration user fees is applied to the process for the review of device applications, including premarket inspections. FDA has also proposed, but not yet implemented, the Foreign Vendor Registration Verification Program, which could also help improve the accuracy of information FDA maintains on registered foreign establishments. Through this program, FDA plans to contract with an external organization to conduct on-site verification of the registration data and product listing information of foreign establishments shipping medical devices and other FDA-regulated products to the United States. FDA has solicited proposals for this contract, but it is still developing the specifics of the program. For example, as of April 2008, the agency had not yet established the criteria it would use to determine which establishments would be visited for verification purposes or determined how many establishments it would verify annually. FDA plans to award this contract in June 2008. 
Given the early stages of this process, it is too soon to determine whether this program will improve the accuracy of the data FDA maintains on foreign medical device establishments. FDA also obtains information on foreign establishments from OASIS, which tracks the importation of medical devices and other FDA-regulated products. While not intended to provide a count of establishments, OASIS does contain information about the medical devices actually being imported into the United States and the establishments manufacturing them. However, inaccuracies in OASIS prevent FDA from using it to develop a list of establishments subject to inspection. OASIS contains an inaccurate count of foreign establishments manufacturing medical devices imported into the United States as a result of unreliable identification numbers generated by customs brokers when the product is offered for entry. FDA officials told us that these errors result in the creation of multiple records for a single establishment, which results in inflated counts of establishments offering medical devices for entry into the U.S. market. According to OASIS, in fiscal year 2007, there were as many as 22,008 foreign establishments that manufactured class II medical devices for the U.S. market and 3,575 foreign establishments that manufactured class III medical devices for the U.S. market. FDA has supported a proposal with the potential to address weaknesses in OASIS, but FDA does not control the implementation of this proposed change. FDA is pursuing the creation of a governmentwide unique establishment identifier, as part of the Shared Establishment Data Service (SEDS), to address these inaccuracies. Rather than relying on the creation and entry of an identifier at the time of import, SEDS would provide a unique establishment identifier and a centralized service to provide commercially verified information about establishments. 
The standard identifier would be submitted as part of import entry data when required by FDA or other government agencies. SEDS could thus eliminate the problems that have resulted in multiple identifiers associated with an individual establishment. The implementation of SEDS is dependent on action from multiple federal agencies, including the integration of the concept into a CBP import and export system under development and scheduled for implementation in 2010. In addition, once implemented by CBP, participating federal agencies would be responsible for bearing the cost of integrating SEDS with their own operations and systems. FDA officials are not aware of a specific time line for the implementation of SEDS. Developing an implementation plan for SEDS was recommended by the Interagency Working Group on Import Safety. Although comparing information from its registration and import databases could help FDA determine the number of foreign establishments marketing medical devices in the United States, the databases do not exchange information to be compared electronically and any comparisons are done manually. FDA is in the process of implementing additional initiatives to improve the integration of its databases, and these changes could make it easier for the agency to establish an accurate count of foreign manufacturing establishments subject to inspection. The agency’s Mission Accomplishments and Regulatory Compliance Services (MARCS) is intended to help FDA electronically integrate data from multiple systems. It is specifically designed to give individual users more complete information about establishments. FDA officials estimated that MARCS, which is being implemented in stages, could be fully implemented by 2011 or 2012. However, FDA officials told us that implementation has been slow because the agency has been forced to shift resources away from MARCS and toward the maintenance of current systems that are still heavily used, such as FACTS and OASIS. 
Taken together, changes to FDA’s databases could provide the agency with more accurate information on the number of establishments subject to inspection. However, it is too early to tell whether this will improve FDA’s management of its inspection program. From fiscal year 2002 through fiscal year 2007, FDA inspected relatively few foreign medical device establishments and primarily inspected establishments located in the United States. During this period, FDA conducted an average of 247 foreign establishment inspections each year, compared to 1,494 inspections of domestic establishments. This average number of foreign inspections suggests that each year FDA inspects about 6 percent of registered foreign establishments that reported manufacturing class II or class III medical devices. FDA officials estimated the agency had inspected foreign class II manufacturers every 27 years and foreign class III manufacturers every 6 years. The inspected foreign establishments were in 44 foreign countries and more than two-thirds were in 10 countries. Most of the countries with the highest number of inspections were also among those with the largest number of registered establishments that reported manufacturing class II or III medical devices. The lowest rate of inspections in these 10 countries was in China, where 64 inspections were conducted in this 6-year period and 568 establishments were registered as of May 6, 2008. (See table 1.) FDA’s inspections of foreign medical device establishments were primarily postmarket inspections. While premarket inspections were generally FDA’s highest priority, relatively few have had to be performed in any given year. Therefore, FDA focused its resources on postmarket inspections. From fiscal year 2002 through fiscal year 2007, 89 percent of the 1,481 foreign establishment inspections were for postmarket purposes. Inspections of foreign establishments pose unique challenges to FDA— both in human resources and logistics. 
FDA does not have a dedicated cadre of investigators that only conduct foreign medical device establishment inspections; those staff who inspect foreign establishments also inspect domestic establishments. Among those qualified to inspect foreign establishments, FDA relies on staff to volunteer to conduct inspections. FDA officials told us that it has been difficult to recruit investigators to voluntarily travel to certain countries. However, they added that if the agency could not find an individual to volunteer for a foreign inspection trip, it would mandate the travel. Logistically, foreign medical device establishment inspections are difficult to extend even if problems are identified because the trips are scheduled in advance. Foreign medical device establishment inspections are also logistically challenging because investigators do not receive independent translation support from FDA or the State Department and may rely on English-speaking employees of the inspected establishment or the establishment’s U.S. agent to translate during an inspection. FDA recently announced proposals to address some of the challenges unique to conducting foreign inspections, but specific steps toward implementation and associated time frames are unclear. FDA noted in its report on revitalizing ORA that it was exploring the creation of a cadre of investigators who would be dedicated to conducting foreign inspections. However, the report did not provide any additional details or time frames about this proposal. In addition, FDA announced plans to establish a permanent presence overseas, although little information about these plans is available. FDA intends that its foreign offices will improve cooperation and information exchange with foreign regulatory bodies, improve procedures for expanded inspections, allow it to inspect facilities quickly in an emergency, and facilitate work with private and government agencies to assure standards for quality. 
FDA’s proposed foreign offices are intended to expand the agency’s capacity for overseeing, among other things, medical devices, drugs, and food that may be imported into the United States. The extent to which the activities conducted by foreign offices are relevant to FDA’s foreign medical device inspection program is uncertain. Initially, FDA plans to establish a foreign office in China with three locations—Beijing, Shanghai, and Guangzhou—comprised of a total of eight FDA employees and five Chinese nationals. The Beijing office, which the agency expects will be partially staffed by the end of 2008, will be responsible for coordination between FDA and Chinese regulatory agencies. FDA staff located in Shanghai and Guangzhou, who are to be hired in 2009, will be focused on conducting inspections and working with Chinese inspectors to provide training as necessary. FDA noted that the Chinese nationals will primarily provide support to FDA staff, including translation and interpretation. The agency is also considering setting up offices in other locations, such as India, the Middle East, Latin America, and Europe, but no dates have been specified. While the establishment of both a foreign inspection cadre and offices overseas have the potential for improving FDA’s oversight of foreign establishments, it is too early to tell whether these steps will be effective or will increase the number of foreign medical device establishment inspections. Few inspections of foreign medical device manufacturing establishments—a total of six—have been conducted through FDA’s two accredited third-party inspection programs, the Accredited Persons Inspection Program and PMAP. FDAAA specified several changes to the requirements for inspections by accredited third parties that could result in increased participation by manufacturers. 
Few inspections have been conducted through FDA’s Accredited Persons Inspection Program since March 11, 2004—the date when FDA first cleared an accredited organization to conduct independent inspections. Through May 7, 2008, four inspections of foreign establishments had been conducted independently by accredited organizations. As of May 7, 2008, 16 third-party organizations were accredited, and individuals from 8 of these organizations had completed FDA’s training requirements and been cleared to conduct independent inspections. FDA and accredited organizations had conducted 44 joint training inspections. As we previously reported, fewer manufacturers volunteered to host training inspections than have been needed for all of the accredited organizations to complete their training, and scheduling these joint training inspections has been difficult. FDA officials told us that, when appropriate, staff are instructed to ask manufacturers to host a joint training inspection at the time they notify the manufacturers of a pending inspection. FDA schedules inspections a relatively short time prior to an actual inspection, and as we previously reported, some accredited organizations have not been able to participate because they had prior commitments. We previously reported that manufacturers’ decisions to request an inspection by an accredited organization might be influenced by both potential incentives and disincentives. According to FDA officials and representatives of affected entities, potential incentives to participation include the opportunity to reduce the number of inspections conducted to meet FDA and other countries’ requirements. For example, one inspection conducted by an accredited organization was a single inspection designed to meet the requirements of FDA, the European Union, and Canada. 
Another potential incentive mentioned by FDA officials and representatives of affected entities is the opportunity to control the scheduling of the inspection by an accredited organization by working with the accredited organization. FDA officials and representatives of affected entities also mentioned potential disincentives to having an inspection by an accredited organization. These potential disincentives include bearing the cost for the inspection, doubts about whether accredited organizations can cover multiple requirements in a single inspection, and uncertainty about the potential consequences of an inspection that otherwise may not occur in the near future—consequences that could involve regulatory action. Changes specified by FDAAA have the potential to eliminate certain obstacles to manufacturers’ participation in FDA’s programs for inspections by accredited third parties that were associated with manufacturers’ eligibility. For example, a requirement that foreign establishments be periodically inspected by FDA before being eligible for third-party inspections was eliminated. Representatives of the two organizations that represent medical device manufacturers with whom we spoke about FDAAA told us that the changes in eligibility requirements could eliminate certain obstacles and therefore potentially increase manufacturers’ participation. These representatives also noted that key incentives and disincentives to manufacturers’ participation remain. FDA officials told us that they were revising their guidance to industry in light of FDAAA and expected to issue the revised guidance during fiscal year 2008. It is too soon to tell what impact these changes will have on manufacturers’ participation. FDA officials have acknowledged that manufacturers’ participation in the Accredited Persons Inspection Program has been limited. 
In December 2007, FDA established a working group to assess the successes and failures of this program and to identify ways to increase participation. Representatives of two organizations that represent medical device manufacturers told us that they believe manufacturers remain interested in the Accredited Persons Inspection Program. The representative of one large, global manufacturer of medical devices told us that it was in the process of arranging to have 20 of its domestic and foreign device manufacturing establishments inspected by accredited third parties. As of May 7, 2008, two inspections of foreign establishments had been conducted through PMAP, FDA’s second program for inspections by accredited third parties. Although it is too soon to tell what the benefits of PMAP will be, the program is more limited than the Accredited Persons Inspection Program and may pose additional disincentives to participation by both manufacturers and accredited organizations. Specifically, inspections through PMAP would be designed to meet the requirements of the United States and Canada, whereas inspections conducted through the Accredited Persons Inspection Program could be designed to meet the requirements of other countries. In addition, two of the five representatives of affected entities whom we spoke to for our January 2008 statement noted that in contrast to inspections conducted through the Accredited Persons Inspection Program, inspections conducted through PMAP could undergo additional review by Health Canada. Health Canada will review inspection reports submitted through this pilot program to ensure the inspections meet its standards. This extra review poses a greater risk of unexpected outcomes for the manufacturer and the accredited organization, which could be a disincentive to participation in PMAP that is not present with the Accredited Persons Inspection Program. 
Americans depend on FDA to ensure the safety and effectiveness of medical devices manufactured throughout the world. A variety of medical devices are manufactured in other countries, including high-risk devices designed to be implanted or used in invasive procedures. However, FDA faces challenges in inspecting foreign establishments. Weaknesses in its database prevent it from accurately identifying foreign establishments manufacturing medical devices for the United States and prioritizing those establishments for inspection. In addition, staffing and logistical difficulties associated with foreign inspections complicate FDA’s ability to conduct such inspections. The agency has recently taken some positive steps to improve its foreign inspection program, such as initiating changes to improve the accuracy of the data it uses to manage this program and announcing plans to increase its presence overseas. However, it is too early to tell whether these steps will ultimately enhance the agency’s ability to select establishments to inspect and increase the number of foreign establishments inspected. To date, FDA’s programs for inspections by accredited third parties have not assisted FDA in meeting its regulatory responsibilities nor have these programs provided a rapid or substantial increase in the number of inspections performed by these organizations, as originally intended. Recent statutory changes to the requirements for inspections by accredited third parties may encourage greater participation in these programs. However, the lack of meaningful progress in conducting inspections to this point raises questions about the practicality and effectiveness of these programs to help FDA conduct additional foreign inspections. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the subcommittee may have at this time. 
For further information about this statement, please contact Marcia Crosse at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Geraldine Redican-Bigott, Assistant Director; Kristen Joan Anderson; Katherine Clark; William Hadley; Cathleen Hamann; Julian Klazkin; and Lisa Motley made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | As part of the Food and Drug Administration's (FDA) oversight of the safety and effectiveness of medical devices marketed in the United States, it inspects certain foreign and domestic establishments where these devices are manufactured. To help FDA address shortcomings in its inspection program, the Medical Device User Fee and Modernization Act of 2002 required FDA to accredit third parties to inspect certain establishments. In response, FDA has implemented two voluntary programs for that purpose. This statement is based primarily on GAO testimonies from January 2008 (GAO-08-428T) and April 2008 (GAO-08-701T). In this statement, GAO assesses (1) FDA's program for inspecting foreign establishments that manufacture medical devices for the U.S. market and (2) FDA's programs for third-party inspections of those establishments. For GAO's January and April 2008 testimonies, GAO interviewed FDA officials, analyzed information from FDA, and updated GAO's previous work on FDA's programs for inspections by accredited third parties. GAO updated selected information for this statement in early May 2008. 
FDA faces challenges managing its program to inspect foreign establishments that manufacture medical devices. GAO testified in January 2008 that two databases that provide FDA with information about foreign medical device establishments and the products they manufacture for the U.S. market contained inaccurate information about establishments subject to FDA inspection. In addition, comparisons between these databases--which could help produce a more accurate count--had to be done manually. Recent changes FDA made to its registration database could improve the accuracy of the count of establishments, but it is too soon to tell whether these and other changes will improve FDA's management of its foreign inspection program. Another challenge is that FDA conducts relatively few inspections of foreign establishments; officials estimated that the agency inspects foreign manufacturers of high-risk devices (such as pacemakers) every 6 years and foreign manufacturers of medium-risk devices (such as hearing aids) every 27 years. Finally, inspections of foreign manufacturers pose unique challenges to FDA, such as difficulties in recruiting investigators to travel to certain countries and in extending trips if the inspections uncovered problems. FDA is pursuing initiatives that could address some of these unique challenges, but it is unclear whether FDA's proposals will increase the frequency with which the agency inspects foreign establishments. Few inspections of foreign medical device manufacturing establishments have been conducted through FDA's two accredited third-party inspection programs--the Accredited Persons Inspection Program and the Pilot Multi-purpose Audit Program (PMAP). Under FDA's Accredited Persons Inspection Program, from March 11, 2004--the date when FDA first cleared an accredited organization to conduct independent inspections--through May 7, 2008, four inspections of foreign establishments had been conducted by accredited organizations. 
An incentive to participation in the program is the opportunity to reduce the number of inspections conducted to meet FDA's and other countries' requirements. Disincentives include bearing the cost for the inspection, particularly when the consequences of an inspection that otherwise might not occur in the near future could involve regulatory action. The Food and Drug Administration Amendments Act of 2007 made several changes to program eligibility requirements that could result in increased participation by manufacturers. PMAP was established on September 7, 2006, as a partnership between FDA and Canada's medical device regulatory agency and allows accredited organizations to conduct a single inspection to meet the regulatory requirements of both countries. As of May 7, 2008, two inspections of foreign establishments had been conducted by accredited organizations through this program. The small number of inspections completed to date by accredited third-party organizations raises questions about the practicality and effectiveness of these programs to quickly help FDA increase the number of foreign establishments inspected. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
To access the Internet, most residential users dial in to an ISP over a telephone line, although other physical means of access to the Internet— such as through a cable television line—are becoming increasingly common. For a residential customer, the ISP sends the user’s Internet traffic on to the backbone network. To perform this function, ISPs obtain direct connections to one or more Internet backbone providers. Small business users may also connect to a backbone network through an ISP, however, large businesses often purchase dedicated lines that connect directly to Internet backbone networks. An ISP’s traffic connects to a backbone provider’s network at a facility known as a “point of presence.” Backbone providers have points of presence in varied locations, although they concentrate these facilities in more densely-populated areas where Internet end users’ demands for access are greatest. If an ISP or end user is far from a point of presence, it is able to reach distant points of presence over telecommunications lines. Figure 1 depicts two hypothetical Internet backbone networks that link at interconnection points and take traffic to and from residential users through ISPs and directly from large business users. Once on an Internet backbone network, digital data signals that were split into separate pieces or “packets” at the transmission point are separately routed over the most efficient available pathway and reassembled at their destination point. The standards that specify most data transmissions are known as the Internet Protocol (IP) Suite. Under part of this protocol, streams of packets are routed to their destination over the most efficient pathway. Other aspects of the protocol facilitate the routing of packets to their appropriate destination by examining the 32-bit numeric identifier— or IP address—attached to every packet. Currently, IP addresses for North America are allocated by the American Registry for Internet Numbers (ARIN). 
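The 32-bit numeric identifier attached to every packet can be illustrated with a short sketch that converts a dotted-quad IPv4 address to and from its packed 32-bit form. The address used below is a reserved documentation example, not one drawn from this report.

```python
import socket
import struct

def ip_to_int(addr: str) -> int:
    """Pack a dotted-quad IPv4 address into the 32-bit integer
    carried in every IP packet header."""
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def int_to_ip(value: int) -> str:
    """Unpack a 32-bit integer back into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("!I", value))

# 192.0.2.1 is a reserved documentation address (RFC 5737).
packed = ip_to_int("192.0.2.1")  # 0xC0000201
```

Because the identifier is simply a 32-bit integer, routing equipment can compare address prefixes numerically when deciding the most efficient available pathway for each packet.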
There are many Internet backbone providers offering service in the United States. Boardwatch—an industry trade magazine—reports 41 backbone providers with a national network and many other regional backbones. Approximately five to eight of these national providers are considered to be “Tier 1” backbone providers. A Tier 1 provider is defined by Boardwatch as having a network of wide geographic scope, having a network with many IP addresses, having extensive information for traffic routing determinations, and handling a large percentage of transmissions. Unlike telecommunications services, the provision of Internet backbone service is not regulated by governmental communications agencies. Dating back to the 1960s, when data signals began to flow over public telephone networks, FCC determined that “basic services”—the physical transport of data over telephone networks—would be regulated, but that “enhanced services”—the data-processing or computer-enhanced functions of data transmissions—constituted a vibrant and competitive market that should remain free of regulation. Congress maintained this distinction when it enacted the Telecommunications Act of 1996, terming these services “telecommunications” and “information,” respectively. No provisions were contained in the 1996 act pertaining to Internet backbone services; rather, the act sought to increase competition in other communications sectors, primarily the local telephone market. However, the treatment of these more established communications services and infrastructures under the Communications Act of 1934—as amended by the 1996 act—has indirectly affected the burgeoning Internet medium. Additionally, the act provided FCC and the states the authority to take actions to encourage the deployment of advanced telecommunications capability. Two types of facilities are used for the exchange of data traffic by interconnected Internet backbone providers. 
The first type of facility, known as a “network access point” (NAP), enables numerous backbone providers to interconnect with each other at a common facility for the exchange of data traffic. Internet data traffic is also exchanged by backbone providers at “private” interconnections. Independent of the type of facility at which backbone providers exchange traffic, two different types of financial arrangements exist among backbone providers for traffic exchanges. In a “peering” relationship, backbone providers exchange data destined only for each other’s network generally without the imposition of a fee. Transit payments, which involve the payment by one backbone provider to another for the mutual exchange of traffic and for the delivery of traffic to other providers, have become more common with time. A NAP facilitates the interconnection of multiple backbone providers. In the early to mid-1990s, the National Science Foundation designed and partially funded four NAPs, each of which was managed by a different company. Since that time, other interconnection points have been constructed, and for purposes of this report, the term NAPs refers to approximately 10 major traffic exchange points that host backbone providers. Managed by different companies, NAPs are not uniform facilities; differences exist in terms of equipment, software, and data transmission rates. Although most backbone providers we interviewed use the NAPs, a few providers voiced concerns about them. In the first years of their existence, NAPs became congested with the rapid rate of growth in Internet traffic. Two of the providers with whom we spoke said that some NAPs were not well managed. Also, originally some NAP technology was not “scalable”— that is, beyond some level, it was very costly to increase the amount of traffic that could be exchanged at a NAP. If traffic exchange at a NAP became congested, service quality could be compromised. 
Two typical problems that congestion causes include latency (delay in the transmission of traffic) and packet loss (when transmitted data are actually lost and never reach their destination). For example, one backbone provider told us that the loss of packets at some NAPs had sometimes reached 50 percent. The congestion and poor quality of connections at the NAPs led backbone providers to engage in another type of traffic exchange known as “private interconnection.” Private interconnection refers to the exchange of traffic at a place other than a NAP. Usually, these private interconnections involve two companies entering into a bilateral agreement to exchange traffic; no third party manages the traffic exchange. The parties interconnect their networks at any feasible location, such as a facility of one of the providers. Because of the private nature of these agreements, the number of private interconnections that currently exist across the United States, according to one company representative, is not known. Despite a variety of technological developments that have improved traffic flow at NAPs, we found that for the providers we interviewed, the majority of Internet traffic exchange occurs at private interconnection points. Of 17 backbone providers with whom we spoke, 15 used both NAPs and private interconnections; the remaining 2 used only private interconnections, avoiding the NAPs entirely. Slightly more than half of the 15 providers using both NAPs and private interconnection said they exchanged more than 80 percent of their traffic at private exchange points. Of the 17 companies that we met with, 10 provided estimates of how their mix of private interconnection and NAP use would likely change in the future. Nine of the 10 stated that they either plan less use of NAPs in the next few years or do not see their mix of NAPs and private interconnection changing; only one company said that it was likely to make greater use of NAPs in the future. 
We found that some Internet backbone providers value several features of NAPs. For example, when a company interconnects at a NAP, it saves on equipment costs and administrative overhead. Representatives of two companies with whom we spoke noted that the NAPs play an important role in helping to keep the market for backbone service open for entry, and thus more competitive, because NAPs provide new backbone firms an efficient, low-cost method for exchanging traffic with numerous other providers. When the commercial Internet began, only a few major backbone providers of relatively similar size existed, each of which sent and received roughly equal amounts of traffic. The similarities among these backbone firms led them to view each other as “peers.” These providers elected to exchange traffic for free, rather than trying to measure the actual traffic exchanged and developing a payment method. In a peering arrangement, two backbone providers agree to exchange traffic destined only for each others’ networks. As depicted in figure 2, the peering agreement between backbone provider A and backbone provider B only covers traffic going from A’s network to B’s network and vice versa. For backbone A to move traffic to backbone C’s network under peering, it must have a peering agreement directly with backbone C. By the mid to late-1990s, another financial arrangement known as “transit” emerged. Transit and peering are distinctive in two key respects. First, while peering generally entails traffic exchange between two providers without payment, transit entails payment by one provider to another for carrying traffic. Transit agreements thus constitute a supplier-customer relationship between some backbone providers, much like the relationship between a backbone provider and a nonbackbone customer (such as an ISP). 
Second, when a backbone provider buys transit from another provider, it obtains not only access to the “supplier’s” backbone network, but also access to any other backbone network with which its supplier peers. Regarding physical locations, however, both transit and peering take place at NAPs as well as at private interconnection points. Backbone providers are currently segregated into “tiers.” The top tier, or “Tier 1,” providers generally peer with each other and sell transit to smaller backbone providers. However, we found that smaller providers often peered with each other and were able, in some cases, to peer with larger providers. The illustration in figure 3 shows backbone provider C as a transit customer of backbone provider B and backbone providers B and A as peers. In this case, traffic originating on backbone C can get to backbone B’s network as well as to that of backbone A (with which backbone C does not have an independent relationship) because B will pass C’s traffic off to A as part of its delivery of transit service to C. Thus, a smaller backbone provider generally need only buy transit from one or two large providers to achieve universal connectivity. We found that it is generally not viewed as economical for a backbone provider to peer with a less geographically dispersed backbone provider. This is because, even if there were equal traffic flows, the larger provider will tend to carry traffic a farther distance—which, according to a larger backbone provider we spoke with, ultimately means more costs are imposed on its infrastructure—when it peers with a provider with a smaller or less widely dispersed network. Figures 4 and 5 illustrate this dynamic. In figure 4, backbone providers A and B are of similar size, and traffic between the two could be carried mostly by one backbone provider in one direction, but mostly by the other in the opposite direction.
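The peering and transit arrangements illustrated in figure 3 can be sketched as a short reachability computation. The following is a minimal sketch in Python; the network names and data structures are hypothetical, and it deliberately ignores multi-hop transit chains and a supplier's other customers.

```python
# Minimal sketch of peering vs. transit reachability (hypothetical names).
# - A peering agreement covers only traffic between the two peers.
# - Buying transit from a supplier also grants access to the networks
#   that the supplier peers with.
def reachable(network, peers, transit_suppliers):
    """Return the set of other networks this backbone can deliver traffic to."""
    reached = set(peers.get(network, set()))           # direct peers
    for supplier in transit_suppliers.get(network, set()):
        reached.add(supplier)                          # the supplier itself
        reached |= peers.get(supplier, set())          # the supplier's peers
    reached.discard(network)
    return reached

# Figure 3 scenario: A and B peer with each other; C buys transit from B.
peers = {"A": {"B"}, "B": {"A"}}
transit = {"C": {"B"}}

print(sorted(reachable("C", peers, transit)))  # ['A', 'B']
```

Under these rules, C reaches both A and B through a single transit purchase, which mirrors the observation that a smaller provider generally need only buy transit from one or two large providers to achieve universal connectivity.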
In figure 5, backbone provider D is smaller than backbone provider C, with more limited points at which traffic can be brought onto the network. When backbones C and D exchange traffic, C must carry the traffic much farther on the return path before it can hand off the data packets to D. Therefore, C might consider D to be benefiting from C’s network investment and thus, C would be more likely to view D as a customer purchasing access to its network than as a peer in traffic exchange. The “tiering” of Internet backbone providers and the dual system of peering and transit agreements have caused controversies. Several of the non-Tier 1 backbone providers with whom we spoke expressed concerns about their inability to peer with the largest providers. In particular, we were told that the inability of non-Tier 1 providers to peer with Tier 1 providers puts smaller companies—which must therefore purchase transit service—at a competitive disadvantage. We were also told that peering policies should be made public. To some extent, market forces may be relieving some of these problems. First, despite the view that smaller providers have no choice but to buy transit, some backbone providers with whom we spoke stated that the market is competitive, and transit rates have been decreasing. Second, eight of the backbone providers with whom we spoke (some of which were Tier 1 providers and some of which were not) said they already had posted or soon would be posting their peering policies on their Web sites or otherwise making them publicly available. Perhaps most interesting, we found that some non-Tier 1 backbone providers do not want to peer with the largest backbone providers. For example, one provider spoke critically of the quality of peering connections and the quality of service provided between peers. Some stated that it is difficult to guarantee their own clients a certain level of service if they receive few guarantees themselves—a common occurrence under peering. 
Transit customers, however, do contract for a specified level of service for such items as “uptime”—the functioning of a network without impairment or failure. No official data sources were identified that would provide information on the structure and competitiveness of the Internet backbone market. Market participants we interviewed—Internet backbone providers, ISPs, and other end users—described the Internet backbone market as competitive. Several characteristics were described by market participants, such as increasing choice of providers and lower prices, as evidence of the competitiveness of the market. However, officials also described to us factors that may reduce competition in this market or cause other problems, such as the limited number of Tier 1 providers, the limited choice of providers in rural areas, the manner in which Internet addresses are assigned, and the lack of control or knowledge about the movement of traffic across backbone networks. We were also told that the choice of local telephone companies providing access to Internet backbone networks may be limited, creating problems for providers of Internet services. We found no official data source that could provide information to allow an empirical investigation of the nature of competition in the Internet backbone market. In particular, we found little in the way of official or complete information on the relative size of companies—even the largest companies—operating in the market. Neither FCC nor NTIA collects data on the provision of Internet backbone services. However, FCC does solicit public comments on the deployment of underlying telecommunications infrastructure that supports backbone services for its report on advanced telecommunications capabilities under section 706 of the Telecommunications Act of 1996. DOJ often collects data for merger-specific analyses—as it did in two cases that involved an assessment of backbone assets—but such data are not publicly available.
We also found that neither the Bureau of Labor Statistics nor the U.S. Census Bureau currently collects data directly on Internet backbone providers; both agencies collect only aggregate data on services provided by telecommunications providers. To investigate the degree of competition, we spoke with an array of buyers and sellers of backbone connectivity and asked questions that were designed to provide information about the competitiveness of the market. For example, we asked questions about the availability of choice among providers in the market, the viability of purchasing transport to a distant location to connect to a backbone provider, the length of contracts for backbone connectivity, the types of service guarantees buyers receive from sellers, the ability of buyers to negotiate favorable contract terms, and the factors that were important to buyers when choosing a backbone provider. Representatives of ISPs and end users we interviewed throughout the country described the Internet backbone market as competitive. Most of these providers stated that they have several choices of backbone providers from which to obtain services. Although a few ISP representatives noted a relatively limited number of companies among the Tier 1 providers, they nonetheless considered the market to be competitive with greater choices across the entire range of backbone providers. Similarly, most non-Tier 1 backbone providers stated that they can purchase transit from a number of Tier 1 backbone providers. A few ISPs and other purchasers of backbone services also noted that the extensive choice of backbone providers enables them to engage in “multihoming”—purchasing backbone services from more than one provider—to provide redundant access that enhances ISPs’ assurances to customers of uninterrupted Internet connectivity.
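The redundancy benefit of multihoming described above can be illustrated with a toy failover routine; the provider names and the simple up/down flags are hypothetical stand-ins for real link monitoring.

```python
# Toy illustration of multihoming: an ISP buys connectivity from two
# backbone providers and falls back to the second if the first fails.
# (Provider names and status flags are hypothetical.)
uplinks = [
    {"name": "BackboneA", "up": False},  # suppose A's link has failed
    {"name": "BackboneB", "up": True},
]

def pick_uplink(uplinks):
    """Return the first working uplink, in order of preference."""
    for link in uplinks:
        if link["up"]:
            return link["name"]
    raise RuntimeError("all uplinks down: connectivity lost")

print(pick_uplink(uplinks))  # BackboneB
```

With a single provider, the failure above would mean lost connectivity; with two, service continues, which is the assurance of uninterrupted connectivity that multihomed ISPs offer their customers.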
We found, based on our discussions with ISPs and other purchasers of backbone connectivity, that several characteristics of the market show evidence of its competitiveness. In particular:

- Many ISPs noted that, coincident with increased choice of backbone providers throughout the country, the price of backbone connectivity had declined significantly in recent years.
- Representatives of several companies told us that although they were presented with standard contracts by backbone providers, they were able to negotiate terms and conditions in their contracts that were important to them.
- A few ISP representatives with whom we met said they receive frequent sales calls from multiple backbone providers.
- An ISP representative noted that many backbone providers are working to increase the speed and decrease the latency of transmissions of their networks to improve their competitiveness in the market.
- Even though there have been bankruptcies and consolidation in this market, a few new backbone providers have entered the market in the recent past.
- Some backbone providers are filling market niches by offering customers additional or unique services to complement their backbone services.

The majority of market participants with whom we spoke expressed the view that the Internet backbone market is competitive, if not highly competitive. At the same time, many of these respondents noted factors that might be reducing the level of competition or creating other problems in this market. In particular, we were told that (1) a small number of large backbone providers stand out as the premier providers, (2) choice among backbone providers may be more limited in rural areas, (3) ISPs are concerned about the way Internet addresses are assigned to users, and (4) ISPs and other end users are frustrated by their minimal control and understanding about how their traffic moves across Internet backbone networks.
ISPs and other end users indicated to us a general perception that Tier 1 companies are “different” or superior when compared with other backbone providers. For example, 17 of the 24 ISPs and all 8 of the end users we interviewed purchase backbone connectivity from at least 1 of the 5 Tier 1 backbone providers identified in a recent FCC Working Paper. Similarly, 11 ISPs and 3 end users we interviewed explicitly stated that it was important to them to purchase service from a Tier 1 provider. Finally, many ISPs and end users stated that it was important to them to purchase backbone connectivity from a provider possessing certain network characteristics. Commonly cited characteristics of importance were a network with a broad geographic scope, many customers, significant capacity, and good peering arrangements with other providers. These are all common characteristics of Tier 1 backbone providers. Because Tier 1 providers are viewed as a special class of backbone providers, the existence of approximately 40 national backbone providers may not fully reveal the competitiveness of this market. Instead, it appears that only the 5 to 8 Tier 1 backbone providers are viewed as competitors for primary backbone connectivity. However, most of the ISPs and end users with whom we spoke nonetheless stated that the market is competitive and they have significant choice of provider. It appears that even if the “relevant” market for primary backbone connectivity is the Tier 1 providers, that market segment may be viewed as competitive. A remaining concern regarding the “tiered” segmentation of the market is the potential for the number of Tier 1 providers to decline or for one of these providers to become dominant. For example, the recent economic downturn in the communications sector may portend a further shakeout of backbone providers. 
Several of the company officials we interviewed expressed concern that there would be consolidation among the Tier 1 providers and thus noted the importance of antitrust oversight of this industry. Moreover, both an FCC Working Paper and the Antitrust Division of DOJ have noted that in industries such as the Internet backbone market, interconnection among carriers is critical to the quality of service consumers receive. As such, a much larger provider may have less incentive to maintain good interconnection quality with other providers because, without quality interconnection, customers may have an incentive to buy service from the largest provider with the best-connected network. This would give the larger provider a competitive advantage, which in turn could cause the market to “tip”—that is, more and more users would choose connectivity from the larger network—risking a monopolization of the industry. Because of this concern, both agencies have noted that if one of the Tier 1 providers were to grow considerably larger than the rest, there could be competitive concerns. Members of Congress are often concerned about whether telecommunications services reach rural areas. Several representatives of companies we interviewed noted that there are fewer Internet backbone facilities running through rural areas and fewer points of presence in those areas. As such, purchasers of backbone connectivity in rural areas may have fewer choices among providers than their counterparts in more urban locations. One point made by two rural providers is that rural areas sometimes have subsidized networks (e.g., state networks or networks funded, in part, by governmental subsidy) that may actually discourage private backbone companies from entering and thriving in such markets. Despite the view that rural areas have fewer choices among backbone providers, most companies we interviewed in rural areas purchased “transport” services to connect to an Internet backbone network.
That is, they were able to transmit their traffic over fiber lines, most often owned by one or more local telephone carriers, to a backbone provider’s point of presence that was perhaps hundreds of miles away. Eighteen of the 24 ISPs and 3 of the 8 end users we interviewed used transport from their location to another location for at least some of their Internet traffic. Sometimes transport was used to move data traffic to a nearby city that was not very far away—perhaps 30 to 50 miles. But in some cases—particularly for ISPs in rural areas—traffic was transported a few hundred miles to a point of presence of a backbone provider. The majority of officials from these companies told us that the quality of Internet service is not diminished by transporting traffic across such distances. Because many ISPs and end users told us that distant transport was a viable option for obtaining Internet backbone connectivity, even ISPs and users in more rural areas told us that they generally had choice among backbone providers that could receive traffic at varied distant locations. The one disadvantage of distant transport noted by several providers, however, was cost. Some company officials noted that it generally costs more to purchase transport to a distant location than it does to connect to a backbone at a local point of presence. Two companies specifically mentioned that they had or were planning to move their facilities to more urban locations because of the cost of distant transport. Several ISPs and end users with whom we spoke expressed concern about the manner in which Internet addresses are allocated. Most ISPs and other end users—except for fairly large organizations—do not directly obtain their own IP addresses, but they instead receive a block of IP addresses from a backbone provider. 
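The block-based address assignment just described can be sketched with Python's standard ipaddress module. The provider block below is a hypothetical example; the point is that every customer sub-block falls under one provider prefix, so other backbones need only a single routing entry to reach all of them.

```python
import ipaddress

# Hypothetical large block assigned (e.g., by ARIN) to a backbone provider.
provider_block = ipaddress.ip_network("198.51.0.0/16")

# The provider carves smaller blocks out of it for its ISP customers.
isp_blocks = list(provider_block.subnets(new_prefix=24))[:3]
print([str(b) for b in isp_blocks])
# ['198.51.0.0/24', '198.51.1.0/24', '198.51.2.0/24']

# Aggregation: every customer block lies within the provider's prefix,
# so one routing-table entry (the /16) covers all of these customers --
# but it also means an ISP that switches providers must renumber.
assert all(b.subnet_of(provider_block) for b in isp_blocks)
```

This aggregation is the routing efficiency the allocation method was designed for, and the flip side, renumbering when changing providers, is the switching cost discussed below.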
In particular, when an ISP obtains an Internet connection from a backbone provider, it also generally receives a block of IP addresses from among the addresses that are assigned by ARIN to that backbone provider. This method of IP address allocation was adopted for technical efficiency reasons—that is, allocations made in this manner reduce the number of addresses that need to be maintained for traffic routing purposes. (See app. II for detailed information on IP address allocations). While the method of allocating IP addresses in large blocks enables backbone routers to operate efficiently, some of the ISPs and end users with whom we spoke also told us that it makes it difficult for smaller entities to switch backbone providers. In particular, if an ISP were to change its backbone provider, it would generally have to relinquish its block of IP addresses and get a new block of addresses from the new backbone provider. Several ISPs and end users with whom we spoke told us that changing address space can be time consuming and costly. We found that the degree of difficulty in changing address space depends on how an individual company’s computer network is configured. Two respondents expressed concern about the loss of customers due to a change of IP addresses. A few also told us that it is not uncommon for an ISP to retain a relationship with its original backbone provider—paying for a minimal level of connectivity to that provider—in order to avoid having to go through a disruptive readdressing process. It appears, therefore, that customers’ feelings of being tied to a provider may lessen the effective level of competitiveness in this market. A concern among several market participants we interviewed was the difficulty of guaranteeing customers a given level of quality for Internet services. We were told that this difficulty is related to the way that the Internet is engineered. 
In particular, several of those with whom we spoke noted that Internet traffic is exchanged among providers on a “best efforts” basis—that is, Internet traffic is routed according to a set of protocols aimed at providing the best routing possible at a given time. However, the Internet was not engineered to enable extremely high quality service at all times—as are telephone networks—and the quality of Internet services can be compromised when high levels of traffic flow lead to congestion. Several of the market participants we interviewed were particularly concerned about their ability to understand where and why problems have occurred. These company representatives told us that when they contact their backbone provider to report service degradation they are sometimes told that the problem is with another interconnected backbone network. Because the Internet is a network of interconnected networks with little data available or reported on service disruptions or outages, finding the source, cause, or reason for a problem may be difficult. Thus, ISPs and end users expressed frustration that accountability for traffic transmission problems is lacking. Several ISPs noted, for example, that they receive service level guarantees from their backbone provider but that collecting remuneration for “downtime”—the time that a network has failed or otherwise is nonfunctional—is difficult because they are unable to prove that the problem occurred on their backbone provider’s network. One backbone provider with whom we spoke also noted that the quality problems inherent in the Internet lead some customers—particularly business clients—to purchase expensive private network services. One of the initiatives of the current and fifth Network Reliability and Interoperability Council (NRIC V) is a trial program for voluntary reporting of outages by providers not currently required to make such reports to FCC, such as Internet backbone providers. 
A focus group of the Council will evaluate the effectiveness of the program upon its completion and analysis of trial data, and it will make a recommendation on outage reporting of these networks. We were told that, due to concerns by some Internet providers about reporting network outages to a governmental agency, there was little participation in the program by Internet providers through the first half of 2001. Although the Internet backbone market appears to be competitive, another market that is essential to the functioning of the Internet may be less so. Most ISPs and other end users connect to a backbone provider’s point of presence through the local telecommunications infrastructure. These systems are typically owned and operated by incumbent telephone companies—those providing local telephone service prior to enactment of the 1996 act. Many of the market participants with whom we spoke noted that local telephone markets are, in their view, close to monopolistic; and some noted that several companies attempting to compete against incumbent local telephone carriers have recently gone out of business. Based on our interviews with market participants, it appears that a limited choice of local carriers may affect the providers of Internet services. In particular, interviewees stated that incumbent telephone carriers take a long time to provision or provide maintenance on special access services and other high speed access lines—which are often used to link businesses (such as an ISP) to an Internet backbone point of presence. Additionally, some companies we spoke with expressed concern about slow or limited deployment of high-speed Digital Subscriber Line (DSL) service in residential areas. Some backbone providers and ISPs said that these problems were more severe or more limiting in rural areas. 
For instance, we were told that rural areas are least likely to have competitors to the local carrier, and the incumbents were less likely to roll out DSL in their more rural markets. Incumbent local carriers, on the other hand, have stated that there is considerable competition in the provision of special access service. One such carrier with which we spoke noted that any delay in its own provisioning of these lines is due to the high expense of deploying the necessary infrastructure and to technical difficulties in rolling out DSL, especially in more rural areas. This carrier also noted that FCC found the percentage of all local lines served by competitors had doubled to approximately 8 percent in 2000. New Internet services, such as video streaming and voice telephone calls over the Internet, are expected to become increasingly common in the coming years. Both Internet backbone networks and local communications infrastructure must have sufficient bandwidth and technical capabilities to support such services. In response to problems of latency and packet loss associated with Internet transmissions, various initiatives and efforts are under way to make improvements in the functioning of the Internet and to build alternative networks that are more robust and reliable. We found that most of those with whom we spoke were optimistic that backbone capacity and technical features would adapt to new needs, but concern was expressed that limited broadband capabilities in local telephone markets could stall certain new applications. Incumbent local telephone companies have stated that the rollout of DSL service is hampered by the cost of reengineering parts of the network and existing regulations that require them to sell piece parts of their networks to competitors at cost-based rates. 
A variety of the company representatives with whom we spoke told us that new services, and some services that were traditionally regulated (such as telephone calls), are expected to become more commonly provided over the Internet in the coming years. Many companies are developing technologies to enable voice services to be provided over IP networks. At present, however, many backbone networks are not well designed to provision such “time-sensitive” services. Specifically, real-time services such as IP telephony and interactive video require “bounded delays”—that is, these services require very low and uniform delays between sender and receiver in order for the service to be of adequate quality. Also, more broadband content is expected to be transmitted over the Internet. Before such broadband content can be provided, both the backbone and the local communications infrastructure must have sufficient bandwidth. Many industry representatives with whom we met told us that latency and the loss of data packets due to traffic congestion are consequences of the current protocols for transmitting Internet traffic. As transmissions of time-sensitive applications over the Internet become increasingly common in the future, these problems may become particularly acute. A few of those we interviewed noted that these applications can run well across one backbone network, but when traffic must traverse more than one network, quality cannot be assured given current routing protocols. We found that participants in Internet markets have begun to address latency and reliability problems in Internet backbone networks. For example: In addition to its experimental outage reporting initiative, NRIC V is in the process of evaluating and will report on the reliability of “packet-switched” networks.
The council is also examining issues related to interconnection and peering of Internet backbone providers and the sufficiency of the best efforts standard for Internet transmissions as more time-sensitive services are provided over the Internet. Companies have emerged to build and provide services over networks that do not rely as much on traffic exchange across networks. For example, we found that a few providers are building and relying on private data networks—rather than the Internet—for the transmission of voice services. Similarly, some companies are building “virtual private networks”— networks configured within a public network for data transmissions that are secured via access control and encryption. Companies reduce reliance on backbone service—and thus increase transmission speed—by caching frequently used content on their servers. In addition, companies have emerged that specialize in caching frequently accessed content and storing it in varied geographic locations, thus making it more quickly accessible to customers. Because the Internet is not viewed as conducive to supporting research capabilities of high-speed technologies and other advanced functions, alternative methods for such research have emerged. For example, “Internet2” is a partnership of universities, industry, and government formed to support research and the development of new technologies and capabilities for future deployment within the Internet. According to many of the company officials we interviewed, there appears to be ample deployment of fiber optic cable in Internet backbone networks to support high bandwidth services. Similarly, we were told that capacity continues to be built by backbone providers and others and that backbone networks’ capacity will not be a bottleneck for the deployment of broadband applications. 
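The caching strategy mentioned above, serving repeat requests locally instead of refetching content across backbone networks, can be sketched as follows. The function fetch_from_origin is a hypothetical stand-in for a retrieval that crosses the backbone.

```python
# Sketch of content caching: repeat requests are served from a local
# store, so only the first request crosses the backbone.
# (fetch_from_origin is a hypothetical stand-in for a backbone fetch.)
backbone_fetches = 0

def fetch_from_origin(url):
    global backbone_fetches
    backbone_fetches += 1
    return "content of " + url

cache = {}

def get(url):
    if url not in cache:        # cache miss: one trip across the backbone
        cache[url] = fetch_from_origin(url)
    return cache[url]           # cache hit: answered locally

for _ in range(5):
    get("http://example.com/page")

print(backbone_fetches)  # 1: four of the five requests never left the cache
```

Storing such caches in varied geographic locations, as the specialized companies described above do, applies the same idea closer to each customer, reducing both backbone load and transmission delay.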
However, concerns were expressed to us that shortcomings in the local telephone market were likely to intensify in the future due, in part, to the increase in demand for broadband applications and content. We found that some companies are offering services to address this problem by attempting to bypass incumbent telephone companies’ facilities and bring services directly to customers. However, the majority of these efforts are focused on business customers in urban areas. For example, we found:

- Metropolitan fiber rings—fiber optic cables encircling central business districts of urban areas—are being constructed as an alternative to using incumbent carrier services. Business customers purchase a direct connection to the fiber ring, which is connected directly to the backbone point of presence.
- Wireless direct access is also becoming available that will enable a company’s data traffic to bypass local telecommunications infrastructures.

While solutions such as these hold promise for greater choice for business customers in urban areas, market forces may not naturally address constraints in capacity of local telecommunications infrastructure in certain areas, particularly in rural, residential locations. Instead, representatives expressed concern that the deployment of broadband telephone facilities in residential and rural areas may not keep up with demand. Some of those we spoke with gave the example of limited DSL deployment in many areas. An incumbent local telephone provider we spoke with stated that it is aggressively rolling out DSL service, but that the service is costly to roll out and often requires significant reengineering of its networks. Incumbent providers have also noted publicly that DSL rollout is hampered by certain regulations that require incumbents to sell parts of their network (including DSL lines) to entrants at cost-based rates.
Legislation is pending in the 107th Congress that would address these concerns, and proponents of this legislation have stated that this will advance the deployment of broadband in residential and rural areas. Opponents of the legislation believe the bill will not foster increased deployment of broadband services and may stifle competition in the local telephone market. Other bills have been introduced in Congress proposing various other approaches and strategies to accelerate the deployment of high-speed data services. In the 6 years since the federal government ended its sponsorship of a key backbone network, the Internet has changed the way people of the world live, work, and play. Its rapid growth is seen in the substantial investments made by private sector firms in backbone networks and interconnection facilities, by the proliferation of interactive applications and content, and by the exponential increase in the connectivity of end users. These developments are particularly noteworthy in light of the dynamic nature of the Internet backbone marketplace—Internet backbone providers not only compete with each other for customers but also cooperate for the exchange of traffic. The success of the Internet, as evidenced by its growth, evolution, diversity, and cooperative structure, has occurred with minimal government involvement or oversight. Despite the Internet’s success and the competitiveness of the Internet backbone market, several issues of concern regarding this market were raised to us during the course of our study. Market participants noted the importance of Tier 1 backbone providers and the potential for reduced competition if consolidation were to occur at the Tier 1-provider level. The inability of backbone customers to ascertain the causes of service degradation or traffic disruptions was also expressed to us, along with concerns about the adaptability of the Internet to new services. 
These and other concerns underscore the need for adequate information on such items as the geographic scope of backbone networks, the number of backbone providers’ customers, the number of IP addresses assigned to providers, traffic flows, and outages. In the absence of adequate information, it is difficult to fully ascertain the quality of service, the reasons for problems when they occur, and the extent of market concentration and competition in the Internet backbone market. The adaptability of backbone networks for new services, such as Internet-based voice and video services, foretells a trend commonly identified as “convergence” in the broader communications sector and the increasing importance of the Internet to the U.S. economy. This expectation of greater convergence was widely shared by the market participants we interviewed for this study and for other studies we have conducted at your request over the past 3 years. Traditionally regulated services—such as voice telephone and video services—are already migrating to the Internet and are widely expected to soon become common applications used by residential and business Internet users. Moreover, advances in technology are changing the very nature of the Internet. In the last half decade, the Internet has evolved from a nascent but promising information tool to a 21st century medium central to commerce and communications for Americans and citizens the world over. The implications of convergence and greater future reliance on the Internet are at present largely unknown. No evidence came to light in the course of this study to suggest that the long-standing hands-off regulatory approach for the Internet has not worked or should be modified. Further, FCC said it believes that the appropriate means to collect information on Internet backbone networks at the present time is through informal and experimental efforts, which are currently under way.
Because of the trend towards convergence in the communications marketplace and the nation’s increasing reliance on the Internet, however, FCC may need to periodically reassess its data collection efforts to evaluate whether they are providing sufficient information about key developments in this industry. FCC should develop a strategy for periodically evaluating whether existing informal and experimental methods of data collection are providing the information needed to monitor the essential characteristics and trends of the Internet backbone market and the potential effects of the convergence of communications services. If a more formal data collection program is deemed appropriate, FCC should exercise its authority to establish such a program. We provided a draft of this report to the FCC, NTIA of the Department of Commerce, and DOJ for their review and comment. FCC and NTIA officials stated that they were in general agreement with the facts presented in the report. Technical comments provided by FCC, NTIA, and DOJ officials were incorporated in this report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report for 14 days after the date of this letter. At that time, we will send copies to interested congressional committees, the Chairman, FCC; the Assistant Secretary of Commerce for Communications and Information, Department of Commerce; the Assistant Attorney General, Antitrust, DOJ; and other interested parties. We will also make copies available to others upon request. If you have any questions about this report, please call me at 202-512-2834. Key contacts and major contributors to this report are listed in appendix IV. 
To obtain information about the characteristics and competitiveness of the Internet backbone market, the Chairman and the Ranking Member of the Subcommittee on Antitrust, Business Rights and Competition, Senate Committee on the Judiciary, asked us to report on (1) the physical structure and financial arrangements among Internet backbone providers, (2) the nature of competition in the Internet backbone market, and (3) how this market is likely to develop in the future. To respond to these objectives, we gathered information from a variety of sources, including government officials, industry participants, and academics familiar with the functioning of this market. We interviewed officials and obtained documents from the Federal Communications Commission, the Department of Justice, the National Telecommunications and Information Administration of the Department of Commerce, the National Science Foundation, the Bureau of Labor Statistics, and the Census Bureau. We also interviewed two national Internet industry trade associations and three academics with expertise in this area. To obtain information from a wide variety of participants within the Internet backbone market, we visited locations in 12 states with varying characteristics. We included large and small cities and rural areas from various regions of the country. Other criteria used for selection of areas were proximity to Internet points of presence, which are access points to the Internet, and proximity to network access points (NAP), which are points where Internet backbones interconnect. Also considered were the presence of other features, including regional backbone networks, statewide educational or government networks, state Internet Service Provider (ISP) associations, or Native American reservations. In the selected localities, we conducted 55 semistructured interviews with participants in the Internet backbone market between January and June 2001. 
For these interviews, we used interview guides containing questions concerning background information about the company, connectivity to backbone networks, business relationships in the backbone market, service quality issues, and views on competition in this market and on other public policy issues. We interviewed eighteen Internet backbone providers of varying size; two miscellaneous Internet companies that provide backbone-like services; twenty-four Internet service providers of varying size; eight end users of backbone services, including a college, a state government, corporations, and providers of content and Web hosting; two state-level ISP associations; one Internet equipment manufacturer; and one incumbent local telephone company. Responses from interviewees were evaluated and general themes were drawn from the aggregated responses and from the aggregated responses of relevant subsets of respondents. These themes are presented in this report. We contacted an additional 32 market participants and industry representatives for purposes of conducting interviews to support this study. In these instances, we were not able to schedule an interview. In some cases, our request for an interview was declined, our telephone contacts were not returned, or we were unable to schedule an interview after repeated discussions with company officials. In addition to the information collected through interviews, we also conducted technical, legal, and regulatory research on the characteristics and competitiveness of the Internet backbone market. Each individual network or node that is connected to the Internet is identified by an Internet Protocol (IP) address—a number that is typically written as four numbers separated by periods, such as 10.20.30.40 or 192.168.1.0. When information is sent from one network or node to another, the packet of information includes the destination IP address. 
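The dotted-quad notation described above is simply a human-readable rendering of a single 32-bit number, with each dotted field representing one byte of that number. A minimal sketch using Python's standard ipaddress module (the addresses are the report's own examples) illustrates the equivalence:

```python
import ipaddress

# An IPv4 address such as 10.20.30.40 is one 32-bit number; each of the
# four dotted fields is one byte (0-255) of that number.
addr = ipaddress.ip_address("10.20.30.40")

as_int = int(addr)  # 10*2**24 + 20*2**16 + 30*2**8 + 40
print(as_int)       # 169090600

# The conversion is reversible: the integer maps back to the same address.
print(ipaddress.ip_address(169090600))  # 10.20.30.40
```

This single-number view is what routers actually operate on when they compare a packet's destination address against routing table entries.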
Because the IP deals with inter-networking—the exchange of information between networks—the IP address is based on the concept of a network address and a host address that uniquely identifies a computer connected to the Internet. The network address indicates the network to which a computer is connected, and the host address identifies the specific computer on that network. Devices known as “routers” send data packets from one network to another by examining the destination IP address of each packet. In its memory, the router maintains a “routing table,” which lists the IP addresses of other networks. The router compares a packet’s destination IP address with the information contained in the routing table to determine the network to which the packet should be sent. In order to ensure that packets from one network can reach any other network, the router must include an entry for each possible network. As more and more network addresses come into use, there is concern about the growth in the number of routing table entries. Historically, IP addresses were organized into three commonly used classes—Classes A, B, and C. For Class A, there are 126 possible network addresses, each with nearly 17 million hosts. Slightly more than 16,000 networks may have a Class B address, each with over 65,000 hosts. Finally, there can be approximately 2 million networks with a Class C address, each with a maximum of 254 host addresses. As the Internet grew, engineers quickly identified the problems associated with exhaustion of Class B addresses and the increasing number of Class C address entries in routing tables and developed a solution known as Classless Inter-Domain Routing (CIDR). CIDR treats multiple contiguous Class C addresses as a single block that requires only one entry in a routing table. 
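The routing-table savings CIDR provides can be sketched with Python's standard ipaddress module. This is an illustrative example, not a real allocation: the 192.168.x.0 blocks below are private-use addresses chosen for the demonstration.

```python
import ipaddress

# Four contiguous former "Class C" (/24) blocks, each of which would
# historically have required its own routing table entry.
blocks = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(4)]

# CIDR aggregation: the four contiguous blocks collapse into one /22
# supernet, so a router needs only a single entry to reach all of them.
collapsed = list(ipaddress.collapse_addresses(blocks))
print(collapsed)  # [IPv4Network('192.168.0.0/22')]

# A router forwards on the most specific (longest-prefix) entry that
# matches a packet's destination address; the one /22 entry still
# covers every host in the original four blocks.
routing_table = collapsed + [ipaddress.ip_network("0.0.0.0/0")]  # plus a default route
dest = ipaddress.ip_address("192.168.2.40")
best = max((n for n in routing_table if dest in n), key=lambda n: n.prefixlen)
print(best)  # 192.168.0.0/22
```

The same aggregation logic is why the report notes that CIDR "substantially reduced" the number of entries each router must maintain.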
This method of IP address allocation was adopted for technical efficiency reasons—the number of IP addresses that must be maintained in each router for traffic routing purposes is substantially reduced. However, this method of IP address allocation presents unique problems for smaller ISPs and other entities. If an entity seeking IP addresses cannot use a large block of addresses issued by the American Registry for Internet Numbers (ARIN), it must obtain its addresses from among the allocations made by ARIN to its Internet backbone provider. ISPs and end users with whom we spoke expressed concern about this method of IP address allocation. In addition to those named above, Naba Barkakati, John Karikari, Faye Morrison, Lynn Musser, Madhav Panwar, Ilga Semeiks, and Mindi Weisenbloom made key contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. 
To have GAO E-mail this list to you every afternoon, go to our home page and complete the easy-to-use electronic order form found under “To Order GAO Products.” Web site: www.gao.gov/fraudnet/fraudnet.htm, E-mail: [email protected], or 1-800-424-5454 (automated answering system). | Although most Americans are familiar with Internet service providers that give consumers a pathway, or "on-ramp," to the Internet, few are familiar with Internet backbone providers and backbone networks. At the Internet's core are many high-capacity, long-haul "backbone" networks that route data traffic over long distances using high-speed fiber lines. Internet backbone providers compete in the marketplace and cooperate in the exchange of data traffic. The cooperative exchange of traffic among backbone providers is essential if the Internet is to remain a seamless and widely accessible public medium. Interconnection among Internet backbone providers varies both in terms of the physical structure and financial agreements of data traffic exchange. The physical structure of interconnection takes two forms: (1) the exchange of traffic among many backbone providers at a "network access point"--a common facility--and (2) the exchange of traffic between two or more backbone providers at "private" interconnection points. No publicly available data exist with which to evaluate competitiveness in the Internet backbone market. Evolution of this market is likely to be largely affected by two types of emerging services. First, demand is likely to rise for time-sensitive applications, such as Internet voice systems. Second, more "broadband"--bandwidth-sensitive--content, such as video, will likely flow over the Internet in the coming years. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
IRS administers America’s tax laws and collects the revenues that fund government operations and public services. In fiscal year 2006, IRS collected more than $2.5 trillion in revenue. IRS’s Taxpayer Service and Enforcement programs generate more than 96 percent of the total federal revenue collected for the U.S. government. Total federal revenues have fluctuated from roughly 16 to 21 percent of gross domestic product between 1962 and 2004. Given the amount of federal revenue collected by IRS, a disruption of IRS operations could have great impact on the U.S. economy. The IRS headquarters building is located in Washington, D.C., and houses over 2,200 of the agency’s estimated 104,000 employees. The headquarters building contains the offices of IRS executive leaders, such as the Commissioner and deputy commissioners, and headquarters personnel for 14 of the agency’s 17 individual business units. On June 25, 2006, the IRS headquarters building suffered flooding during a period of record rainfall and sustained extensive damage to its infrastructure. The subbasement and basement were flooded, and critical parts of the facility’s electrical and mechanical equipment were destroyed or heavily damaged. The subbasement—which contained equipment such as electrical transformers, electrical switchgears, and chillers—was submerged in more than 20 feet of water. In addition, the basement level— which housed the building’s fitness center, food service canteens, computer equipment, and the basement garage—was flooded with 5 feet of water. As a result of the flood damage, the building was closed until December 8, 2006. In response to the flood and the closure of the building, IRS headquarters officials reported activating several of the agency’s emergency operations plans. Over 2,000 employees normally assigned to the headquarters building were relocated to other facilities throughout the Washington, D.C., metropolitan area. 
Although the flood severely damaged the building and necessitated the relocation of IRS employees to alternate office space, particular circumstances limited potential damage and made response and recovery activities easier: No employees were injured, killed, or missing as a result of the flood. Damage was limited to the basement and subbasement levels, and employees were able to enter the building to retrieve equipment and assets 5 days following the flood. IRS and the General Services Administration were able to identify and allocate alternate work space to accommodate all displaced employees, not just those considered critical or essential. According to IRS status reports following the flood, facility space was provided for critical personnel within 10 days and for all headquarters employees within 29 days. Table 1 provides a time line of activities following the flood. The Treasury Inspector General for Tax Administration also reviewed the IRS response to the flooding. According to the Inspector General’s reports, IRS adequately protected sensitive data and restored computer operations to all employees approximately 1 month following the flood. In addition, he reported that the flood caused no measurable impact on tax administration because of the nature of the work performed at this building and the contingency plans that IRS had in place. Finally, he reported that IRS paid $4.2 million in salary costs for 101,000 hours of administrative leave granted to IRS personnel following the flooding. While $3 million was paid for administrative leave during the first week following the flooding, the amount paid for administrative leave decreased in subsequent weeks. IRS headquarters has multiple emergency operations plans that if activated, are intended to work in conjunction with each other during emergencies. 
These plans include a suite of business continuity plans comprising, among others, a business resumption plan for each IRS business unit and an Incident Management Plan. In addition, IRS has a COOP plan for emergency events affecting IRS executive leadership and essential functions. Table 2 summarizes the IRS emergency operations plans and their purposes. FEMA developed FPC 65 to provide guidance to federal executive branch departments and agencies in developing contingency plans and programs to ensure the continuity of essential agency operations. All federal executive branch agencies are required to have such a capability in place to maintain essential government services across a wide range of all-hazard emergencies. This guidance defines the elements of a viable continuity capability for agencies to address in developing their continuity plans. Table 3 summarizes eight general elements of federal continuity guidance that agency plans should address. IRS supplemented federal guidance with sections of its Internal Revenue Manual—a document outlining the agency’s organization, policies, and procedures—related to business resumption plans. Similar to the federal continuity guidance, the Internal Revenue Manual outlined minimum requirements for business resumption plans, including the need to identify people and resources to perform critical functions. The IRS headquarters emergency operations plans we reviewed collectively addressed several of the general elements of guidance identified in FPC 65. For example, the plans adequately identified the people needed to continue performing essential functions and had established procedures for activation. However, other elements were not addressed or were addressed only in part. Specifically, IRS identified two separate lists of essential functions—critical business processes and essential functions for IRS leadership—within its plans but only prioritized one of the lists. 
Furthermore, although the COOP plan outlined provisions for tests, training, and exercises, neither the business resumption plans we reviewed—from Criminal Investigation (CI), Wage and Investment (W&I), and Chief Counsel—nor the Incident Management Plan outlined the need to conduct such activities. While IRS’s Office of Physical Security and Emergency Preparedness provided overall guidance to business units on their business resumption plans, the guidance was inconsistent with the federal guidance on several elements, including the preparation of resources and facilities needed to support essential functions and requirements for regular tests, training, and exercises. Until IRS requires all of the plans that contribute to its ability to quickly resume essential functions to fully address federal guidance, it will lack assurance that it is adequately prepared to respond to the full range of potential disruptions. Inconsistencies between IRS’s business resumption plans and federal guidance can be attributed in part to gaps in IRS internal guidance. IRS provided its business units with guidance on developing business resumption plans, including general guidance within IRS’s Internal Revenue Manual and a business resumption plan template disseminated to the business units. The Internal Revenue Manual provided IRS business units with minimum requirements of elements to include in their plans, such as identifying critical personnel and resources. In addition, the Office of Physical Security and Emergency Preparedness disseminated a business resumption plan template to business units that included, among other things, sections for identifying the critical business processes and personnel to support the resumption of critical activities. IRS’s internal guidance addressed several of the elements of a viable continuity capability. 
For example, the Internal Revenue Manual stated that business resumption plans should include a list of critical personnel, and the business resumption plan template asked each business unit to list its critical team leaders and members and their contact information. Similarly, the IRS guidance adequately addressed execution and resumption. For other continuity planning elements, however, IRS guidance on developing business resumption plans was inconsistent with federal guidance. Specifically, IRS guidance on resources directed business units to identify their need for vital records, systems, and equipment. However, rather than procuring those resources before an event occurs, as outlined in federal guidelines, IRS guidance assumed that business units would work with teams outlined within the Incident Management Plan to acquire those resources following a disruption. Similarly, IRS directed business units to identify alternate work space requirements for personnel, but not to prepare or acquire that space until after a disruption occurs. Finally, IRS guidance did not address the need for tests, training, or exercises involving the critical personnel identified within business resumption plans. Officials from the Office of Physical Security and Emergency Preparedness stated that it was the responsibility of business units to conduct adequate tests, training, and exercises of their business resumption plans. Officials further stated that the IRS response to the June 2006 flooding validated the use of its incident command structure outlined in its Incident Management Plan. Although the incident command structure can be effective at securing needed resources over time, IRS will be able to respond to a disruption more quickly if it prepares necessary resources and facilities before an event occurs. This is especially critical in the case of business processes that need to be restored within 24 to 36 hours. 
Similarly, if personnel are unfamiliar with emergency procedures because of inadequate training and exercises, the agency’s response to a disruption could be delayed. IRS officials largely relied upon the Incident Management Plan to direct their response to the emergency conditions created by the June 2006 flooding. This plan guided officials in establishing roles and responsibilities for command and control of the overall resumption effort and a capability for the procurement of alternate facility space and equipment. Business unit officials were initially guided by their business resumption plans, but later response activities differed from those plans because of the circumstances resulting from the event. According to IRS headquarters officials, the headquarters COOP plan was not activated because local space availability made moving the executive leadership to the alternate COOP facility unnecessary and the safety of the leadership was not at risk. We previously reported that in responding to emergencies, roles and responsibilities for leadership must be clearly defined and effectively communicated in order to facilitate rapid and effective decision making. The IRS Incident Management Plan provided agency officials with clear leadership roles and responsibilities for managing the response and recovery process, including the procurement of temporary facility space and equipment necessary to continue critical business processes. Consistent with the plan, the Incident Commander acted as the leader of IRS headquarters response and recovery activities immediately following the flood. To assist in managing the incident, the Incident Commander activated members of the IRS Incident Management Team and other supporting sections, whose roles and responsibilities were outlined in the plan. 
These individuals included business resumption team leaders from each of the IRS business units and personnel from the central service divisions, such as Real Estate and Facilities Management and Modernization and Information Technology Services. According to minutes from Incident Management Team meetings held in the days following the flood, the following Incident Management supporting teams were activated and provided the following contributions: 1. The Operations Section, responsible for conducting response and recovery activities, gathered information regarding the facility space and equipment requests from the IRS business units, as well as preferences on alternate work location assignments. 2. The Logistics Section, responsible for providing all nonfinancial logistical support, procured and allocated facility space and equipment to IRS business units. 3. The Planning Section, responsible for providing documentation of the emergency, documented decisions and conducted reporting. For example, the Planning Section prepared documents for hearings and maintained relocation schedules and information. 4. The Finance and Administrative Section, responsible for providing all financial support, provided assistance in monitoring agency costs and developing travel and leave policies. According to IRS status reports following the flood, facility space was provided for critical personnel within 10 days and for all headquarters employees within 29 days. The Incident Commander reported that the Incident Management Team and its supporting units stepped down approximately 2 months after the flood. The three business units we reviewed reported that their business resumption plans guided their initial responses to the flood. 
In later phases of their responses, the business units differed from their plans to account for conditions at the time, such as current work priorities and the availability of alternate office space for more staff than the minimum necessary to perform the most critical functions. The following sections outline how selected business units relied on their business resumption plans when responding to the flood. CI used its business resumption plan to (1) establish an internal command structure to coordinate emergency activities following the flood and (2) identify short-term facility space for selected employees. According to the CI business resumption executive, the business unit used alternate facilities previously identified within the CI business resumption plan to relocate personnel within the first 2 days. CI leadership determined which personnel would be placed first and at what locations, since its business unit’s resumption plan did not specify such information. According to the CI business resumption executive, after learning from the Incident Commander that relocation would be for a longer period and that alternate facility space was available to accommodate all displaced CI employees, CI officials submitted a request for facility space and equipment for all of their employees to the Incident Commander and Incident Management Team. In discussing lessons learned, the CI business resumption executive acknowledged that the unit’s plan primarily addressed relocation to alternate facilities for short-term emergencies rather than longer-term events like the flood, and that CI should work with IRS’s central organizations to better plan for relocation in such situations. Furthermore, the executive stated that better tests and exercises of the CI plan could assist in better preparing for a wider range of future emergencies. W&I officials used their plan to identify and prioritize critical tasks. 
W&I managers gathered at a previously scheduled off-site retreat the morning following the flood and conducted a review of the business unit’s resumption plan, according to the new W&I business resumption executive. The executive stated that the activity was particularly useful in addressing identified knowledge gaps in the wake of the prior W&I business resumption leader’s sudden death the day before the flood. Critical business processes and supporting tasks, initially prioritized within the plan, were adjusted to reflect the criticality of several tasks at that time of year. According to the business resumption executive, the revised list of critical business processes allowed W&I managers to identify critical personnel and resources, which were submitted to the Incident Management Team as facility space and resource requests. In addition, the executive stated that W&I managers established a system for placing employees in alternate work space based on their association with the prioritized tasks, although it was not reflected in the W&I business resumption plan. W&I created a document to capture lessons learned following the flood and established an internal business resumption working group to ensure a business resumption capability in all W&I field offices. As W&I officials did not anticipate the need to readjust tasks, one item discussed in the document addressed the need to create a rolling list of critical business processes and critical personnel, as processes and tasks will vary throughout the year. In addition, the W&I business resumption working group developed minimum requirements for all W&I plans and conducted a gap analysis of field office plans to identify areas for improvement. According to the W&I business resumption executive, the working group will conduct a training session for field office business resumption coordinators after the 2007 filing season. 
Although the Chief Counsel resumption efforts were led by people identified within its plan, the unit’s business resumption officials reported that use of the plan was limited because of the high-level content of the document. According to the Chief Counsel’s business resumption executive, the plan was written at a high level because it was expected that specific priorities would be determined by the active caseload at the time of the emergency. The executive stated that following the flood, Chief Counsel prioritized resumption activities based on the active caseload and the need to address emerging requirements, such as (1) ensuring that mail addressed to the business unit’s processing division was rerouted and processed at another facility and (2) supporting a specific court case being conducted in New York City because of its level of criticality and time sensitivity. The executive further stated that officials identified alternate work space in Chief Counsel offices in the Washington, D.C., metropolitan area and placed approximately 180 employees prioritized based on the organizational hierarchy. Chief Counsel submitted requests to the Incident Commander and Incident Management Team for facility space and resources for over 500 remaining employees. Although Chief Counsel was able to identify tasks, such as tax litigation, that were consistent with responsibilities outlined in its plan and procured facility space and resources for personnel, it established a task force that identified recommendations to improve the business unit’s plan in a report documenting lessons learned following the flood. Recommendations included measures to improve the prioritization of critical functions and people and outline provisions for mail processing. 
In addition, because Chief Counsel experienced delays in recovering a computer server that had not been identified in the business resumption plan but proved to be important following the flood, the task force addressed the need to ensure redundancy of information technology equipment. Chief Counsel is currently drafting an action plan to carry out the recommendations of the task force. In addition, a Chief Counsel business resumption official stated that agencywide tests and exercises of business resumption plans could assist in better integration of emergency efforts for a wider range of future emergencies. According to IRS headquarters officials, the headquarters COOP plan was not activated because local space availability made movement of executive leadership to the alternate COOP facility unnecessary and the safety of the leadership was not at risk. When the June 2006 flood occurred at the IRS headquarters building, the agency had in place a suite of emergency plans that helped guide its response. The agency’s Incident Management Plan was particularly useful in establishing clear lines of authority and communications, conditions that we have previously reported to be critical to an effective emergency response. Unit-level business resumption plans we reviewed contributed to a lesser extent and the headquarters COOP plan was not activated because of conditions particular to the 2006 flood. Specifically, damage to the building was limited to the basement and subbasement levels, and employees were able to enter the building to retrieve equipment and assets. In addition, alternate work space was available for all employees within a relatively short period, reducing the importance of identifying critical personnel. Such conditions, however, may not be present during future disruptions. The plans IRS had in place at the time of the flood did not address all of the elements outlined in federal continuity guidance. 
In particular, the IRS plans did not (1) prioritize all essential functions and set targets for recovery times; (2) outline the preparation of resources and alternate facilities necessary to perform those functions; and (3) develop provisions for tests, training, and exercises of all of its plans. In discussions on lessons learned from the flood response, IRS business unit officials recognized the need to incorporate many of these elements. Unless IRS addresses these gaps, it will have limited assurance that it will be prepared to continue essential functions following a disruption more severe than the 2006 flood. To strengthen the ability of IRS to respond to the full range of potential disruptions to essential operations, we are making two recommendations to the Commissioner of Internal Revenue: Revise IRS internal emergency planning guidance to fully reflect federal guidance on the elements of a viable continuity capability, including the identification and prioritization of essential functions; the preparation of necessary resources and alternate facilities; and the regular completion of tests, training, and exercises of continuity capabilities. Revise IRS emergency plans in accordance with the new internal guidance. The Commissioner of Internal Revenue provided comments on a draft of this report in a March 26, 2007, letter which is reprinted in appendix II. The Commissioner agreed with our recommendations. His letter notes that the agency is actively committed to improving its processes. Specifically the agency will (1) conduct a thorough gap analysis between FPC 65 elements and business continuity planning guidance; (2) update the Internal Revenue Manual guidance and business resumption plan templates to reflect areas of improvement resulting from the gap analysis; and (3) formally direct annual tests, training, and exercises of business resumption plans through the agency’s Emergency Management and Preparedness Steering Committee. 
Finally, the Commissioner stated that the agency will revise and implement its emergency plans based on the results of the aforementioned activities. As agreed with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have questions on matters discussed in this report, please contact Bernice Steinhardt at (202) 512-6543 or [email protected], or Linda Koontz at (202) 512-6240 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributions to this report were made by William Doherty, Assistant Director; James R. Sweetman, Jr., Assistant Director; Thomas Beall; Michaela Brown; Terrell Dorn; Nick Marinos; and Nhi Nguyen. The objectives of this report were to evaluate how the Internal Revenue Service’s (IRS) emergency operations plans address federal guidance related to continuity planning and evaluate the extent to which IRS emergency operations plans contributed to the actions taken by IRS officials in response to the flood. To address how IRS emergency operations plans address federal guidance related to continuity planning, we obtained the IRS headquarters emergency operations plans that were available to agency officials at the time of the June 2006 flood. These included the Continuity of Operations (COOP) plan and a suite of business continuity plans, including the Incident Management Plan and 13 business resumption plans from business units affected by the flood. 
Although we also obtained the headquarters Occupant Emergency Plan, we did not evaluate its contributions to addressing the elements because its purpose is limited to outlining procedures for building occupants and emergency personnel in responding to threats that require building evacuations or shelter in place. We did not obtain the Disaster Recovery Plan, a contingency plan for the recovery of information technology equipment, because recovery of information technology equipment was addressed in a report from the Treasury Inspector General for Tax Administration. To evaluate IRS’s emergency operations plans in relation to federal guidance on continuity planning, we analyzed Federal Preparedness Circular (FPC) 65 to identify the elements needed to ensure the continuity of essential functions and compared IRS emergency operations plans to the resulting generalized list. Because FPC 65 covers all-hazards emergencies but provides continuity guidance specifically for agency COOP plans, we developed the general elements of guidance to be able to collectively evaluate all IRS emergency operations plans we obtained. From our analysis of FPC 65, we identified eight general elements of guidance related to developing a viable continuity capability. See table 3 for a listing and description of the elements. We reviewed IRS’s plans and analyzed how they collectively addressed or did not address these eight general elements of guidance. We also reviewed IRS-defined criteria and guidance for emergency operations plans, including sections of the Internal Revenue Manual—which provides guidance to IRS officials on developing several of the agency’s emergency operations plans—and an internal template provided by IRS’s Office of Physical Security and Emergency Preparedness, which is responsible for agencywide emergency planning and policy to guide plan development.
Since each business unit within IRS headquarters has an individual plan for business resumption activities, we selected and examined 3 of 13 business resumption plans available for use during the flood from the 3 business units with the most employees affected by the flooding in the headquarters building. According to employee relocation lists from IRS following the flood, the 3 largest business units in the building are Criminal Investigation, Wage and Investment, and Chief Counsel, which collectively represent over 50 percent of the headquarters building employees. To address the extent to which IRS emergency operations plans contributed to the actions taken by IRS officials in response to the flood, we interviewed IRS officials responsible for the development, oversight, and implementation of the headquarters emergency operations plans. In our interviews, we asked IRS officials responsible for each emergency operations plan how the general elements identified in their respective plans guided their actions following the flood, if at all. To supplement the information gained from the interviews, we reviewed agency documentation related to emergency operations activities following the flood, including IRS status reports, employee relocation lists, and emergency operations team meeting minutes. In addition, we reviewed documentation regarding lessons learned from the flood, provided by various headquarters business units, and obtained any updates or changes to emergency operations plans following the flood. We conducted our review in accordance with generally accepted government auditing standards from July 2006 through March 2007. | On June 25, 2006, the Internal Revenue Service (IRS) headquarters building suffered flooding during a period of record rainfall and sustained extensive damage to its infrastructure. IRS officials ordered the closure of the building until December 2006 to allow for repairs to be completed. 
IRS headquarters officials reported activating several of the agency's emergency operations plans. Within 1 month of the flood, over 2,000 employees normally assigned to the headquarters building were relocated to other facilities throughout the Washington, D.C., metropolitan area. GAO was asked to report on (1) how IRS emergency operations plans address federal guidance related to continuity planning and (2) the extent to which IRS emergency operations plans contributed to the actions taken by IRS officials in response to the flood. To address these objectives, GAO analyzed federal continuity guidance, reviewed IRS emergency plans, and interviewed IRS officials. The IRS headquarters emergency operations plans that GAO reviewed--the headquarters Continuity of Operations (COOP) plan, Incident Management Plan, and three selected business resumption plans--collectively addressed several of the general elements identified within federal continuity guidance for all executive branch departments and agencies. For example, the plans adequately identified the people needed to continue performing essential functions. However, other elements were not addressed or were addressed only in part. Specifically, IRS had two separate lists of essential functions--critical business processes and essential functions for IRS leadership--within its plans, but prioritized only one of the lists. Furthermore, although the COOP plan outlined provisions for tests, training, and exercises, none of the other plans GAO reviewed outlined the need to conduct such activities. While IRS provided overall guidance to its business units on their business resumption plans, the guidance was inconsistent with the federal guidance on several elements, including the preparation of resources and facilities needed to support essential functions and requirements for regular tests, training, and exercises. 
The IRS Incident Management Plan was particularly useful in establishing clear lines of authority and communications in response to the flooding. Unit-level business resumption plans GAO reviewed contributed to a lesser extent, and the headquarters COOP plan was not activated because of conditions particular to the 2006 flood. Specifically, damage to the building was limited to the basement and subbasement levels, and employees were able to enter the building to retrieve equipment and assets. In addition, alternate work space was available for all employees within a relatively short period, reducing the importance of identifying critical personnel. While its plans helped guide IRS's response to the conditions that resulted from the flood, in more severe emergency events, conditions could be less favorable to recovery. Consequently, unless IRS fills in gaps in its guidance and plans, it lacks assurance that the agency is adequately prepared to respond to the full range of potential disruptions. |
Foodborne illness in the United States is extensive and costly. Estimates of the incidence of foodborne illness range from 6.5 million to 81 million cases each year and result in 500 to 9,100 deaths. These illnesses cost the nation between $7 billion and $37 billion annually in medical and productivity losses. Multiple agencies share the responsibility for regulating food safety in the United States—12 different federal agencies in six federal entities are involved. Our past reviews have shown inconsistencies and differences between agencies’ approaches and enforcement authorities that undercut overall efforts to ensure a safe food supply. As such, we have recommended implementing a uniform, risk-based inspection system and a single food safety agency to help correct the problems created by this fragmented system. Fragmentation is not unique to the United States. Food safety officials in each of the four countries we visited maintained that similar fragmentation existed in their systems prior to consolidation. For example, before consolidation, the Danish food-processing sector encompassed seven laws, about 125 regulations, and more than 30 federal agencies and local offices overseeing food safety activities. Prior to Ireland’s consolidation, more than 50 agencies shared food safety responsibilities. The four countries have recently completed or are still in the process of consolidating their activities. Specifically: Canada decided in 1996 to consolidate its food inspection activities into a single new agency. The new agency—the Canadian Food Inspection Agency—officially began operations in April 1997. However, the responsibility for setting health standards for food safety remained with Health Canada. Denmark phased in the consolidation of its food safety activities, beginning in 1995, by combining the Ministry of Agriculture and the Ministry of Fisheries into a single ministry—the Ministry of Agriculture and Fisheries. 
In December 1996, Denmark moved the food safety inspections conducted by its Health Ministry into the Ministry of Agriculture and Fisheries. The new consolidated agency is called the Ministry of Food, Agriculture, and Fisheries. The district and local inspection offices of the old ministries are being reorganized into 11 regional inspection offices within the new Ministry. Once this reorganization takes place, the consolidation will be complete. Great Britain is in the process of consolidating its food safety activities. In September 1997, the Ministry of Agriculture, Fisheries and Food and the Department of Health set up a work group composed of staff from both agencies—known as the Joint Food Safety and Standards Group—to plan the consolidation. In January 1998, the government formally proposed consolidating all food safety responsibilities under a new agency to be known as the Food Standards Agency. Final legislation establishing this agency had not been enacted as of January 1999. However, the British government expects Parliament to act on the new agency this year. Ireland approved the consolidation of all of its food safety responsibilities under the umbrella of the Food Safety Authority of Ireland in July 1998. This Authority officially assumed its responsibilities in January 1999. The three European countries that we visited are members of the European Union and thus in some instances must follow Union directives. The European Commission, which is a regulatory body of the Union, recently made organizational changes that emphasized consumer protection in its food safety policy. The Commission has brought together certain responsibilities for consumer protection and public health for food into a single organization—Directorate General XXIV—which reports to the Commissioner for Consumer Policy and Health Protection. In addition, Directorate General XXIV is responsible for the relevant scientific committees in the food safety area.
According to several of the European food safety officials with whom we met, these changes have made it easier for them to consolidate food safety responsibilities and to reorient newly consolidated agencies toward consumer protection. The four countries we visited had different reasons for consolidating their food safety activities, and therefore their approaches to reorganizing food safety responsibilities also differed. All four countries are incurring short-term costs while expecting long-term benefits. None had developed performance measures and data early in the process to assess the effectiveness of their new systems. The decisions to consolidate food safety responsibilities in the four countries we examined were based on each country’s recent food safety history and economic considerations, among other things. Great Britain and Ireland chose or plan to have their newly consolidated food safety activities report to their ministers for health. In Canada, the new agency reports directly to the Minister for Agriculture. Denmark combined food safety activities with agricultural and fisheries activities, creating a new ministry. According to British food safety stakeholders, the British plan to consolidate food safety activities into a single agency—the Food Standards Agency in the Department of Health—was largely a result of the government’s perceived mishandling of the Bovine Spongiform Encephalopathy (BSE) outbreak. In Great Britain, as of February 1, 1999, the BSE outbreak has resulted in 35 human deaths from a new variant of Creutzfeldt-Jakob disease and has hurt the country’s cattle industry: 3.7 million of Britain’s 10 million cattle had to be destroyed. BSE has also had an adverse impact on the British beef export industry because the European Union banned the trading of British beef among member nations. Cattle producers have suffered large losses in the value of their animals because of depressed markets. 
Other industries affected by the BSE outbreak include slaughterhouses, auctioneers, truckers, and beef export firms. According to several food safety stakeholders, it was widely perceived that the fragmented and decentralized food safety system—divided between several central government departments, such as the Ministry of Agriculture, Fisheries and Food and the Department of Health, as well as local authorities—allowed this outbreak to occur. Some of the stakeholders were particularly concerned with the Ministry of Agriculture, Fisheries and Food’s dual responsibilities to promote agriculture and the food industry as well as to regulate food safety. Consequently, during the 1997 election campaign for Parliament, the then candidate, and now prime minister, called for consolidating food safety responsibilities and for greater openness in the decision-making process about food safety. The public also demanded the consolidation of the food safety system as well as its removal from the Ministry of Agriculture, Fisheries and Food. As of January 1999, British food safety officials and other stakeholders remained committed to consolidating all activities related to food, including, among other things, the management of nutrition, food safety, chemical, and other additives, genetically modified organisms, and meat hygiene and dairy inspections. The consolidated agency will report to the Secretary of State for Health—the cabinet minister responsible for health. The enactment of the Food Standards Agency’s enabling legislation has been delayed while budgetary and other issues are being addressed. In the interim, the government established the Joint Food Safety and Standards Group, which is jointly managed by the Ministry of Agriculture, Fisheries and Food and the Department of Health. 
Similarly in Ireland, outbreaks of foodborne illness and the potential economic consequences of real or perceived unsafe food products provided the impetus for the consolidation of food safety responsibilities into a single agency in July 1998—the Food Safety Authority of Ireland. Irish food safety officials said that a succession of high-profile outbreaks of foodborne illnesses throughout the world, such as the BSE outbreak in Great Britain and the E. coli outbreak in Scotland, shook consumer confidence in the safety of food and in the ability of regulatory agencies to protect the public. In 1998, roughly 80 head of Irish cattle—out of about 7 million head in total—were found to be infected with BSE. These developments signaled not only a public health concern but also a potentially devastating economic problem because Ireland exports about 90 percent of the meat it produces. According to Irish food safety officials, these developments also served to highlight the difficulties that the Department of Agriculture and Food faced in trying to carry out the dual mission of protecting consumers and promoting the food industry. In July 1998, Ireland enacted legislation that (1) created the Food Safety Authority of Ireland, (2) made the Authority responsible for overseeing food safety activities, and (3) had the Authority report to the Minister of Health and Children. The legislation provided the Authority with the power to consolidate all food safety activities into a single agency. In exercising its new duties, as a first step, the Authority entered into service contracts with federal and local agencies to continue their food safety inspections and other activities. These contracts include mutually agreed-upon objectives and milestones. Preconsolidation funding arrangements were maintained. That is, the Parliament provides funds to agencies, which make resources available to fulfill their obligations with the Authority.
According to Authority officials, if the service contracts are not satisfactorily performed, the Authority will initiate efforts to place all food safety activities under its direct control. The Authority received 6.5 million Irish pounds ($10 million in U.S. dollars) in its first-year budget—1.5 million Irish pounds for start-up costs and 5.0 million Irish pounds for coordinating inspection services and new educational programs. The Authority took official charge of food safety on January 1, 1999. In contrast to Great Britain and Ireland, Canada initiated changes to its food safety activities to improve effectiveness and reduce costs. Canada did not face a loss of public confidence as did Great Britain and Ireland, but in the early 1990s, it faced a budgetary crisis and sought ways to reduce federal expenditures. By combining the various elements of its food inspection services, Canada expected to save about 13 percent of its food inspection budget, or $44 million Canadian per year ($29 million in U.S. dollars), and improve the effectiveness of its inspection programs. In April 1997, the Canadian Food Inspection Agency began operations. While national food safety standards continue to be set by Health Canada—Canada’s Department of Health—all federal food inspections are the responsibility of the new food inspection agency, which is also responsible for animal and plant health inspections. The new agency has the status of a departmental corporation under the Financial Administration Act, which provides the agency with the authority to raise and retain funds from its activities. In addition, from the outset the agency has had “separate employer status,” which has enabled it to create its own personnel system. The new agency reports directly to the Minister of Agriculture and Agri-Food Canada—Canada’s department of agriculture. Denmark also launched changes to its food safety system to achieve greater efficiency and effectiveness.
In 1996, at the request of food safety stakeholders, Denmark sought to strengthen its efforts with respect to food safety and food quality by consolidating food safety activities in the newly created Ministry of Food, Agriculture, and Fisheries. Denmark’s aims were to simplify food safety administration, control, and legislation, believing that such reforms would lead to a more efficient and effective food safety system and provide assurances of the quality of Danish food products, many of which are exported. For example, prior to consolidation, three Danish ministries—Health, Agriculture, and Fisheries, each with its own local food safety structure—shared responsibilities for implementing food safety laws. According to the Permanent Secretary of the new ministry, the results of this fragmented approach included extensive overlapping responsibilities in some areas and gaps in coverage in other areas; inconsistent food safety inspections; and inefficient use of food safety resources. The Permanent Secretary and all other stakeholders in Denmark believe that consolidating food safety responsibilities will address these concerns. Officials in the four countries we visited anticipated start-up costs with the consolidation of food safety activities. These costs are in addition to the ongoing operational costs. Specifically, the new, consolidated agencies require additional funding to establish a fully operational food safety system, including such overhead costs as computers and telephones. While the countries may have similar start-up activities, their costs cannot be compared because of differences in the size of the countries, the food safety activities that will be included in these new approaches, and the infrastructure already in place before these new efforts were launched. In Denmark, the start-up cost was about 120 million kroner ($18 million in U.S. dollars, or about 3 percent of Denmark’s food safety budget). 
Canadian officials estimated that their start-up costs were about $25 million Canadian ($17 million in U.S. dollars, or about 7 percent of Canada’s food safety budget). British food safety officials estimate that Great Britain will spend 30 million pounds ($49 million in U.S. dollars, or about 25 percent of the British food safety budget) for start-up over a 3-year period. According to Irish food safety officials, the Food Safety Authority of Ireland’s start-up costs were about 1.5 million Irish pounds ($2 million in U.S. dollars). However, the start-up costs as a percentage of the total food safety budget could not be estimated. Canadian labor officials also noted less obvious costs, such as brief losses in productivity shortly before and immediately after consolidation. Over the long term, however, food safety stakeholders in all four of the countries we visited believe that the benefits of consolidating food safety activities will outweigh the additional costs. Through a more effective, streamlined approach, these officials believe, consolidated food safety agencies offer opportunities to enhance consumer protection and to improve working relationships with the food industry. Specifically, these officials believe that consolidating food safety activities would improve service delivery by providing a single contact for consumer and industry clients; reduce overlap and the duplication of services; improve or reduce the need to coordinate food safety activities, thereby enhancing the efficiency and effectiveness of food safety regulation; provide more comprehensive oversight of food safety from “farm to table”; and enhance food safety, thereby providing continued access to international markets for producers and processors. Food safety officials in all four countries said that their main priority to date has been to consolidate food safety activities. Nevertheless, they believe that evaluation is an important function.
For example, in Canada, the legislation that created the Canadian Food Inspection Agency calls for performance measures. The Canadian inspection agency’s first business plan for 1999 acknowledges the need to establish measures to evaluate the agency’s performance. In Denmark, beginning in 1999, government officials said they plan to start using public health information, including foodborne illness data, to evaluate the effectiveness of the new organization. Great Britain has published a commitment to develop performance measures by the time the new agency begins operations. Ireland plans to introduce a system of monitoring its service contracts but has not yet determined when it would evaluate the Authority. Officials in the four countries identified several lessons that can be learned from their consolidation experiences. One of the most important is developing a consensus on the need to reform the food safety system. Other lessons learned include the importance of (1) strong leadership; (2) dedicated start-up groups; (3) additional start-up funding; (4) organizational flexibility; (5) personnel integration strategies; (6) open decision-making; and (7) evaluation criteria. These lessons are discussed below. Consensus on the Need for Change. Officials believe that achieving consolidation requires strong support among stakeholders. This includes agreement not only on the need for a new system but also on its scope and configuration. Each of the four countries began with a highly fragmented and decentralized system. While food safety officials in these countries believe that these decentralized approaches were less than optimal, some stakeholders hesitated to embrace the new consolidated agencies. In Great Britain and Ireland, the health and economic threats posed by outbreaks of foodborne illnesses served as strong incentives for change. 
Although Ireland has already begun functioning under its new food safety system, as of January 1999, Great Britain had not yet passed enabling legislation. Food safety officials anticipate parliamentary action during the current session. The delay in Great Britain is occurring not because consensus is lacking about the need for consolidation but because (1) it has been difficult for stakeholders to arrive at a consensus on how to fund the new system and (2) Parliament has been occupied with a full slate of other pressing matters, such as the reform of the House of Lords. Although the government favors imposing some forms of user fees on the food industry to help pay for the new system, industry and some consumer groups generally oppose such fees. Industry groups oppose them because of the additional costs they would add to production. The consumer groups with which we met oppose user fees because they fear such fees could lead to dependence on the food industry for funding and to a conflict of interest within the new food standards agency. In Denmark and Canada, concerns about program effectiveness and budgetary savings drove the changes. In Denmark, food industry and consumer groups requested the government to reorganize the food safety system. In a letter to the Prime Minister, these groups called for consolidating the food safety system to improve the effectiveness of inspections. They believed that an efficient food safety system would help Denmark maintain its reputation for high-quality, safe foods—especially for export markets. In Canada, officials said that initial support for a consolidated food safety approach was considerable, but food safety inspector unions presented some opposition in the hearings before Parliament. However, even this opposition faded once the new inspection agency became operational and its advantages became apparent. Strong leadership. 
Two of the four countries we visited relied upon strong leadership to get their new agencies started and to overcome initial bureaucratic opposition to change. In Ireland, a director was appointed early in the process to oversee the establishment of the Food Safety Authority of Ireland. The director has been able to work effectively with consumers, industry representatives, and government officials to establish the new agency’s agenda. The Authority gained credibility and support with the early appointment of its director, a well-known and respected medical doctor and veterinarian. Even before the new agency became fully operational, its director became Ireland’s spokesperson on food safety issues. In speaking out on these issues, the new director was viewed as advocating consumer protection while being fair to industry. In Denmark, the Permanent Secretary of the new ministry led the reorganization of its food safety system. As of January 1999, Great Britain had not yet had a single individual in charge of the proposed new agency. Canadian officials noted that the Canadian Food Inspection Agency’s first president was not appointed until late 1996, after many transition decisions had already been made. However, they also pointed out that each of the Canadian ministers involved in the transition provided strong leadership and support to the start-up group. Dedicated start-up groups. In Canada and Ireland, dedicated start-up groups helped ensure that the new agencies began operations in a timely fashion. For example, Canada recruited seven key officials from agencies that would be affected by the consolidation to lay the groundwork for the food inspection agency. Throughout the period leading up to the creation of Canada’s food inspection agency, the affected departments made a commitment to free these key officials from day-to-day operations so they could focus on creating the new agency. 
In addition, these key officials obtained the financial and human resources they needed because they had management’s full support. In Great Britain, the Ministry of Agriculture, Fisheries and Food and the Department of Health established the Joint Food Safety and Standards Group, which brought together those parts of the two agencies likely to form the core of a consolidated food safety agency. As of January 1999, Great Britain had resolved many of the major issues regarding the creation of the Food Standards Agency, except the funding issue described earlier. Denmark did not rely on a start-up group because it consolidated food safety responsibilities in phases. Additional start-up funding. Funding for start-up activities in the three countries that have consolidated their food safety activities was handled differently, but all three found that they needed additional funds. For example, these countries had to operate dual systems (while the new systems were being brought on line, the old system had to continue to operate); purchase new equipment and office space; and standardize procedures from the different agencies involved in the consolidation. Irish officials told us that one of the keys to their early success has been having adequate start-up funds available. By contrast, Canadian officials told us that they anticipated from the outset that they would have to request more start-up funds than the amount originally allocated. Indeed, they did request additional assistance from the Parliament. For example, the new inspection agency became legally responsible for its own staffing system on April 1, 1998, a year after the agency officially opened its doors. According to Canadian officials, the delay in creating the new personnel system occurred, at least in part, because they had underestimated the funding and expertise required to develop and implement such a system. In Denmark, officials also noted the importance of additional funding for consolidation. 
They said they could more effectively manage additional start-up funds by phasing in food safety consolidation over several years. Organizational flexibility. Food safety officials in all four countries said that the new agency should have sufficient organizational flexibility to shift resources to the areas of greatest risk. For example, the new Canadian and Danish agencies can move resources from one area to another, such as moving inspectors when risk assessments indicate such movements are needed. In Ireland, officials said that the service contract arrangement may impede the agencies’ ability to move resources to respond to new or increasing risks. As such, Irish food safety officials told us that they plan to evaluate the service contract arrangement in about 3 years. If, at the end of that time, the Authority believes that the service contract system does not adequately ensure the safety of the food supply, it would then move to control all food safety resources. Currently, within its service agreement context, the Authority can only request that other agencies shift their resource allocations. Personnel integration strategies. Integrating all of the new agency’s personnel into a new organizational culture is important to ensuring its success, according to officials in two of the four countries we visited. For example, Canada recognized the need to integrate its inspection staff and to develop a new and distinct organizational culture and identity. As a result, the new agency’s management devoted considerable time to obtain staff input on the agency’s core mission and values. These consultations continue today to assist managers in charting the agency’s future. In Denmark, the new agency was formed from three separate ministries—agriculture, health, and fisheries—each with its own culture and procedures and with staff located in different offices.
In order to successfully blend these disparate groups, Denmark adopted an incremental approach in which existing agencies were consolidated over a 4-year period ending in 1999. According to Danish officials, this strategy for integrating personnel has made the new food safety agency much more cohesive, and therefore more effective, than their previous system. Open decision-making. Openness in the new agencies’ decisions and decision-making process is essential in order to maintain consensus and public confidence, according to officials in three of the four countries visited. To achieve openness, the new agencies will bring consumer protection groups into the decision-making process and will publicize food safety concerns. This approach is significantly different from previous government practices. Historically, in Great Britain and Ireland, the public was not always informed about the bases for food safety decisions and the processes by which these decisions were reached. Denmark has a similar history regarding decisions on meat products. According to officials in these countries, this lack of openness has fueled public cynicism and mistrust, especially during outbreaks of foodborne illnesses. For example, during the BSE outbreak in Great Britain, consumer groups believed that the Ministry of Agriculture, Fisheries and Food withheld information from the public regarding the extent of the disease among the nation’s cattle herds as well as information on the severity of the BSE threat to humans. All three European countries have made commitments to have open and transparent decision-making processes in their new food safety agencies. Evaluation criteria. The ability to evaluate the new consolidated system should be built in as early as possible in a new agency’s development. Officials in three of the four countries believe that a new agency’s goals and the criteria for evaluating the agency’s progress towards these goals should be defined. 
Some of the criteria suggested for measuring the effectiveness of a new agency include (1) downward trends in the incidence of foodborne illnesses, (2) increases in the level of confidence the public has in the new agency, and (3) a reduction in bacterial levels on food products, such as meat and poultry. However, no single criterion should be relied upon exclusively to measure the effectiveness of a new agency’s approaches; rather, a combination of measures should be used. In evaluating the Canadian Food Inspection Agency’s first year of operations, the Canadian Auditor General found that the agency lacked specific performance expectations and therefore concluded that it was difficult to determine what the Agency was trying to achieve. As discussed earlier, both Canada and Denmark plan to begin developing criteria for and measuring the effectiveness of their consolidation efforts in 1999. We provided a draft of this report to the food safety agencies of Canada, Denmark, Great Britain, and Ireland. Overall, officials for these agencies commented that the draft report was accurate and useful. They also made a number of technical suggestions, which we incorporated as appropriate. We conducted our review from May 1998 through March 1999 in accordance with generally accepted government auditing standards. To identify foreign countries that could be changing their food safety responsibilities, we contacted the embassies of 16 foreign countries that officials in the Food and Drug Administration and the U.S. Department of Agriculture’s Food Safety and Inspection Service and Foreign Agricultural Service identified as possibly making changes in their food safety structures.
To (1) determine the reasons for and approaches taken to make changes; the costs and savings, if any, associated with the organizational change; and efforts to assess the effectiveness of the new systems, and (2) identify the lessons that the United States might learn from these countries’ experiences, we visited four countries that were making such changes—Canada, Denmark, Great Britain, and Ireland. We interviewed food safety officials as well as government officials in the ministries of health, agriculture, and treasury. We also interviewed representatives from the countries’ food industry, consumer groups, farmers, and government employee unions. Additionally, we reviewed each nation’s laws and regulations governing food safety, as well as documents concerning the consolidation of food safety responsibilities. See appendix V for a list of the organizations we met with in each country. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time we will send copies to congressional committees with jurisdiction over food safety issues; the Embassies of Canada, Denmark, Great Britain, and Ireland; Dan Glickman, the Secretary of Agriculture; Donna E. Shalala, the Secretary of Health and Human Services; and other interested parties. We will also make copies available to others on request. If you have any questions about this report, please contact me at (202) 512-5138. Major contributors to this report are listed in appendix VI. Canada decided in 1996 to consolidate its food safety inspection functions. The Canadian Food Inspection Agency began operations in April 1997. Canada consolidated its food inspection activities largely in response to budgetary pressures and calls from government and industry to operate more efficiently and effectively.
Prior to the creation of the new agency, three separate departments performed activities related to food safety, which led to duplication and overlap in many areas. According to a 1998 report by the Canadian Auditor General, in the early 1990s, the government reviewed programs to determine more efficient and effective approaches to delivering government service. These reviews were conducted in large part because of serious deficit and debt problems, as well as changing public perceptions and expectations about the role and performance of government. Following these reviews, the government initiated changes in at least four areas, including food inspection, and began to introduce alternative ways of providing services. All of the Canadian food safety stakeholders with whom we met agreed that this emphasis on alternative ways of doing business helped address budget problems, improved program efficiency and effectiveness, reduced duplication and overlap, and helped create the right environment in which to consolidate responsibilities for food safety inspections. Prior to the creation of the new Canadian Food Inspection Agency, three departments—Agriculture and Agri-Food Canada, Health Canada, and the Department of Fisheries and Oceans—provided inspection and related services for food safety, animal and plant health, and agricultural inputs, that is, animal feed, seed, and fertilizer. Generally, Health Canada was responsible for ensuring the health and safety of all food in Canada. Health Canada typically evaluated and set standards for food safety, managed crises caused by outbreaks of foodborne illnesses, issued recalls, conducted domestic product inspections, investigated consumer complaints, and audited Agriculture and Agri-Food Canada’s and the Department of Fisheries and Oceans’ efforts to ensure compliance with food safety standards.
Agriculture and Agri-Food Canada and the Department of Fisheries had health and safety responsibilities for the food products under their jurisdiction. For example, Agriculture set meat and poultry standards for international trade and domestic commerce; registered feed, seed, and fertilizer products; inspected imported and domestic products, such as meat, poultry, dairy, fruits, and vegetables; and reviewed the labeling and processing of products. Fisheries registered seafood establishments, which traded interprovincially and internationally (87 percent of fish products are exported), as well as these establishments’ suppliers. Fisheries also inspected exports and vessels as well as all mollusks and shellfish. Agriculture and Fisheries also promoted the use of the food products whose safety they regulated. Table I.1 shows the responsibilities under Canada’s previously decentralized food inspection system. The new system consolidates food inspection activities into one agency, thus integrating the delivery of inspection and quarantine services previously provided by Agriculture and Agri-Food Canada, Health Canada, and the Department of Fisheries and Oceans. The new inspection agency provides all inspection services related to food safety, economic fraud, trade-related requirements, and animal and plant health programs. Its primary responsibility is to enforce standards pertaining to food safety and animal and plant health. To accomplish this task, it, among other things, registers processing plants, inspects domestic and imported foods, certifies exports, and quarantines selected imported food products. The new agency’s role also includes identifying and evaluating risk management options; conducting assessments for animal and plant health; setting standards for trade and commerce; developing risk-based inspection systems; investigating outbreaks of foodborne illness; conducting enforcement actions; and coordinating emergency responses.
The responsibilities for setting food safety standards, risk assessment, analytical testing, research, and audit have been reinforced but remain with Health Canada. Table I.2 shows each organization’s new responsibilities in the current system. (Table I.2: Canada’s Food Inspection System After Consolidation.) The Canadian Food Inspection Agency is a departmental corporation under the Financial Administration Act. The new agency is headed by a President, who reports to the Minister of Agriculture and Agri-Food. The minister, in turn, reports to Parliament on food safety inspection activities. The Minister of Health reports to Parliament on setting human food safety standards and policies and assessing the effectiveness of the new agency’s activities. The new agency, although supported largely through general tax revenues, has the power to raise and retain funds from its activities. For fiscal year 1998, the new agency had a budget of about $355 million in Canadian dollars ($238 million in U.S. dollars), which included about $25 million in start-up funds and about $330 million ($221 million in U.S. dollars) in appropriations. Canadian officials expect the food safety agency’s budgets to decline over the next 3 years, from about $330 million this year to about $311 million ($208 million in U.S. dollars) in fiscal year 2000, about $304 million ($204 million in U.S. dollars) in fiscal year 2001, and about $299 million ($200 million in U.S. dollars) in fiscal year 2002. While most government departments are funded annually through parliamentary appropriations, the new agency has the authority to spend its annual appropriation over 24 months. According to officials of the new agency, this more flexible funding authority should allow the new agency to access funds over an extended period, providing for unplanned cash flows or other unforeseen expenses.
Unlike most other federal departments, the new agency is allowed to raise a portion of its annual budget by assessing user fees. In 1998, the Canadian Auditor General estimated that in its first year of operation, the new agency raised about 12 percent of its budget through user fees. With respect to personnel, the agency’s enabling legislation created a management and accountability framework that provides the new agency with some flexibility to replace traditional departmental approaches. For example, the new agency received “separate employer status.” That is, it has been delegated the authority to establish its own human resource management system, negotiate collective bargaining agreements with unions, and establish terms and conditions of employment. (Traditionally such authority is the responsibility of the Treasury Board—roughly equivalent to a combination of the U.S. Department of the Treasury, the Office of Personnel Management, and the Office of Management and Budget). Thus, the new agency can work with unions to create a more flexible work environment, such as changing inspectors’ duty hours or work-site assignments. In its first year of operation, the agency reduced the number of union bargaining units from 19 to 4, thereby creating a more streamlined and efficient process. Under the new system, existing departments provided the full-time equivalent of 4,500 staff years to the new agency. Of the total, Agriculture and Agri-Food Canada contributed over 86 percent; Health Canada, just over 3 percent; and the Department of Fisheries and Oceans, about 9 percent. The new agency operates programs in all 10 provinces and the Canadian territories. It has about 4,200 regional staff; 185 field offices; 408 third-party establishments, such as slaughterhouses; and 22 laboratories and research facilities. It will have a regional structure with four centers of operation and 18 regions. 
Denmark phased in its operations over about 4 years, completing the consolidation in 1999. Its food safety system is housed in the new Ministry of Food, Agriculture, and Fisheries. Denmark reorganized its food safety responsibilities to address problems in the coordination and integration of services. Prior to 1995, at the beginning of the consolidation process, three Danish ministries shared responsibilities for implementing food safety laws. According to the Permanent Secretary of the new ministry, this fragmented approach resulted in extensive overlaps of responsibilities in some areas and gaps in coverage in other areas; inconsistent food safety inspections; and inefficient utilization of food safety resources. The chairman of the Danish National Academy of Sciences said that in 1995 the Academy reported that Denmark’s food safety system needed to be reorganized to improve its efficiency. The goal of the proposed reorganization was to simplify food safety legislation, administration, and control, with the belief that such reforms would lead to a more efficient and effective food safety system, and provide assurances that the high quality of Danish food products would continue. The Danish Academy recommended that a new consolidated agency include all of the activities related to food safety and adopt a consumer protection orientation. The Academy also noted that by consolidating food safety activities, the government could (1) take advantage of new risk-based inspection schemes, such as the Hazard Analysis and Critical Control Point system; (2) move resources to areas presenting greater risk; (3) improve international and European Union interactions; and (4) improve the uniformity and consistency of local inspections. Finally, the Academy’s report concluded that the new agency should be based in the agriculture and fisheries ministry.
In a May 1996 letter to the Prime Minister, representatives of Denmark’s consumers, farmers, and food industries also requested that the government reorganize the food safety system. In their letter, these stakeholders endorsed the concept of a consolidated food safety system to improve food inspections. Before consolidation, the three Danish ministries shared food safety responsibilities; each agency had its own headquarters and field staff. The Ministry of Health set food safety standards for local inspectors who inspected food processing plants, warehouses, and local retail stores. The Ministry of Agriculture was responsible for, among other things, inspecting meat and poultry processing plants. The Ministry of Fisheries was responsible for the safety of all fish and seafood, including fishing vessels and processing plants. Figure II.1 shows Denmark’s food safety system prior to consolidation. In 1995, the Danish government combined the Ministry of Agriculture and the Ministry of Fisheries and their respective responsibilities into a single ministry—the Ministry of Agriculture and Fisheries. In December 1996, the Danish government took a second step by moving the food safety activities of the Ministry of Health into the Ministry of Agriculture and Fisheries, and renamed the resulting organization the Ministry for Food, Agriculture, and Fisheries. The goal of this reorganization was to enhance the safety of food from its origins in the soil or sea to the table of the consumers. Danish food safety officials believe that with this approach there is a clear advantage for all sectors, including consumers, retailers, processors, farmers, and fishermen. In July 1997, under the Ministry of Food, Agriculture, and Fisheries, the Danish government organized food safety into three subunits—the Danish Veterinary and Food Administration, the Danish Plant Directorate, and the Danish Directorate for Fisheries. 
The Danish Veterinary and Food Administration is responsible for (1) ensuring that consumers have healthy food, including meats as well as fruits and vegetables; (2) protecting consumers against misinformation; (3) monitoring and controlling animal diseases that can be transferred to humans; (4) inspecting meat at all the processing plants; (5) ensuring the safety and quality of fish imports and exports; and (6) coordinating the activities of other food safety agencies. The Administration is also the controlling authority for veterinarians, animal medicines, and compliance with animal protection rules. As of January 1999, the Administration was continuing the last phase of the consolidation by reorganizing the local and district offices. The Danish Plant Directorate’s responsibilities include the quality of vegetable products, plant health, environmental regulations for agricultural production, and farmers’ subsidies. It inspects seeds and cereals, feed and fertilizers, fruit and vegetables, and other plant and forestry seeds. It uses sampling and laboratory tests as well as farm visits to inspect processing plants and farms. The Danish Directorate for Fisheries is responsible for, among other things, inspections of fresh waters and coastal areas, including fish farms. Figure II.2 shows Denmark’s food safety system after consolidation. The total budget for the Ministry of Food, Agriculture, and Fisheries for calendar year 1999 is about 13.5 billion kroner ($2 billion in U.S. dollars). This total represents around 4.75 billion kroner in appropriations and 8.7 billion kroner in European Union subsidies. The Ministry also received about 120 million kroner for start-up costs. Altogether, the Ministry of Food, Agriculture, and Fisheries has 4,952 staff. The Ministry has a central administrative staff of 195.
The Danish Veterinary and Food Administration has 1,400 employees—435 in central offices, 795 at slaughterhouses, and 170 in laboratories, border inspection posts, and in the food safety units. The Plant Directorate has a staff of 510 located throughout the country. The Fisheries Directorate has a staff of 325—75 in central offices, 150 in land-based inspection offices, and 100 on vessels. The remaining 2,522 staff are engaged in various agricultural promotion programs and research activities. The final phase of the consolidation will bring about 520 employees from local food inspection units into the Veterinary and Food Administration. Danish food safety officials stated that they do not anticipate any savings in costs or personnel under the new organization. However, they believe that food safety inspections will be more consistent, resulting in a more efficient and effective food safety system. As of January 1999, the Food Standards Agency had not been formally established, but British food safety officials and other stakeholders said they remained committed to its creation. According to the Deputy Head of the Joint Food Safety and Standards Group, a draft bill to create the new agency was published for comment on January 27, 1999, and officials expect the Parliament to take action during the current legislative session. Loss of public confidence in Great Britain’s food safety system and acknowledged weaknesses that contributed to serious outbreaks of foodborne illnesses have made the creation of a new food safety system a government priority. Public confidence in the food safety system has eroded over the past 10 years in the face of several serious outbreaks of foodborne illnesses. Surveys of the British public in the mid-1990s showed that concern focused on four areas: the microbial safety of food, the chemical safety of food, the safety of genetically modified organisms and novel foods and processes, and the nutritional quality of the diet. 
Many public interest groups and the chairs of expert scientific committees, as well as companies in the food processing, producing, and retailing fields, believe that the current system has real failings. According to these experts and government studies, Great Britain’s food safety system is fragmented and lacks coordination among the different organizations involved in setting food policy and in monitoring and controlling food safety. That is, there are considerable overlaps and gaps between the Ministry of Agriculture, Fisheries and Food; the Department of Health; and the other departments dealing with food safety issues. Other concerns identified included (1) too many institutional barriers to promoting food safety at different points in the food chain; (2) a lack of a clear strategy and structure for monitoring the surveillance of chemical food safety; and (3) inconsistent enforcement of food safety laws throughout Great Britain. The fact that the Ministry of Agriculture, Fisheries and Food promotes the economic interests of the food industry while being charged with protecting public health was also identified as a serious shortcoming of the system. Inevitably, there were perceived conflicts between concerns for food safety and the economic interests of some industry sectors. These conflicts have been handled within the Ministry of Agriculture, Fisheries and Food and are often perceived to be conducted in semi-secrecy. Many food safety decisions have been met with widespread skepticism, if not suspicion, because of a perceived conflict of interest and the relative secrecy of deliberations. In September 1997, the Ministry of Agriculture, Fisheries and Food and the Department of Health set up the Joint Food Safety and Standards Group. This group brought together those parts of the two departments that are likely to form the operational core of a consolidated food safety agency.
In January 1998, the government proposed consolidating food safety responsibilities in a new agency known as the Food Standards Agency. Despite widespread and continuing support for the proposed agency, Parliament did not enact enabling legislation for it during its 1997-98 session, because of, among other things, concerns over how to fund the new agency and difficulty in obtaining time on the legislative calendar. Nevertheless, the government has taken other steps to strengthen the handling of food safety issues, such as making a greater effort to ensure that information about food safety and human health is presented more clearly and more comprehensively to the public. The framework for most food legislation in Great Britain derives from the 1990 Food Safety Act, which brought together and updated all food legislation into one comprehensive document and implemented some European Union legislative requirements. In Great Britain, responsibility for food standards and food safety is divided among several national government departments, the environmental health and trading standards departments of local authorities, and a number of other bodies. Figure III.1 displays the key features of the current food safety structure in Great Britain. Prior to the creation of the Joint Food Safety and Standards Group in September 1997, the Ministry of Agriculture, Fisheries and Food was the lead department on food standards, the chemical safety of food, labeling, and food technology. Within the Ministry, various subunits at headquarters and in regional offices were responsible for specific aspects of food safety. The Veterinary Laboratories Agency provided the Ministry with advice on how animal health can affect human health. The Pesticides Safety Directorate was responsible for evaluating and approving the use of pesticides and implementing post-approval controls. The Central Science Laboratory provided a wide range of scientific services and scientific support to policy work. 
The Meat Hygiene Service, established in 1995, provided meat inspection services to licensed meat premises and enforced hygiene and welfare laws in slaughterhouses. The Veterinary Medicines Directorate evaluated and approved veterinary medicines and maintained surveillance of, and monitored suspected adverse reactions to, residues in meat and animal products. Furthermore, the Department of Health took the lead on issues of food hygiene, microbiological food safety, and nutrition. A number of subunits handled specific aspects of the Department’s responsibilities. For example, the Public Health Laboratory Service, in partnership with its regional offices and local environmental health departments, was responsible for most laboratory analysis concerning the microbiological safety of food. The Department of Health also had some enforcement responsibilities, although enforcement is predominantly a local function. Most of the food safety activities previously performed by the Ministry of Agriculture, Fisheries and Food and the Department of Health are now being carried out by the joint group. District and county councils are responsible for enforcing most food safety laws and regulations. Port health authorities and local environmental health departments are responsible for enforcing food sanitation laws. The Trading Standards Departments—usually within County Councils—enforce food standards and labeling of food nutritional content. Coordination of local authorities’ enforcement of food issues is the responsibility of the Local Authorities Coordination Body on Food and Trading Standards, which provides advice and guidance for enforcement authorities and advises the central government on enforcement issues. This body also acts as the nation’s liaison for transborder food safety problems in the European Union.
Under the proposed new system, the new Food Standards Agency will assume most of Great Britain’s food safety responsibilities and attempt to address past weaknesses. The Agency will be accountable to Parliament through the Secretary of State for Health. According to the government’s proposal, the new agency will be responsible for (1) formulating policy and advising the government on the need for legislation on all aspects of food safety and standards, as well as on certain aspects of nutrition; (2) providing information and educational material for the public on food matters; (3) working closely with government departments to protect the public, particularly in areas such as nutrition and farming practices; and (4) commissioning research and surveillance across the full range of its activities. As envisioned, the Food Standards Agency will assume the responsibilities of the Ministry of Agriculture, Fisheries and Food and the Department of Health in ensuring the safety of the whole food chain, from “farm to table.” The new Agency will have key roles at the farm level, with powers to prevent contaminated food from entering the food chain and to control animal diseases that could be passed through the food chain. At the processing level, the new agency will also have considerable authority because it will take over the Meat Hygiene Service and thus be responsible for inspecting and licensing fresh meat plants and for implementing measures to prevent the transmission of Bovine Spongiform Encephalopathy (BSE). At the consumer level, the Agency will have responsibility for all matters concerning food additives, chemical contaminants, and the labeling of food to ensure that consumers are not misled with regard to its content. 
Although the Food Standards Agency will not take over the existing enforcement responsibilities of local authorities and local outbreaks will continue to be managed locally, it will set standards for enforcement and will have the power to take action directly to protect the public. In addition, it will take a leading role in coordinating responses with central and local authorities in the event of a national food emergency, such as the recent BSE crisis. The Deputy Group Head of the Joint Group estimated the start-up cost for the new agency will be about 30 million pounds ($49 million in U.S. dollars) spread over a 3-year period, and the operating cost for the new agency will be about 120 million pounds ($196 million in U.S. dollars) annually. The local governments will spend 130 million pounds ($212 million in U.S. dollars) annually on food safety. While staffing levels have not yet been officially determined, the officials of the joint group estimate that about 500 staff will be employed at headquarters and about 1,700 in the Meat Hygiene Service. A governing body composed of a chairperson and no more than 12 independent members will run the new agency. Governing members are to be appointed on the basis of their professional reputation and expertise, bringing a broad balance of relevant skills, experience, and independence to the new agency. In addition, members are to act collectively in the public interest, rather than to represent any particular sector or interest group. Current plans call for a majority of the members to be drawn from public interest backgrounds. The governing body will be empowered to publish any of the advice it gives the government. In July 1998, the Irish government enacted legislation creating the Food Safety Authority of Ireland. The Authority assumed all responsibility for food safety in January 1999.
For many years Irish consumers have been warned about the dangers of using unpasteurized milk, the need for proper hygiene in the home, and the necessity of proper cooking to ensure food safety. However, a succession of high-profile outbreaks of foodborne illnesses throughout the world, such as the Bovine Spongiform Encephalopathy outbreak in Great Britain, shook consumer confidence in the safety of food and in the ability of regulatory agencies to protect the public. In 1996, 21 elderly people died in a Scottish nursing home as a result of eating meat tainted with the E. coli O157:H7 bacterium. Also in 1996, a European survey indicated that antibiotic residues in Irish pork were among the highest in Europe. These incidents undermined consumers’ confidence in the Irish food industry and in the Irish regulatory agencies; the public began to respond to vague reassurances with skepticism. According to Irish officials, these outbreaks also helped to highlight the difficulties that the Department of Agriculture and Food faced in trying to carry out its dual mission of protecting consumers and promoting the food industry. Consumers seemed to regard the incidence of foodborne illnesses as equally the fault of the government and the food industry. Avoiding serious outbreaks of foodborne illness and maintaining a strong food safety system are extremely important for Ireland’s economy for two reasons. First, according to officials in the Irish Department of Agriculture and Food, roughly 90 percent of the country’s food is produced for export. For example, Agriculture officials estimated that roughly 9 out of every 10 cows—worth about $3.7 billion—are exported annually. Agriculture officials feared that any serious outbreak of a foodborne illness could effectively close many export markets, thereby depriving Ireland of foreign trade.
Furthermore, Agriculture officials were concerned that Irish exports could decline, even without a major outbreak, if trading partners lost confidence in the Irish food safety system and thus in the safety of Irish food. Second, Ireland’s economy also depends heavily on tourism. To the extent that outbreaks of foodborne illnesses, or the threat of outbreaks, dampen tourism, serious economic harm could follow. By early 1997, the Irish government believed that addressing the country’s food safety concerns could wait no longer. In addition to food-related illnesses, other issues, such as the availability of genetically modified food and food irradiation, caused concern among much of the Irish population. Irish food safety officials believed that any threat to the food supply, whether real or potential, required a response that was sufficient to calm domestic and international markets. The Irish government established a start-up group—the Food Safety Authority—on January 1, 1998, and enacted legislation to create the Authority on July 2, 1998. The Authority assumed full control of the food safety system on January 1, 1999. Prior to the creation of the Authority, food safety was fragmented across more than 50 entities, including 6 major government departments, 33 local authorities, and 8 regional health boards. The Department of Agriculture and Food inspects farms, slaughterhouses, and deboning and trimming halls for compliance with food processing regulations. Local and county governments, as well as the Ministers of Environment; Public Enterprise; Marine; and Trade, Enterprise, and Employment, have various other food safety responsibilities. For example, Ireland has eight Health Boards that have local food safety authority, such as inspecting retail and catering outlets as well as butcher shops and some food processing plants.
According to Irish food safety officials, in 1996, the government established an interdepartmental committee to advise Parliament on how the various food safety agencies could be best coordinated. In early 1997, this committee recommended establishing the Food Safety Authority of Ireland as a “regulator of regulators.” That is, the responsibility for food safety would remain with existing agencies, but the Authority would audit these agencies and have a voice in setting and maintaining standards as well as in the promotion of good practices. However, a new government—elected in mid-1997—came to office believing that the Authority should be directly accountable for all food safety functions. The new government envisioned an Authority that would take over all functions related to food safety and food hygiene from existing agencies, providing consumers with protection from illnesses related to unsafe food. The Authority was to be independent and science-based and provide for full “farm-to-fork” traceability. Original plans for creating the Authority included transferring all relevant staff to the new agency. However, personnel issues precluded the wholesale transfer of staff to the new Authority. Roughly 2,000 staff, spread across the 50-plus agencies, deliver food safety services throughout the country. It is common for such staff to have other duties, in addition to food safety responsibilities. Officials found it impossible under these circumstances to transfer “food safety” personnel to the Authority without disrupting other, sometimes unrelated, programs. The solution was to have the Authority and existing agencies enter into contractual agreements, called “service contracts,” which were to specify food safety activities. In October 1998, the Authority began negotiating the terms and conditions of these contracts with the existing agencies. According to Authority officials, the first contracts took effect in March 1999.
The contracts include objectives that the Authority wants the agencies to meet, as well as the time frame within which they should be met. Current plans call for existing funding arrangements to be maintained, that is, agencies will continue to receive appropriations from the Parliament. Agencies are then expected to make sufficient resources available to meet service contract obligations. The Authority is to publish the details of the service contracts and introduce a system to monitor agencies’ compliance. If agencies do not satisfactorily perform their agreed-upon responsibilities, the Authority is to report to the Minister for Health and Children, which will arrange for such reports to be sent to Parliament. Although the Authority will also have an enforcement role, its main function will be to foster, through education and promotion, a food safety culture at all stages of the food chain, from production to final use by the consumer. One of the Authority’s key objectives is to bring about acceptance of the notion that the primary responsibility for food safety rests with the food industry and consumers, not the government. The Authority’s enforcement responsibilities are also to be carried out through service contracts with the departments of Agriculture and Health and include the inspection, approval, licensing, and registration of food premises and equipment and laboratory analysis. According to Authority officials, for fiscal year 1999, the Food Safety Authority of Ireland has a budget of about 6.5 million Irish pounds ($9 million in U.S. dollars). Of this amount about 1.5 million pounds ($2 million in U.S. dollars) was for start-up operations, and another 5.0 million pounds ($7 million in U.S. dollars) is for the coordination of inspection services and new educational programs. The Authority’s organizational structure includes the Board, a Scientific Committee, a Consultative Council, and a Chief Executive. 
The 10-member Board provides strategic direction for the Authority, acting as a forum in which the work of its various structural elements is harmonized. To help ensure that the Authority maintains a consumer protection focus, food industry representatives are precluded from serving on the Board. The Scientific Committee has 15 members, all appointed by the Minister for Health and Children, and all with eminent scientific qualifications and experience to ensure the broadest possible range of expertise. As of January 1999, the Committee had the assistance of 85 scientists involved in 6 subcommittees and 10 working groups. The role of the Scientific Committee is to assist and advise the Board on matters pertaining to scientific or technical questions, food inspection and nutrition. The Board also receives advice from the Consultative Council—a body that includes consumers as well as food industry representatives. The Council has 24 members, 12 appointed by the Department of Health and Children and 12 by the Board. The Chief Executive reports to the Board and is ultimately responsible for the implementation of policies and the achievement of the Authority’s goals. According to Authority officials, there will be 60 staff to coordinate about 2,000 staff performing the food safety inspections and other activities through the service agreements. The Authority is divided into four divisions. The Technical and Scientific Division develops policy and sets standards, priorities, quality levels, and procedures for technical and scientific issues. It serves the research needs of the Scientific Committee and its subcommittees. In addition, it collects and assesses surveillance data on foodborne illnesses. The Operations Division focuses on enforcement by overseeing the service contracts’ implementation. The Operations Division coordinates, controls, and harmonizes all activities under these contracts. 
It also carries out an audit program throughout the food chain to help ensure compliance with the Authority’s decisions on standards and processes. The Communications, Education, and Information Division develops and implements policy on communications, education, and information for consumers, enforcement officers, public health professionals, and others in the food chain. The Corporate Services Division develops and implements accounting, human resources, information technology, and legal services.
| Pursuant to a congressional request, GAO reviewed the experiences of foreign countries that are consolidating their food safety responsibilities, focusing on the: (1) reasons for and approaches taken to consolidation, the costs and savings, if any, associated with consolidation, and efforts to assess the effectiveness of the revised food safety systems; and (2) lessons that the United States might learn from these countries' experiences in consolidating their food safety functions. GAO noted that: (1) the reasons the four countries have consolidated, or are in the process of consolidating, their organizational responsibilities for food safety activities differed, as did the approaches they took; (2) however, all four countries had similar views regarding the costs and benefits of consolidation and the need to evaluate their consolidation efforts; (3) in deciding to consolidate food safety responsibilities, two of the countries--Great Britain and Ireland--were responding to public concerns about the safety of their food supplies and chose to consolidate responsibilities in the agencies that report to their ministers of health; (4) the other two countries--Canada and Denmark--were more concerned about program effectiveness and cost savings and consolidated activities in agencies that report to their ministers of agriculture, who already control most of the food safety resources; (5) all four countries are incurring short-term start-up costs in establishing their new agencies but are expecting long-term benefits in terms of money saved, more food safety for the money spent, and better assurance of food safety; (6) none of the countries had developed performance measures and data early in the consolidation process to assess the effectiveness of their new systems; (7) foreign officials identified several common lessons from their experiences that they believe could be broadly applicable to any U.S.
consolidation effort; (8) in all four countries, a consensus had to be developed on the need to consolidate food safety responsibilities; (9) certain management initiatives were needed to establish any new agency; (10) adequate funding for start-up costs was also necessary; (11) furthermore, to help ensure the new agencies' early success, critical operational concerns, such as having the flexibility to shift program resources to the highest food safety priorities, establishing a common organizational culture, and ensuring openness in the decisionmaking process, were important factors that had to be addressed; and (12) evaluation criteria and mechanisms need to be established early in the process in order to assess the new agency's performance. |
After the September 11, 2001, terrorist attacks, Congress passed the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act of 2001, which amended and broadened the scope of the Bank Secrecy Act (BSA) to include additional financial industry sectors and a focus on the financing of terrorism. Subsequently, Congress passed the Intelligence Authorization Act for 2004, which established Treasury’s Office of Intelligence and Analysis (OIA). OIA is a member of the Intelligence Community, as defined under Executive Order 12333, as amended. The Intelligence Reform and Terrorism Prevention Act of 2004 identified the Secretary of the Treasury or his or her designee as the lead U.S. government official to the Financial Action Task Force (FATF), to continue to convene an interagency working group on FATF issues. TFI’s mission is to marshal Treasury’s policy, enforcement, regulatory, and intelligence functions in order to safeguard the U.S. financial system from abuse and sever the lines of financial support to international terrorists, WMD proliferators, narcotics traffickers, money launderers, and other threats to U.S. national security. The formation of TFI combined both existing and new units of Treasury. Five key components are included under the umbrella of TFI: Office of Foreign Assets Control (OFAC), formed in 1950, administers and enforces sanctions. Financial Crimes Enforcement Network (FinCEN), formed in 1990, administers and enforces the BSA and serves as the United States’ financial intelligence unit (FIU). Treasury Executive Office for Asset Forfeiture (TEOAF), formed in 1992, administers the Treasury Forfeiture Fund—the receipt account for the deposit of non-tax forfeitures made by member agencies. Office of Terrorist Financing and Financial Crimes (TFFC), established in 2004, serves as TFI’s policy and outreach arm. 
OIA, also established in 2004, performs Treasury’s intelligence functions, integrating Treasury into the larger Intelligence Community, and providing intelligence support to Treasury leadership. FinCEN is a Treasury bureau; the other four components are offices within TFI, which is a part of Treasury’s structure of departmental offices. Figure 1 shows TFI’s current organizational structure. To achieve its mission, TFI components often work with the following: Other U.S. government agencies. For instance, OFAC works with State and Justice, among others, to designate individuals and organizations under 21 separate sanctions programs. TFFC also works with State, Justice, and other agencies in developing and advocating a U.S. position in international forums related to money laundering and illicit financing. In addition, TEOAF works with State and Justice to administer sharing of large case forfeiture proceeds with foreign governments, pursuant to international treaties, whose law enforcement personnel cooperated with U.S. federal investigations. Other TFI components. For example, OIA provides information to OFAC to assist in making decisions regarding whether to pursue designations of individuals and organizations. For completed designations, OIA also works with OFAC to declassify intelligence information for public dissemination. Private sector. For example, in its role as the Secretary’s delegated administrator of the BSA, FinCEN regularly interacts with the private sector, including the financial sector. One such mechanism for maintaining formal ties to the private sector is Treasury’s BSA Advisory Group. FinCEN also conducts informal consultations with financial institutions regarding their individual financial intelligence efforts. Foreign governments and international organizations. Treasury heads the U.S. 
delegation to the FATF, an international body that develops and implements multilateral standards relating to anti-money laundering and counterterrorist financing. TFFC leads this effort on behalf of Treasury. Similarly, FinCEN works with foreign governments to develop and strengthen capabilities of their FIUs as well as to respond to requests for assistance from foreign FIUs, which totaled more than 1,000 in fiscal year 2008. As shown in figure 2, the size of TFI’s staff has grown from approximately 500 in fiscal year 2005 to approximately 650 in fiscal year 2008. FinCEN, with 299 full-time equivalents (FTE) in fiscal year 2008, is TFI’s largest component, and OIA gained the most staff—90—from fiscal years 2005 through 2008. As shown in figure 3, TFI’s budget has grown from approximately $110 million in fiscal year 2005 to approximately $140 million in fiscal year 2008. With a budget of approximately $86 million, FinCEN has the largest budget of any TFI component. In addition, OIA’s budget has grown at the greatest rate, from about $9 million in fiscal year 2005 to about $20 million in fiscal year 2008. According to TFI, it undertakes five functions in order to achieve its mission. Officials from TFI and its interagency partners cited strong collaboration with TFI in several areas, but differ about the quality of collaboration regarding U.S. participation in some international forums. According to TFI, it undertakes five functions to safeguard the financial system from illicit use and to combat rogue nations, terrorist supporters, WMD proliferators, money launderers, drug kingpins, and other national security threats. These functions are (1) building international coalitions, (2) analyzing financial intelligence, (3) administering and enforcing the BSA, (4) administering and enforcing sanctions, and (5) administering forfeited funds. TFI employs two primary means to build international coalitions to support U.S. national security interests. 
These are deepening engagement in international forums and improving international partners’ capacity. Deepening engagement in international forums. TFI and other U.S. agencies participate in several international organizations intended to strengthen the international financial system so that it cannot be exploited by criminal networks. Two examples are the FATF and the Egmont Group. TFFC leads the U.S. delegation to the FATF, while FinCEN leads U.S. participation in the Egmont Group. According to TFI officials, U.S. participation in such organizations provides a unique opportunity to engage with international counterparts in the effort to develop international standards and a framework for countries to implement legal regimes that protect the international financial system from abuse. TFI also uses international forums to advance the U.S. agenda in areas such as nonproliferation. For example, according to TFI, it has been working closely with other G-7 countries to determine what steps can be taken to isolate proliferators from the international financial system through multilateral action. For instance, according to TFI officials, they are working with State to encourage the more than 85 countries that participate in the Proliferation Security Initiative to use financial measures to combat proliferation support networks. In addition to playing a leadership role in these organizations and forums, TFI officials report that they are also working to expand these organizations’ membership so as to broaden the reach of international financial standards. For example, as of March 2009, FinCEN was sponsoring 12 countries’ membership in the Egmont Group, including Afghanistan, Saudi Arabia, Pakistan, and Yemen. According to FinCEN officials, the addition of such new members will greatly strengthen FinCEN’s ability to obtain valuable information related to the activities of illicit financial networks. Improving international partners’ capacity. 
As part of TFI, FinCEN has made engagement with foreign FIUs in the detection and deterrence of crime one of its strategic objectives. To accomplish this objective, FinCEN has undertaken a variety of efforts to strengthen the global network of FIUs. For example, according to FinCEN officials, they engage in a variety of cooperative efforts with other FIUs aimed at fostering productive working relationships and best practices. In addition, according to TFI officials, TFI participates in mutual evaluation studies, as part of its participation in the FATF, to identify measures to improve other FATF members’ regulatory regimes related to combating money laundering and terrorist financing. For example, in fiscal year 2008, the FATF performed six mutual evaluations; the United States delegation, led by TFFC, sent representatives to serve as assessors for four of these mutual evaluations. TFI officials cite OIA’s analysis of financial intelligence as a critical part of TFI’s efforts because it underlies TFI’s ability to utilize many of its tools. The first step in disrupting and dismantling illicit financial networks is identifying those networks, according to TFI officials. They said that the creation of OIA was critical to TFI’s ability to effectively identify these illicit financial networks. As a member of the broader intelligence community, OIA performs analysis of intelligence information related to national security threats with a view toward potential action and utilization of tools available to TFI. Staff in other TFI components and TFI management then use this intelligence analysis to draft papers to implement such strategies or actions. In addition, TFI utilizes intelligence analysis to assess the impact of the actions it takes. For example, according to the Under Secretary for TFI, intelligence analysts have assessed the impact of previous financial actions taken to address the national security threat posed by North Korea.
Those assessments were then used to shape the U.S. policy response to the most recent missile and nuclear tests by North Korea. According to TFI officials, FinCEN’s administration of the BSA plays a key role in TFI’s ability to achieve its mission. The BSA includes a variety of reporting and record-keeping requirements that provide useful information to law enforcement and regulatory agencies. For example, pursuant to the BSA, Treasury (FinCEN) requires financial institutions to report suspicious financial activities relevant to a possible violation of law. Such suspicious activity reports (SARs) are then analyzed by FinCEN and made available to the law enforcement and regulatory communities. In 2007, financial institutions filed nearly 1.3 million SARs, which federal, state, and local law enforcement agencies use in their investigations of money laundering, terrorist financing, and other financial crimes. The BSA, as amended by the USA PATRIOT Act, also grants Treasury additional authorities, which are delegated to FinCEN, to combat money laundering and terrorist financing. For example, Section 311 of the USA PATRIOT Act amended the BSA to provide an additional tool to safeguard the U.S. financial system from illicit foreign financial institutions and networks. According to TFI officials, Section 311 is an important and extraordinarily powerful tool, as it authorizes Treasury to find a foreign jurisdiction, foreign financial institution, type of account, or class of transaction as being of “primary money laundering concern.” Such a finding enables Treasury to impose a range of special measures that U.S. financial institutions must take to protect against illicit financing risks posed by the target. These special measures range from enhanced record-keeping and reporting requirements up to prohibiting U.S.
financial institutions from maintaining certain accounts for foreign banks if they involve foreign jurisdictions or institutions found to be of primary money laundering concern. The imposition of economic sanctions has been a long-standing tool for addressing a range of national security threats. OFAC currently maintains primary responsibility for administering more than 20 separate sanctions programs. (See app. II for a list of current U.S. sanctions programs.) These sanctions programs fall into two categories: (1) country-based programs that apply sanctions to an entire country—such as Cuba, Iran, or Sudan—and (2) targeted, list-based programs that address individuals or entities engaged in specific types of activities such as terrorism, WMD proliferation, or narcotics trafficking. For example, according to TFI officials, they use the authorities under the International Emergency Economic Powers Act and Executive Order 13224 to designate those who provide support to terrorists, freezing any assets they have under U.S. jurisdiction and preventing U.S. persons from doing business with them. From fiscal years 2004 through 2008, Treasury designated or supported the designation of more than 1,900 individuals and organizations under various sanctions programs. To help ensure compliance with U.S. sanctions programs, Treasury also has the authority to impose civil penalties on individuals and organizations that violate U.S. sanctions. From 2004 through 2008, OFAC imposed more than 1,500 civil penalties related to violations of its sanctions programs. In total, OFAC assessed nearly $15 million in penalties. According to TEOAF, an important tool in the U.S. fight against money laundering is asset forfeiture. Forfeiture assists in the achievement of TFI’s mission in two ways. First, asset forfeiture strips away the profit from illegal activity, thus making it less attractive.
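The civil-penalty totals cited above imply an average assessment of roughly $10,000 per penalty. The back-of-the-envelope sketch below uses the report's rounded totals; the average itself is inferred arithmetic, not a figure GAO reports.

```python
# Approximate average OFAC civil penalty, 2004-2008.
# Totals are the rounded figures cited in the report ("more than 1,500"
# penalties imposed, "nearly $15 million" assessed); the computed average
# is illustrative only.
total_assessed = 15_000_000  # USD, approximate
penalty_count = 1_500        # approximate
average_penalty = total_assessed / penalty_count
print(f"Average penalty: ${average_penalty:,.0f}")  # roughly $10,000 each
```

Because "more than 1,500" and "nearly $15 million" both round toward the average, the true figure is somewhat below $10,000 per penalty.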
According to TEOAF, in fiscal year 2008 it received more than $500 million in total forfeiture revenue; the majority, after net expenses, came from forfeitures processed by Immigration and Customs Enforcement and the Internal Revenue Service–Criminal Investigation. Second, according to the Director of TEOAF, the revenue derived from such forfeited assets can be used to fund federal law enforcement activities, including initiatives directed at further combating illicit financing networks. For example, in fiscal year 2008, TEOAF provided approximately $1 million in funding to Immigration and Customs Enforcement to provide training to international partners. Specifically, the funding was provided to allow the expansion of existing training activities to assist in combating bulk cash smuggling by terrorist groups and other criminal networks. Collaborating with interagency partners is important to TFI’s ability to perform effectively. Many of the tools TFI utilizes to combat national security threats involve multiple agencies reviewing the proposed action. For example, according to Treasury officials, they consult with officials from State, Justice, and the Department of Homeland Security on decisions to designate individuals or organizations that support terrorism. In addition, other tools, such as advocating actions to strengthen the international financial system through the FATF, benefit from the expertise and input from collaboration with a variety of agencies, including State, Justice, the Securities and Exchange Commission, the Department of Homeland Security, and others. Prior GAO work has identified several practices that can enhance and sustain such interagency collaboration. One such practice is establishing compatible policies, procedures, and other means to operate across agency boundaries. Another practice is developing a mechanism for monitoring, evaluating, and reporting on the results of collaborative efforts. 
Officials at TFI and other agencies said that they generally are satisfied with the quality of interagency collaboration. TFI’s interagency partners report close, collaborative relationships in many situations. For example, State officials told us that they have strong working relationships with officials in almost all TFI components. They highlighted their collaboration with TFI during the designation process and suggested that it is generally effective. These officials commented that if State has information from its embassies abroad that indicates that a specific designation would be particularly damaging to U.S. foreign policy interests, they relay this information to Treasury and discuss alternative approaches. State officials added that the designation process operates effectively, even when agencies may have disagreements over a particular designation, because the National Security Council leads a process to coordinate terrorism designations. It serves as an impartial arbiter that prevents any single agency from exerting too much influence. In addition, Justice officials described a strong working relationship with FinCEN regarding asset forfeiture and money laundering issues. Specifically, they recounted effective communication and information sharing. For example, Justice officials told us that FinCEN has granted Justice access to BSA data, thus allowing Justice to perform its own analyses for law enforcement purposes. Additionally, Justice officials said that FinCEN has helped them utilize its network of international contacts at other countries’ FIUs. However, TFI’s interagency partners have expressed concerns regarding collaboration in other areas. For example, in September 2008, we reported that State and Justice expressed concerns regarding Treasury’s consultations with them when implementing Section 311 of the USA PATRIOT Act. 
In addition, TFI and other agencies’ officials differed about the effectiveness of interagency collaboration for the function of building international coalitions, particularly when participating in the international forums of the FATF and FATF-Style Regional Bodies (FSRBs). On the one hand, TFFC officials suggest that interagency collaboration regarding the FATF and FSRBs has been highly effective over the past 5 years and that Treasury’s ability to effectively lead the U.S. delegation has been greatly strengthened by the participation of a wide variety of regulatory, law enforcement, and other agencies. The Deputy Assistant Secretary for Terrorist Financing and Financial Crimes added that during this time, there have been no major disagreements between agencies regarding the positions the United States should take in such international forums. TFI officials also stated that interagency collaboration runs smoothly and that they were unaware of any significant concerns regarding the quality of interagency collaboration. Officials from State and Justice, however, indicated that the quality of interagency collaboration regarding the FATF and FSRBs has declined substantially over the past 5 years. These officials expressed two types of concerns regarding TFI’s collaboration with other agencies regarding participation in international forums: (1) the exclusion of non-Treasury personnel in key situations and (2) the extent to which TFI makes unilateral decisions regarding the U.S. government position. With regard to TFI’s exclusion of non-Treasury personnel in key situations, TFI and other agencies differ. State and Justice officials cited several examples of situations they believe undermined U.S. effectiveness at combating illicit financing networks. For example, according to State officials, a State official who has taken the necessary training has not been allowed to participate as a member of the U.S. team conducting FATF mutual evaluations.
According to these officials, this results in the exclusion of senior staff with significant experience and expertise that could benefit the evaluation teams. In response, TFFC officials indicated that they have included other agencies in the mutual evaluation process. For example, they indicated that officials from Justice and other agencies participated in at least six mutual evaluations from 2004 through 2009. According to TFI, it encourages and attempts to facilitate such participation by other agency officials who have attended the necessary 1-week training course and whose agencies will pay for their travel to foreign countries to conduct and defend their evaluations. Additionally, Justice officials stated that when TFI allows other agencies to review and comment on U.S. policy proposals related to anti-money laundering and counterterrorist financing, it consistently provides too little time for review. Specifically, Justice officials told us that TFI regularly provides agencies 24 hours to review and provide comments on policy proposals, which may make it impossible for agencies to conduct an appropriate review and effectively excludes them from the process. According to TFI officials, they distribute materials as soon as possible; for FATF materials this occurs within 24 hours of receiving them, though they acknowledge that they often are provided short deadlines by the FATF Secretariat. According to TFI officials, they sometimes request an extension of the deadline or submit the U.S. response late in order to obtain interagency views. With regard to concerns about TFI’s unilateral decision making, TFI and other agencies also differed. State and Justice officials cited a situation related to the U.S. position on how to treat the European Union (as a single entity or as separate countries) for the purposes of cash-smuggling regulations.
According to State and Justice officials, during interagency meetings prior to the FATF working group session at which the issue was to be discussed, a consensus U.S. position was developed. However, State and Justice officials said that at the FATF plenary meetings, Treasury officials advocated a position that was different from the consensus U.S. position agreed to in advance of the meeting. A Treasury official told us that the agency did not deviate from the consensus position agreed to before the meeting. Justice, State, and Treasury officials said that there is no guidance specifying how the interagency process should operate to develop U.S. positions in advance of FATF meetings. Specifically, there is no guidance regarding the process or time frames for circulating or approving U.S. policy statements to be made at international meetings to discuss anti-money laundering and counterterrorist financing issues. In addition, there is no formal mechanism for monitoring, evaluating, or reporting on the results of agencies’ collaborative efforts. According to State and Justice officials, the inconsistent quality of interagency collaboration may undermine some efforts to combat illicit financing networks through international forums. State officials suggested that the exclusion of non-Treasury personnel may mean that expertise available within the U.S. government is not effectively utilized, thus potentially weakening the United States’ ability to influence international partners’ actions. In addition, they suggested that unilateral action by Treasury in international forums may cause confusion among international partners regarding the nature of the U.S. position on key issues. On the basis of comments they received from foreign officials, Justice and State officials concluded that such confusion might weaken the United States’ ability to influence the activities of international partners.
TFFC responded that it has not observed any confusion among its international partners in FATF regarding the U.S. position on key issues. Justice and State officials did not raise similar concerns about FinCEN’s collaboration when participating with it on issues related to the Egmont Group. In contrast, Justice officials expressed some criticisms of more recent collaboration with OFAC on issues such as information sharing. OFAC responded that it has regular contact with Justice with respect to enforcement matters and that the two agencies have an ongoing dialogue regarding information sharing. OFAC also noted that only a small subset of its enforcement cases involve the type of knowing conduct that is appropriate for referral to criminal authorities. While TFI has conducted strategic planning activities at different levels within the organization, TFI as a unit has not fully adopted certain key practices. In particular, TFI has not clearly aligned its resources with its priorities. TFI’s strategic planning documents do not consistently integrate discussion of the resources needed to achieve TFI’s strategic objectives. In addition, TFI’s resource levels for each component cannot be clearly linked to its workload. Also, while some TFI components have taken the initiative to conduct some workforce planning activities, TFI management has not developed a process for conducting comprehensive strategic workforce planning. Our review of TFI’s and its components’ strategic planning documents and discussions with TFI officials showed that TFI has not clearly aligned its resources with its priorities. TFI officials indicated that priorities could be identified in TFI’s strategic plan. TFI identified four relevant strategic plans: one for TFI as a whole and one each for FinCEN, OIA, and TEOAF. Strategic plans are used to communicate what an organization seeks to achieve in the upcoming years, according to Treasury instructions.
The goals and strategies presented in the plan provide a road map for both the organization and its stakeholders. Strategic plans should guide the formulation and execution of the budget as well as other decision making that shapes and guides the organization. These plans are a tool for setting priorities and allocating resources consistent with these priorities, according to Treasury. Our previous work has shown that strategic plans should clearly link goals and objectives to the resources needed to achieve them; such linkage is especially important when agencies submit a strategic plan for each of their major components along with a strategic overview that, under the guidance, is to show the linkages among these plans. Government Performance and Results Act guidance also establishes six key elements of successful strategic plans, and Treasury’s instructions suggest plan formats. However, we found that TFI’s and its components’ strategic plans do not consistently integrate discussion of the resources necessary to achieve TFI objectives. Specifically, we found that FinCEN’s and TEOAF’s strategic plans contain some discussion of the resources needed to achieve their objectives. TFI’s and OIA’s strategic plans do not contain discussion of the resources needed to achieve their objectives. OFAC and TFFC do not currently have strategic plans. While TFI’s strategic plan includes a mission statement; a list of threats, goals, and objectives; and means and strategies, it does not include any discussion or analysis of TFI’s resource needs. Moreover, TFI’s strategic plan lists all four of its goals, and each of the means and strategies under each goal, as equivalent: it does not indicate any prioritization among its various goals, means, and strategies. The Under Secretary for TFI said that he uses the annual budget process to align resources with priorities.
However, two reasons suggest why the results of the budget process do not necessarily reflect TFI’s strategic priorities. First, many other factors that affect the budget process are unrelated to TFI’s priorities. The amount of resources TFI seeks is integrated into a larger Treasury budget request, which may entail modifying TFI’s request. Congress may then choose to provide TFI with more or fewer resources than Treasury requested. Second, the annual budget process reflects priorities only for a given year, unlike strategic plans, which are intended to be multiyear documents and thus reflect longer-term priorities. Further, the linkage between the resources allocated to each TFI component and its workload is unclear. Estimated workload measures for each of TFI’s components show a growth in workload since 2005, but it is unclear how this growth relates to resource increases. For example, one measure of FinCEN’s workload—the number of SARs it must analyze—has increased 50 percent, while the number of employees in FinCEN has increased 3 percent. In addition, TEOAF has seen an 83 percent increase in the value of seized assets it manages, while the number of FTEs has grown 10 percent. Further, the number of OFAC licensing actions increased 56 percent while the number of FTEs grew 18 percent. Additionally, OIA experienced a more than 500 percent increase in intelligence taskings from 2006 to 2008 and has received a 200 percent increase in FTEs. Finally, TFFC estimates that its workload related to developing policy papers, legislative and rulemaking papers, trips, and public outreach events increased between 100 and 200 percent from 2005 to 2009; its FTEs grew nearly 80 percent from 2005 to 2008. According to TFI officials, their ability to allocate resources to their highest priorities is constrained in some circumstances.
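One way to make the workload-versus-staffing comparison concrete is to convert the percentage changes cited above into implied growth in workload handled per FTE. The sketch below is illustrative only: it uses just the percentages in this report (the underlying totals are not given, and TFFC’s range estimate is omitted), so the results are back-of-the-envelope figures, not Treasury data:

```python
# Illustrative only: implied growth in workload per FTE, derived from the
# percentage changes cited in the report. Underlying totals are not given,
# so these are rough figures, not Treasury data.
growth = {
    # component (workload measure): (workload growth, FTE growth)
    "FinCEN (SARs analyzed)":      (0.50, 0.03),
    "TEOAF (seized-asset value)":  (0.83, 0.10),
    "OFAC (licensing actions)":    (0.56, 0.18),
    "OIA (intelligence taskings)": (5.00, 2.00),
}

for component, (workload, fte) in growth.items():
    # Relative change in workload per FTE over the period.
    per_fte = (1 + workload) / (1 + fte) - 1
    print(f"{component}: workload per FTE up about {per_fte:.0%}")
```

Under these assumptions, workload per FTE rose in every component, from roughly a third for OFAC to a doubling for OIA, which is the pattern that prompts the question of how resource levels relate to workload.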
The Under Secretary and other TFI officials identified activities related to Iran and North Korea as persistent priorities. However, OFAC officials noted that in spite of the importance of Iran- and North Korea-related activities, they must expend a significant amount of resources on implementing the Cuba embargo. With regard to acting on specific licensing requests for exports and travel to Cuba, according to OFAC officials, they have little flexibility under the law. OFAC is required to process all license applications that it receives. For 2005 through 2008, this amounted to more than 200,000 licensing actions—more than 95 percent of which related to the Cuba program. In 2008 alone, OFAC responded to nearly 60,000 licensing requests related to the Cuba travel program. OFAC officials characterized this situation as a resource burden. In contrast, according to OFAC officials, they have some flexibility regarding how they enforce the Cuba sanctions program, for example, through the assessment of civil penalties for violations. According to OFAC officials, for many years (through 2005), OFAC assessed a large number of civil penalties related to the Cuba travel regulations. As violations of these regulations have a relatively small financial penalty associated with them, the average penalty amount was relatively low. Since 2006, according to OFAC officials, they have consciously utilized the flexibility they are allowed in order to dedicate their enforcement resources to higher-value areas (e.g., those related to trade with Cuba, Iran, and North Korea). As a result, the number of penalties assessed annually related to the Cuba sanctions program has dropped significantly, from 498 in 2005 to 46 in 2008. At the same time, the average value of OFAC’s civil penalties for violations of all sanctions programs has increased significantly, from approximately $2,400 in 2005 to nearly $31,000 in 2008. 
Despite efforts by some components, TFI management has not yet conducted comprehensive activities to address the key principles of strategic workforce planning. According to the Under Secretary, TFI’s workforce is its greatest asset, and ensuring that it is the right size and includes the right skills is critical to TFI’s future ability to achieve its mission. Prior GAO work has identified key principles to assist agencies in conducting strategic workforce planning. Among these principles are (1) involving top management, employees, and other stakeholders in developing, communicating, and implementing the strategic workforce plan, and (2) monitoring and evaluating the agency’s progress toward its human capital goals and the contribution that human capital results have made toward achieving programmatic results. According to TFI officials, some TFI components have taken the initiative individually to perform some strategic workforce planning activities. Specifically, as a Treasury bureau, FinCEN has an internal human resources group that, among other things, performs some strategic workforce planning activities. For example, according to FinCEN officials, they undertook an effort to identify mission critical occupations, which resulted in designating three positions as mission critical. As a result, FinCEN developed plans to address human capital challenges related to these occupations and regularly reports to Treasury’s Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer on its progress. In addition, OIA has taken a variety of steps to address human capital challenges. For example, according to OIA officials, to address challenges in recruiting and retaining intelligence analysts, OIA cataloged the human capital flexibilities available to provide recruiting and retention incentives. 
As a result, OIA officials indicated that they have identified and are now able to utilize a variety of human capital flexibilities, such as student loan repayment to attract and retain staff and the Pat Roberts Intelligence Scholarship Program to pay for the continuing educational needs of its analysts. Nonetheless, TFI management has not yet conducted comprehensive activities to address the key principles of strategic workforce planning for TFI as a whole. TFI top management has not set the overall direction and goals of workforce planning or evaluated progress toward any human capital goals. The Under Secretary for TFI told us that since the creation of TFI, growing OIA’s human capital has been one workforce planning priority. He also stated that he has conducted additional targeted workforce planning in consultation with the heads of the largest TFI components, such as FinCEN. However, neither TFI officials nor Treasury human capital officials were aware of any explicit workforce planning goals set by TFI management. In addition, TFI officials were unaware of any formal reviews or reports that evaluated the contribution of human capital results to achieving programmatic goals. Moreover, TFI currently lacks an effective process for conducting comprehensive strategic workforce planning. According to the Under Secretary for TFI, most workforce planning takes place as a part of the annual budget process. TFI has not established a separate, comprehensive strategic workforce planning process led by TFI management. According to an official from Treasury’s Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer, the office has provided targeted workforce planning assistance to OIA and, in spring 2009, began discussing how they could assist TFI in broader workforce planning efforts. In particular, they cited the need to conduct an overall workforce analysis and succession planning. 
According to TFI’s Senior Resource Manager, TFI’s workforce planning mainly occurs as a component of the annual budget preparation process. As a part of this process, individual components can request additional staff resources for priority initiatives they identify. TFI management then evaluates these individual proposals and determines what will be included in TFI’s budget request. Without the benefit of comprehensive strategic workforce planning to assist in identifying solutions, it is unclear whether TFI will be able to effectively address persistent workforce challenges. These include the following: Lack of comprehensive training needs assessment. While some TFI components have assessed the training needs of their staff, there has been no similar TFI-wide effort. Without such an assessment, it is unclear whether TFI staff are being prepared to address the challenges posed by illicit financing in the future. Obstacles to hiring intelligence analysts. According to officials from OIA and Treasury’s Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer, OIA continues to be at a competitive disadvantage relative to other agencies in the Intelligence Community regarding recruiting. Specifically, according to Treasury officials, most other agencies in the Intelligence Community can hire intelligence analysts into the excepted service, thus bypassing the need for competitive selection of candidates. In addition, OIA lacks direct hire authority for its intelligence analysts. According to OIA officials, these challenges make OIA’s hiring process more complicated and lengthy than those of other agencies in the Intelligence Community. TFI has not yet developed an appropriate set of performance measures, but it continues working to improve them.
Since TFI was formed, its individual performance measures have varied substantially in number and the extent to which they address attributes of successful performance measures that GAO has identified. For fiscal year 2008, the performance measures of TFI’s components vary in the extent to which they address attributes of successful performance measures identified by GAO. TFI’s performance measures address many, but not all, of these attributes. According to Treasury officials, TFI recognizes the need to improve its performance measures, and is developing a new set of measures to assess its performance. However, our review of a draft version of these revised measures suggests that some concerns would remain if they are implemented as proposed. As shown in figure 4, since its formation in 2004, TFI’s performance measures have varied over time. TFI reported on 11 total measures in fiscal year 2005, 9 measures in fiscal year 2006, 10 measures in fiscal year 2007, and 20 measures in fiscal year 2008. The number and content of performance measures have varied within components over time, as well. For example, FinCEN had 6 measures in fiscal year 2007 and 16 in fiscal year 2008. Components have frequently introduced new measures only to discontinue them in subsequent years. For instance, OFAC reported 4 measures in fiscal year 2005, and then discontinued 3 for fiscal year 2006. OIA, newly formed in 2004, reported 1 performance measure in fiscal year 2006 and none in the following years. This inconsistency in TFI’s performance measures makes it difficult for managers to use performance data in making management decisions. According to TFI officials, the sharp increase in the number of performance measures reported in fiscal year 2008 was a response to the evaluation and recommendations of the Office of Management and Budget’s (OMB) Program Assessment Rating Tool (PART) in 2005 and 2006.
The PART process identified potential enhancements to FinCEN’s performance measures, leading to the inclusion of new measures for FinCEN. FinCEN officials said that Treasury performance officials asked that the newly developed measures be added to FinCEN’s contribution to the fiscal year 2008 performance and accountability report. According to officials in Treasury’s Office of Strategic Planning and Performance Management (OSPPM), the nature of FinCEN’s work is operational, making it easier to evaluate the bureau’s performance. TFI’s policy-making components, such as TFFC, have found it more difficult to develop meaningful performance metrics. The performance measures TFI currently has in place also vary in the degree to which they exhibit the attributes of successful performance measures. Prior GAO work has identified nine attributes of successful performance measures. Table 1 shows the nine attributes, their definitions, and the potentially adverse consequences of not having the attribute. TFI’s performance measures address many of these attributes of successful performance measures, but do not fully address other attributes. Figure 5 represents our assessment of TFI’s 20 performance measures versus the key attributes of successful performance measures. According to our analysis, TFI’s 20 measures have many of the attributes of successful performance measures, including the following. Measurable target. All 20 of TFI’s measures have measurable, numerical targets in place. Numerical targets allow officials to more easily assess whether goals and objectives were achieved because comparisons can be made between projected performance and actual results. Limited overlap. We found limited overlap among TFI’s 20 measures, that is, little or no unnecessary or duplicate information provided by the measures. Objectivity. We found all of TFI’s measures to be objective, or reasonably free from significant bias. Governmentwide priorities. 
We also determined that TFI’s 20 measures are linked to broader priorities such as cost-effectiveness, quality, and timeliness. However, the measures did not fully satisfy the following attributes. Linkage. Six TFI measures are not clearly linked to Treasury goals. For example, TEOAF measures the proportion of its forfeitures that come from high-impact cases. However, it is unclear why high-impact cases in particular are measured as opposed to all cases. Our analysis could not link TEOAF’s measure to broader agencywide goals related to removing or reducing threats to national security. Core program activities. Seven TFI measures do not sufficiently cover core program activities. For example, OFAC has three main responsibilities related to the administration of sanctions: (1) issuing licenses, (2) designation programs, and (3) enforcement through civil penalties. However, OFAC’s single performance measure assesses only cases involving civil penalties resulting from sanctions violations. Balance. We found that TFI’s set of performance measures is not balanced. In fiscal year 2008, TFI reported on 20 measures: 16 related to FinCEN’s programs and activities, 1 to OFAC, 1 to TEOAF, 2 to TFFC, and none to OIA. As a result, a disproportionate number of measures (16) relate to administering and enforcing the BSA and none to the analysis of financial intelligence. An emphasis on one priority at the expense of others may skew the overall performance and preclude TFI’s managers from understanding the effectiveness of their programs in supporting Treasury’s overall mission and goals. In addition, the lack of balance exhibited by TFI’s measures may give the impression that administering the BSA is prioritized over other functions, such as the analysis of financial intelligence or administration of licensing and designations programs.
Treasury officials acknowledge the limits of TFI’s current performance measurement and have been working to enhance its measures, by replacing them with a single new TFI-wide measure. According to OSPPM officials, they began an initiative to overhaul TFI’s performance measurement in 2007. OSPPM officials stated that TFI’s performance measures did not effectively reflect the impact of TFI’s activities. After consultation with each TFI component, OSPPM decided to design a new composite measure that will provide a way to assess how TFI is performing overall as a unit. The new measure would outline the roles and functions of TFI’s components and evaluate the outcomes of their activities. However, the process of reforming TFI’s performance measurement has not been completed. The implementation of the new measure is still uncertain, although TFI management approved its use in May 2009 and components finalized the measures they will contribute. According to a Treasury official, OSPPM decided on the format of the new composite measure after researching other federal agencies’ approaches to performance measurement, as well as those of management consultancies in the private sector. The composite measure takes a similar form to the measure implemented for Treasury’s Office of Technical Assistance (OTA), first reported in Treasury’s fiscal year 2008 performance and accountability report. The measure aims to provide a more comprehensive snapshot of the outcome of OTA’s activities by measuring impact and traction. The composite measure for TFI will align the two Treasury outcomes that relate to their activities with TFI’s performance goals and focus areas, according to Treasury. Each focus area corresponds with a TFI component (OFAC, OIA, TFFC, and FinCEN). The components will track 3 to 6 performance measures and will assign a numeric score to the performance at the end of the year. Each component’s measures will be combined to reach an overall score for the component. 
In the end, an overall score for TFI will be determined by averaging the individual scores of the components. All TFI components except TEOAF have been involved in the process of developing the composite measure. Both OSPPM and TEOAF officials stated that TEOAF would not be included, since its work did not logically fit in one of the focus areas. OIA, TFFC, and OFAC have developed new measures to assess the impact of their activities. FinCEN will use 5 of its existing measures for its contribution to the composite measure. TFI faces significant challenges in developing and implementing the new composite measure. There is an inherent difficulty in creating quantitative measures for policy organizations, whose activities may not be easily represented with numbers. Many TFI managers pointed to the difficulty of making qualitative information measurable for performance reporting. While the initiative to improve TFI’s performance measurement is a positive step, our preliminary analysis raises concerns regarding the extent to which the new TFI composite measure will allow full and accurate assessment of TFI’s performance. For example, we identified the following concerns: Objectivity and reliability of survey-based measures. OIA has developed surveys to measure the timeliness, relevance, and accuracy of its intelligence support, all-source analysis, and security and counterintelligence. The survey respondents are internal customers of OIA’s products within Treasury such as the Deputy Secretary, Under Secretaries, Assistant Secretaries, Deputy Assistant Secretaries, and senior staff. The objectivity of the surveys is not clear given that respondents’ answers may be biased because they have a vested interest in the outcome, as it is a reflection on their performance. The reliability of the measures is also questionable, as only between 7 and 13 internal customers—rather than external customers in the Intelligence Community—will be asked to complete the survey. 
TFI believes that while there is no perfect method for evaluating OIA’s performance, the surveys are an effective means for Treasury policymakers to make that assessment. TFI officials also noted their plan to survey customers in other parts of the Intelligence Community in 2010. Lack of validation for some components’ self-assessment-based measures. Some components’ performance measures rely exclusively on self-assessments by component managers and lack external verification. For example, TFFC has 4 measures for which management will compile supporting information and assign a high, medium, or low rating for TFFC’s performance in that area. Treasury and TFI acknowledge (but have not yet addressed) the lack of a process to independently verify TFFC’s self-assessment. OTA’s composite measure, which OSPPM officials cited as similar to TFI’s, also uses elements of self-assessment, but those results are independently validated by an external source and reviewed by Treasury. Calculation of overall TFI score. According to TFI, to calculate the composite measure, individual components’ results will be averaged into a single TFI measure. Since the components are not all contributing the same number of measures to the overall composite measure, averaging components’ scores means components’ individual performance measures are not weighted equally in TFI’s overall measure. Since its creation in 2004, TFI has undertaken a variety of activities to address a broad range of national security threats, such as enhancing the use of financial intelligence against terrorism and the proliferation of weapons of mass destruction. In addition, TFI and its components have taken some steps toward more effective management of TFI as an organization. For instance, TFI and some components have developed strategic plans and have performed workforce planning activities.
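Returning to the calculation concern discussed above: the unequal weighting that results from averaging component scores can be shown with a small sketch. The scores below are invented for illustration (the report does not publish components’ actual measures or scores); the scheme simply mirrors the described approach of averaging within components and then across them:

```python
# Invented scores illustrating the averaging concern: component A
# contributes 5 measures, component B contributes 3, yet each component
# counts equally in the overall average.
component_scores = {
    "A": [80, 90, 70, 85, 75],  # 5 measures
    "B": [60, 65, 55],          # 3 measures
}

# The approach as described: average within a component, then average
# the component scores.
per_component = {c: sum(v) / len(v) for c, v in component_scores.items()}
overall_by_component = sum(per_component.values()) / len(per_component)

# Alternative: weight every individual measure equally.
all_measures = [s for v in component_scores.values() for s in v]
overall_by_measure = sum(all_measures) / len(all_measures)

print(overall_by_component)  # 70.0 (A = 80, B = 60)
print(overall_by_measure)    # 72.5 (A's 5 measures carry more weight)
```

Under the component-averaging approach, each of B’s three measures carries more weight in the overall score than each of A’s five, which is the distortion at issue.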
Nonetheless, TFI has not fully utilized some management tools to create an integrated organization with a consistent, well-documented approach to planning and managing its operations. As a result, additional opportunities for improvement exist. First, despite the critical role interagency collaboration plays in many of TFI’s functions and general approval by key interagency partners, such collaboration may not be as effective as it could be in certain respects. TFI and some of its interagency partners had strikingly different perceptions about the quality of collaborative efforts involving multilateral forums. Lacking clearly documented policies and procedures for collaboration in this area, interagency partners were unsure how to resolve their differences. Without a mechanism to monitor and report on the results of such interagency collaboration, TFI officials were generally unaware that differences existed or what impact they might be having, and thus saw no need to take steps to understand or address them. Second, TFI management has not clearly aligned its resources with its priorities. Without clear, consistent objectives and an understanding of how resources are aligned with them, it may be unclear to Congress, TFI’s interagency partners, or even TFI staff what TFI’s priorities are and whether TFI has sufficient resources to address them. In addition, while some components have undertaken workforce planning activities, TFI management has yet to implement a comprehensive strategic workforce planning process for TFI as a whole. As a result, TFI may be at risk of not having the workforce required to address future national security threats. Finally, TFI’s performance reporting has been uneven. Though TFI has been working to improve its ability to effectively measure its performance as a unit, TFI has not yet developed a set of performance measures that embody the attributes of successful performance measures. 
Without a set of effective performance measures, it is difficult to judge how well TFI is achieving its mission. To help strengthen Treasury’s ability to achieve its strategic goal of preventing terrorism and promoting the nation’s security through strengthened international financial systems, we recommend that the Secretary of the Treasury direct the Under Secretary for Terrorism and Financial Intelligence to take the following four actions: 1. develop and implement, in consultation with interagency partners participating in international forums related to anti-money laundering and counterterrorist financing issues, (a) compatible policies, procedures, and other means to operate across agency boundaries and (b) a mechanism for monitoring, evaluating, and reporting on interagency collaboration; 2. develop and implement policies and procedures for aligning resources with TFI’s strategic priorities; 3. develop and implement a TFI-wide process, including written guidance, that addresses the key principles of strategic workforce planning; and 4. ensure that TFI’s performance measures exhibit the key attributes of successful performance measures. We provided a draft copy of this report to the Departments of the Treasury, State, and Justice. Justice and State declined to provide comments. Treasury provided comments, which are reprinted in appendix IV. Treasury’s comments highlighted what it views as TFI’s significant contributions since 2005. Treasury said that TFI has helped reduce the threat of terrorist financing, stating that al Qaeda is in its worst financial position in at least 3 years. In addition, Treasury highlighted TFI’s efforts to counter the financing of proliferation, for example, using Executive Order 13382 to isolate banks, companies, and individuals tied to North Korean, Iranian, and Syrian proliferation. 
Treasury’s comments also discussed ongoing or planned actions related to our four recommendations: With regard to our recommendation that TFI develop and implement policies and procedures to operate across agency boundaries and develop a mechanism for monitoring, evaluating, and reporting on interagency collaboration, the Under Secretary for Terrorism and Financial Intelligence indicated that his counterparts in other agencies have never expressed concerns about process or substance to him regarding TFI’s collaboration. Nonetheless, Treasury stated that it would redouble its efforts to coordinate with other agencies, but did not identify specific steps it plans to take. As discussed in our report, we recommend that such steps include developing clear policies for conducting and monitoring the results of interagency collaboration. In response to our recommendation to develop and implement policies and procedures for aligning resources with TFI’s strategic priorities, Treasury indicated that TFI is working to improve its processes in this area. While Treasury stated that its use of the annual budget process has worked well to match resources to strategic goals, we have concluded that the annual budget process does not necessarily reflect TFI’s strategic priorities, in part because it reflects priorities for only a given year and not longer- term priorities. In relation to our recommendation to develop and implement a TFI-wide process to address the key principles of strategic workforce planning, Treasury commented that it is working with Johns Hopkins University’s Capstone Consulting to develop a workforce planning model for Treasury. As a part of this effort, TFI plans to develop and disseminate written guidance establishing a process to align resources with TFI and Treasury strategic goals in the next 12 months. 
Finally, Treasury stated that it will work to implement our recommendation to ensure that TFI’s performance measures exhibit the key attributes of successful performance measures. At the same time, Treasury contends that TFI’s true performance will often be best conveyed through briefings to those who possess the appropriate security clearances. To ensure that such briefings provide systematic evidence regarding TFI’s performance, they should include assessments based on performance measures that exhibit the key attributes of successful performance measures discussed in this report. Further, we would note that using classified information to help assess TFI’s performance does not preclude TFI from developing unclassified performance measures or from producing an unclassified assessment of its performance. In fact, Treasury’s statements about the financial condition of al Qaeda referenced in its response to this report provide Treasury’s assessment of TFI’s impact on al Qaeda without disclosing classified information. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees as well as the Secretaries of the Treasury, State, and Justice. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-4347 or [email protected]. GAO staff who contributed to this report are included in appendix V. To analyze the Office of Terrorism and Financial Intelligence’s (TFI) use of its tools to address national security threats, we reviewed Treasury reports and documents related to its efforts since 2004. 
For example, we reviewed all of Treasury’s performance and accountability reports and FinCEN’s annual reports since TFI was formed. We also reviewed other documents discussing activities involving TFI, including the National Money Laundering Strategy and the National Strategy for Combating Terrorism. To identify practices for enhancing interagency collaboration, we reviewed prior GAO reports. We then interviewed officials from Treasury and its key interagency partners (the Departments of State and Justice) to understand TFI’s processes for interagency collaboration. To analyze TFI’s efforts to conduct strategic resource planning, we reviewed a variety of Treasury documents. To identify TFI’s priorities, we reviewed documents such as Treasury’s performance and accountability reports, congressional testimony by the Under Secretary for Terrorism and Financial Intelligence, and TFI’s Web site. In addition, we reviewed documentation from TFI and its components related to strategic planning, including the current strategic plans for TFI and each component. Further, we reviewed TFI data regarding the number of staff (full-time equivalents or FTE) in each TFI component for fiscal years 2005 through 2008. We then obtained data from TFI components to illustrate how their workload has changed over time. We determined that these data are sufficiently reliable for the purpose of this report. Additionally, we reviewed prior GAO work related to principles of effective strategic workforce planning. To determine the extent to which TFI’s practices reflect these principles, we interviewed TFI management, including the Under Secretary for Terrorism and Financial Intelligence and managers from TFI components. Further, we interviewed officials from Treasury’s Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer. 
To analyze the extent to which TFI’s performance measures provide an effective assessment of TFI’s performance, we reviewed Treasury’s reporting on TFI’s performance. Specifically, we analyzed the performance measures contained in Treasury’s performance and accountability reports for fiscal years 2005 through 2008. We also evaluated TFI’s performance measures for fiscal year 2008 against key attributes of successful performance measures. To perform this evaluation, two analysts independently assessed each of the performance measures against the nine attributes identified in prior GAO work, using the specifications for each attribute included in that work. Those analysts then met to discuss and resolve any differences in the results of their analyses. A supervisor then reviewed and approved the final results of the analysis. To obtain information on TFI’s process to improve its set of performance measures, we interviewed officials from each TFI component and Treasury’s Office of Strategic Planning and Performance Management. We also obtained a copy of draft TFI performance measures that will be presented to the Office of Management and Budget for its review. We then interviewed officials from each TFI component and Treasury’s Office of Strategic Planning and Performance Management regarding how the data for these draft performance measures would be obtained and how the overall TFI composite measure would be developed. We also present data on TFI staffing and budget for fiscal years 2005 through 2008. As these data are presented for background purposes, we did not assess their reliability. Appendix II: Current U.S.
Sanctions Programs

This appendix lists the Office of Foreign Assets Control’s country-based sanctions programs and its list-based sanctions programs, such as the sanctions targeting Liberia (former regime of Charles Taylor).

In addition to the individual named above, Jeff Phillips (Assistant Director), Jason Bair, Lisa Reijula, Katherine Brentzel, Martin de Alteriis, and Mary Moutsos made key contributions to this report. Elizabeth Curda, Karen Deans, Cardell Johnson, Barbara Keller, and Hugh Paquette also contributed to the report. | In 2004, Congress combined preexisting and newly created units to form the Office of Terrorism and Financial Intelligence (TFI) within the Department of the Treasury (Treasury). TFI's mission is to integrate intelligence and enforcement functions to (1) safeguard the financial system against illicit use and (2) combat rogue nations, terrorist facilitators, and other national security threats. In the 5 years since TFI's creation, questions have been raised about how TFI is managed and allocates its resources. As a result, GAO was asked to analyze how TFI (1) implements its functions, particularly in collaboration with interagency partners, (2) conducts strategic resource planning, and (3) measures its performance. To conduct this analysis, GAO reviewed Treasury and TFI planning documents, performance reports, and workforce data, and interviewed officials from Treasury and its key interagency partners. TFI undertakes five functions, each implemented by a TFI component, in order to achieve its mission. TFI officials cite the analysis of financial intelligence as a critical part of TFI's efforts because it underlies TFI's ability to utilize many of its tools. They said that the creation of OIA was critical to Treasury's ability to effectively identify illicit financial networks. To achieve its mission, TFI's five components often work with each other, other U.S. government agencies, the private sector, or foreign governments.
Officials from TFI and its interagency partners cited strong collaboration in many areas, such as effective information sharing between FinCEN and the Justice Department (Justice). Officials differed, however, about the quality of interagency collaboration involving international forums. Treasury officials who led this collaboration stated that it runs smoothly and that they were unaware of any significant concerns, while Justice and State officials reported declining collaboration and unclear mechanisms to enhance or sustain it. While TFI and some of its components have conducted selected strategic resource planning activities, TFI as a unit has not fully adopted key practices that enhance such efforts. For example, TFI and its components have produced multiple strategic planning documents in recent years, but the objectives in some of these documents are not clearly aligned with resources needed to achieve them. As a result, it may be unclear whether TFI has sufficient resources to address its objectives. Also, though TFI has undertaken some workforce planning activities, it lacks a process for performing comprehensive strategic workforce planning. Thus, it is unclear whether TFI is able to effectively address persistent workforce challenges. Also, TFI has not yet developed appropriate performance measures, changing their number and substance each year. Though TFI's current measures fully address many attributes of effective performance measures, they do not cover all TFI core program activities. TFI officials acknowledge the need for improvement and have worked since 2007 to develop one overall performance measure to assess TFI. Yet questions remain about when TFI will implement its new measure and whether it will effectively gauge TFI's performance. |
DHS has increased its global outreach efforts. Historically, DHS and its components, working with State, have coordinated with foreign partners on an ongoing basis to promote aviation security enhancements through ICAO and other multilateral and bilateral outreach efforts. For example, DHS and TSA have coordinated through multilateral groups such as the European Commission and the Quadrilateral Group—comprising the United States, the EU, Canada, and Australia—to establish agreements to develop commensurate air cargo security systems. On a bilateral basis, the United States has participated in various working groups to facilitate coordination on aviation security issues with several nations, such as those that make up the EU, Canada, and Japan. The United States has also established bilateral cooperative agreements to share information on security technology with the United Kingdom, Germany, France, and Israel, among others. In addition, TSA has finalized agreements with ICAO to provide technical expertise and assistance to ICAO in the areas of capacity building and security audits, and serves as the United States’ technical representative on ICAO’s Aviation Security Panel and the panel’s various Working Groups. In the wake of the December 2009 incident, DHS increased its outreach efforts. For example, to address security gaps highlighted by the December incident, DHS has coordinated with Nigeria to deploy Federal Air Marshals on flights operated by U.S. carriers bound for the United States from Nigeria. 
Further, in early 2010, the Secretary of Homeland Security participated in five regional summits—Africa, the Asia/Pacific region, Europe, the Middle East, and the Western Hemisphere—with the Secretary General of ICAO, foreign ministers and aviation officials, and international industry representatives to discuss current aviation security threats and develop an international consensus on the steps needed to address remaining gaps in the international aviation security system. Each of these summits resulted in a Joint Declaration on Aviation Security in which, generally, the parties committed to work through ICAO and on an individual basis to enhance aviation security. Subsequently, during the September 2010 ICAO Assembly, the 190 member states adopted a Declaration on Aviation Security, which encompassed the principles of the Joint Declarations produced by the five regional summits. Through the declaration, member states recognized the need to strengthen aviation security worldwide and agreed to take nine actions to enhance international cooperation to counter threats to civil aviation, which include, among other things:

- strengthening and promoting the effective application of ICAO Standards and Recommended Practices, with particular focus on Annex 17, and developing strategies to address current and emerging threats;
- strengthening security screening procedures, enhancing human factors, and utilizing modern technologies to detect prohibited articles and support research and development of technology for the detection of explosives, weapons, and prohibited articles in order to prevent acts of unlawful interference;
- developing and implementing strengthened and harmonized measures and best practices for air cargo security, taking into account the need to protect the entire air cargo supply chain; and
- providing technical assistance to states in need, including funding, capacity building, and technology transfer to effectively address security threats to civil aviation,
in cooperation with other states, international organizations and industry partners. TSA has increased coordination with foreign partners to enhance security standards and practices. In response to the August 2006 plot to detonate liquid explosives on board commercial air carriers bound for the United States, TSA initially banned all liquids, gels, and aerosols from being carried through the checkpoint and, in September 2006, began allowing passengers to carry on small, travel-size liquids and gels (3 fluid ounces or less) using a single quart-size, clear plastic, zip-top bag. In November 2006, in an effort to harmonize its liquid-screening standards with those of other countries, TSA revised its procedures to match those of other select nations. Specifically, TSA began allowing 3.4 fluid ounces of liquids, gels, and aerosols onboard aircraft, which is equivalent to 100 milliliters—the amount permitted by the EU and other countries such as Canada and Australia. This harmonization effort was perceived to be a success, and ICAO later adopted the liquid, gel, and aerosol screening standards and procedures implemented by TSA and other nations as a recommended practice. TSA has also worked with foreign governments to draft international air cargo security standards. According to TSA officials, the agency has worked with foreign counterparts over the last 3 years to draft Amendment 12 to ICAO’s Annex 17, and to generate support for its adoption by ICAO members. The amendment, which was adopted by the ICAO Council in November 2010, will set forth new standards related to air cargo such as requiring members to establish a system to secure the air cargo supply chain (the flow of goods from manufacturers to retailers). TSA has also supported the International Air Transport Association’s (IATA) efforts to establish a secure supply chain approach to screening cargo for its member airlines and to have these standards recognized internationally.
Moreover, following the October 2010 bomb attempt in cargo originating in Yemen, DHS and TSA, among other things, reached out to international partners, IATA, and the international shipping industry to emphasize the global nature of transportation security threats and the need to strengthen air cargo security through enhanced screening and preventative measures. TSA also deployed a team of security inspectors to Yemen to provide that country’s government with assistance and guidance on their air cargo screening procedures. In addition, TSA has focused on harmonizing air cargo security standards and practices in support of its statutory mandate to establish a system to physically screen 100 percent of cargo on passenger aircraft—including the domestic and inbound flights of United States and foreign passenger operations—by August 2010. In June 2010 we reported that TSA has made progress in meeting this mandate as it applies to domestic cargo, but faces several challenges in meeting the screening mandate as it applies to inbound cargo, related, in part, to TSA’s limited ability to regulate foreign entities. As a result, TSA officials stated that the agency would not be able to meet the mandate as it applies to inbound cargo by the August 2010 deadline. We recommended that TSA develop a plan, with milestones, for how and when the agency intends to meet the mandate as it applies to inbound cargo. TSA concurred with this recommendation and, in June 2010, stated that agency officials were drafting milestones as part of a plan that would generally require air carriers to conduct 100 percent screening by a specific date. At a November 2010 hearing before the Senate Committee on Commerce, Science, and Transportation, the TSA Administrator testified that TSA aims to meet the 100 percent screening mandate as it applies to inbound air cargo by 2013. 
In November 2010 TSA officials stated that the agency is coordinating with foreign countries to evaluate the comparability of their air cargo security requirements with those of the United States, including the mandated screening requirements for inbound air cargo on passenger aircraft. According to TSA officials, the agency has begun to develop a program that would recognize the air cargo security programs of foreign countries if TSA deems those programs provide a level of security commensurate with TSA’s programs. In total, TSA plans to coordinate with about 20 countries, which, according to TSA officials, were selected in part because they export about 90 percent of the air cargo transported to the United States on passenger aircraft. According to officials, TSA has completed a 6-month review of France’s air cargo security program and is evaluating the comparability of France’s requirements with those of the United States. TSA officials also said that, as of November 2010, the agency has begun to evaluate the comparability of air cargo security programs for the United Kingdom, Israel, Japan, Singapore, New Zealand, and Australia, and plans to work with Canada and several EU countries in early 2011. TSA expects to work with the remaining countries through 2013. TSA is working with foreign governments to encourage the development and deployment of enhanced screening technologies. TSA has also coordinated with foreign governments to develop enhanced screening technologies that will detect explosive materials on passengers. According to TSA officials, the agency frequently exchanges information with its international partners on progress in testing and evaluating various screening technologies, such as bottled-liquid scanner systems and advanced imaging technology (AIT). In response to the December 2009 incident, the Secretary of Homeland Security has emphasized through outreach efforts the need for nations to develop and deploy enhanced security technologies. 
Following TSA’s decision to accelerate the deployment of AIT in the United States, the Secretary has encouraged other nations to consider using AIT units to enhance the effectiveness of passenger screening globally. As a result, several nations, including Australia, Canada, Finland, France, the Netherlands, Nigeria, Germany, Poland, Japan, Ukraine, Russia, Republic of Korea, and the UK, have begun to test or deploy AIT units or have committed to deploying AITs at their airports. For example, the Australian Government has committed to introducing AIT at international terminals in 2011. Other nations, such as Argentina, Chile, Fiji, Hong Kong, India, Israel, Kenya, New Zealand, Singapore, and Spain are considering deploying AIT units at their airports. In addition, TSA hosted an international summit in November 2010 that brought together approximately 30 countries that are deploying or considering deploying AITs at their airports to discuss AIT policy, protocols, best practices, as well as safety and privacy concerns. However, as discussed in our March 2010 testimony, TSA’s use of AIT has highlighted several challenges relating to privacy, costs, and effectiveness that remain to be addressed. For example, because the AIT presents a full-body image of a person during the screening process, concerns have been expressed that the image is an invasion of privacy. Furthermore, as noted in our March 2010 testimony, it remains unclear whether the AIT would have been able to detect the weapon used in the December 2009 incident based on the preliminary TSA information we have received. We will continue to explore these issues as part of our ongoing review of TSA’s AIT deployment, and expect the final report to be issued in the summer of 2011. TSA conducts foreign airport assessments. TSA efforts to assess security at foreign airports—airports served by U.S. 
aircraft operators and those from which foreign air carriers operate service to the United States—also serve to strengthen international aviation security. Through TSA’s foreign airport assessment program, TSA utilizes select ICAO standards to assess the security measures used at foreign airports to determine if they maintain and carry out effective security practices. TSA also uses the foreign airport assessment program to help identify the need for, and secure, aviation security training and technical assistance for foreign countries. In addition, during assessments, TSA provides on-site consultations and makes recommendations to airport officials or the host government to immediately address identified deficiencies. In our 2007 review of TSA’s foreign airport assessment program, we reported that of the 128 foreign airports that TSA assessed during fiscal year 2005, TSA found that 46 (about 36 percent) complied with all ICAO standards, whereas 82 (about 64 percent) did not meet at least one ICAO standard. In our 2007 review we also reported that TSA had not yet conducted its own analysis of its foreign airport assessment results, and that additional controls would help strengthen TSA’s oversight of the program. Moreover, we reported, among other things, that TSA did not have controls in place to track the status of scheduled foreign airport assessments, which could make it difficult for TSA to ensure that scheduled assessments are completed. We also reported that TSA did not consistently track and document host government progress in addressing security deficiencies identified during TSA airport assessments. 
As such, we made several recommendations to help TSA strengthen oversight of its foreign airport assessment program, including, among other things, that TSA develop controls to track the status of foreign airport assessments from initiation through completion; and develop a standard process for tracking and documenting host governments’ progress in addressing security deficiencies identified during TSA assessments. TSA agreed with our recommendations and provided plans to address them. Near the end of our 2007 review, TSA had begun work on developing an automated database to track airport assessment results. In September 2010 TSA officials told us that they are now exploring ways to streamline and standardize that automated database, but will continue to use it until a more effective tracking mechanism can be developed and deployed. We plan to further evaluate TSA’s implementation of our 2007 recommendations during our ongoing review of TSA’s foreign airport assessment program, which we plan to issue in the fall of 2011. A number of key challenges, many of which are outside of DHS’s control, could impede its ability to enhance international aviation security standards and practices. Agency officials, foreign country representatives, and international association stakeholders we interviewed said that these challenges include, among other things, nations’ voluntary participation in harmonization efforts, differing views on aviation security threats, varying global resources, and legal and cultural barriers. According to DHS and TSA officials, these are long-standing global challenges that are inherent in diplomatic processes such as harmonization, and will require substantial and continuous dialogue with international partners. As a result, according to these officials, the enhancements that are made will likely occur incrementally, over time. Harmonization depends on voluntary participation. 
The framework for developing and adhering to international aviation standards is based on voluntary efforts from individual states. While TSA may require that foreign air carriers with operations to, from, or within the United States comply with any applicable U.S. emergency amendments to air carrier security programs, foreign countries, as sovereign nations, generally cannot be compelled to implement specific aviation security standards or mutually accept other countries’ security measures. International representatives have noted that national sovereignty concerns limit the influence the United States and its foreign partners can have in persuading any country to participate in international harmonization efforts. As we reported in 2007 and 2010, participation in ICAO is voluntary. Each nation must initiate its own involvement in harmonization, and the United States may have limited influence over its international partners. Countries view aviation security threats differently. As we reported in 2007 and 2010, some foreign governments do not share the United States government’s position that terrorism is an immediate threat to the security of their aviation systems, and therefore may not view international aviation security as a priority. For example, TSA identified the primary threats to inbound air cargo as the introduction of an explosive device in cargo loaded on a passenger aircraft, and the hijacking of an all-cargo aircraft for its use as a weapon to inflict mass destruction. However, not all foreign governments agree that these are the primary threats to air cargo or believe that there should be a distinction between the threats to passenger air carriers and those to all-cargo carriers. 
According to a prominent industry association as well as foreign government representatives with whom we spoke, some countries view aviation security enhancement efforts differently because they have not been a target of previous aviation-based terrorist incidents, or for other reasons, such as overseeing a different airport infrastructure with fewer airports and less air traffic. Resource availability affects security enhancement efforts. In contrast to more developed countries, many less developed countries do not have the infrastructure or financial or human resources necessary to enhance their aviation security programs. For example, according to DHS and TSA officials, such countries may find the cost of purchasing and implementing new aviation security enhancements, such as technology, to be prohibitive. Additionally, some countries implementing new policies, practices, and technologies may lack the human resources—for example, trained staff—to implement enhanced security measures and oversee new aviation security practices. Some foreign airports may also lack the infrastructure to support new screening technologies, which can take up a large amount of space. These limitations are more common in less developed countries, which may lack the fiscal and human resources necessary to implement and sustain enhanced aviation security measures. With regard to air cargo, TSA officials also cautioned that if TSA were to impose strict cargo screening standards on all inbound cargo, it is likely many nations would be unable to meet the standards in the near term. Imposing such screening standards in the near future could result in increased costs for international passenger travel and for imported goods, and possible reductions in passenger traffic and foreign imports. According to TSA officials, strict standards could also undermine TSA’s ongoing cooperative efforts to develop commensurate security systems with international partners. 
To help address the resource deficit and build management capacity in other nations, the United States provides aviation security assistance—such as training and technical assistance—to other countries. TSA, for example, works in various ways with State and international organizations to provide aviation security assistance to foreign partners. In one such effort, TSA uses information from the agency’s foreign airport assessments to identify a nation’s aviation security training needs and provide support. In addition, TSA’s Aviation Security Sustainable International Standards Team (ASSIST), composed of security experts, conducts an assessment of a country’s aviation security program at both the national and airport level and, based on the results, suggests action items in collaboration with the host nation. State also provides aviation security assistance to other countries, in coordination with TSA and foreign partners, through its Anti-Terrorism Assistance (ATA) program. Through this program, State uses a needs assessment—a snapshot of a country’s antiterrorism capability—to evaluate prospective program participants and provide needed training, equipment, and technology in support of aviation security, among other areas. State and TSA officials have acknowledged the need to develop joint coordination procedures and criteria to facilitate identification of global priorities and program recipients. We will further explore TSA and State efforts to develop mechanisms to facilitate interagency coordination on capacity building through our ongoing work. Legal and cultural factors can also affect harmonization. Legal and cultural differences among nations may hamper DHS’s efforts to harmonize aviation security standards. For example, some nations, including the United States, limit, or even prohibit, the sharing of sensitive or classified information on aviation security procedures with other countries.
Canada’s Charter of Rights and Freedoms, which limits the data it can collect and share with other nations, demonstrates one such impediment to harmonization. According to TSA officials, the United States has established agreements to share sensitive and classified information with some countries; however, without such agreements, TSA is limited in its ability to share information with its foreign partners. Additionally, the European Commission reports that several European countries, by law, limit the exposure of persons to radiation other than for medical purposes, a potential barrier to acquiring some passenger screening technologies, such as AIT. Cultural differences also serve as a challenge in achieving harmonization because aviation security standards and practices that are acceptable in one country may not be in another. For example, international aviation officials explained that the nature of aviation security oversight varies by country—some countries rely more on trust and established working relationships to facilitate security standard compliance than direct government enforcement. Another example of a cultural difference is the extent to which countries accept the images AIT units produce. AIT units produce a full-body image of a person during the screening process; to varying degrees, governments and citizens of some countries, including the United States, have expressed concern that these images raise privacy issues. TSA is working to address this issue by evaluating possible display options that would include a “stick figure” or “cartoon-like” form to provide enhanced privacy protection to the individual being screened while still allowing the unit operator or automated detection algorithms to detect possible threats. Other nations, such as the Netherlands, are also testing the effectiveness of this technology. 
Although DHS has made progress in its efforts to harmonize international aviation security standards and practices in key areas such as passenger and air cargo screening, officials we interviewed said that there remain areas in which security measures vary across nations and would benefit from harmonization efforts. For example, as we reported in 2007, the United States requires all passengers on international flights who transfer to connecting flights at United States airports to be rescreened prior to boarding their connecting flight. In comparison, according to EU and ICAO officials, the EU has implemented “one-stop security,” allowing passengers arriving from EU and select European airports to transfer to connecting flights without being rescreened. Officials and representatives told us that although there has been ongoing international discussion on how to more closely align security measures in these and other areas, additional dialogue is needed for countries to better understand each other’s perspectives. According to the DHS officials and foreign representatives with whom we spoke, these and other issues that could benefit from harmonization efforts will continue to be explored through ongoing coordination with ICAO and through other multilateral and bilateral outreach efforts. Our 2007 review of TSA’s foreign airport assessment program identified challenges TSA experienced in assessing security at foreign airports against ICAO standards and recommended practices, including a lack of available inspector resources and host government concerns, both of which may affect the agency’s ability to schedule and conduct assessments for some foreign airports. We reported that TSA deferred 30 percent of its scheduled foreign airport visits in 2005 due to the lack of available inspectors, among other reasons.
TSA officials said that in such situations they sometimes used domestic inspectors to conduct scheduled foreign airport visits, but also stated that the use of domestic inspectors was undesirable because these inspectors lacked experience conducting assessments in the international environment. In September 2010 TSA officials told us that they continue to use domestic inspectors to assist in conducting foreign airport assessments and air carrier inspections— approximately 50 domestic inspectors have been trained to augment the efforts of international inspectors. We also previously reported that representatives of some foreign governments consider TSA’s foreign airport assessment program an infringement of their authority to regulate airports and air carriers within their borders. Consequently, foreign countries have withheld access to certain types of information or denied TSA access to areas within an airport, limiting the scope of TSA’s assessments. We plan to further assess this issue, as well as other potential challenges, as part of our ongoing review of TSA’s foreign airport assessment program, which we plan to issue in the fall of 2011. Mr. Chairman, this completes my prepared statement. I look forward to responding to any questions you or other members of the committee may have at this time. For additional information about this statement, please contact Stephen M. Lord at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, staff who made key contributions to this statement were Steve D. Morris, Assistant Director; Carissa D. Bryant; Christopher E. Ferencik; Amy M. Frazier; Barbara A. Guffy; Wendy C. Johnson; Stanley J. Kostyla; Thomas F. Lombardi; Linda S. Miller; Matthew M. Pahl; Lisa A. Reijula; Rebecca Kuhlmann Taylor; and Margaret A. Ullengren. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The attempted December 25, 2009, terrorist attack and the October 2010 bomb attempt involving air cargo originating in Yemen highlight the ongoing threat to aviation and the need to coordinate security standards and practices to enhance security with foreign partners, a process known as harmonization. This testimony discusses the Department of Homeland Security's (DHS) progress and challenges in harmonizing international aviation security standards and practices and facilitating compliance with international standards. This testimony is based on reports GAO issued from April 2007 through June 2010, and ongoing work examining foreign airport assessments. For this work, GAO obtained information from DHS and the Transportation Security Administration (TSA) and interviewed TSA program officials, foreign aviation officials, representatives from international organizations such as the International Civil Aviation Organization (ICAO), and industry associations, about ongoing harmonization and TSA airport assessment efforts and challenges. In the wake of the December 2009 terrorist incident, DHS and TSA have strived to enhance ongoing efforts to harmonize international security standards and practices through increased global outreach, coordination of standards and practices, use of enhanced technology, and assessments of foreign airports. For example, in 2010 the Secretary of Homeland Security participated in five regional summits aimed at developing an international consensus to enhance aviation security.
In addition, DHS and TSA have coordinated with foreign governments to harmonize air cargo security practices to address the statutory mandate to screen 100 percent of air cargo transported on U.S.-bound passenger aircraft by August 2010, which TSA aims to meet by 2013. Further, in the wake of the December 2009 incident, the Secretary of Homeland Security has encouraged other nations to consider using advanced imaging technology (AIT), which produces an image of a passenger's body that screeners use to look for anomalies such as explosives. As a result, several nations have begun to test and deploy AIT or have committed to deploying AIT units at their airports. Moreover, following the October 2010 cargo bomb attempt, TSA also implemented additional security requirements to enhance air cargo security. To facilitate compliance with international security standards, TSA assesses the security efforts of foreign airports as defined by ICAO international aviation security standards. In 2007, GAO reported, among other things, that TSA did not always consistently track and document host government progress in addressing security deficiencies identified during foreign airport assessments and recommended that TSA track and document progress in this area. DHS and TSA have made progress in their efforts to enhance international aviation security through these harmonization efforts and related foreign airport assessments; however, a number of key challenges, many of which are beyond DHS's control, exist. For example, harmonization depends on the willingness of sovereign nations to voluntarily coordinate their aviation security standards and practices. In addition, foreign governments may view aviation security threats differently, and therefore may not consider international aviation security a high priority. 
Resource availability, which is a particular concern for developing countries, as well as legal and cultural factors may also affect nations' security enhancement and harmonization efforts. In addition to challenges facing DHS's harmonization efforts, in 2007 GAO reported that TSA experienced challenges in assessing foreign airport security against international standards and practices, such as a lack of available international inspectors and concerns host governments had about being assessed by TSA, both of which may affect the agency's ability to schedule and conduct assessments for some foreign airports. GAO is exploring these issues as part of an ongoing review of TSA's foreign airport assessment program, which GAO plans to issue in the fall of 2011. In response to prior GAO recommendations that TSA, among other things, track the status of foreign airport assessments, DHS concurred and is working to address the recommendations. TSA provided technical comments on a draft of the information contained in this statement, which GAO incorporated as appropriate.
Over the last few decades, the number of participants in and the complexity of the market for home mortgage loans in the United States have increased. In the past, a borrower seeking credit for a home purchase would typically obtain financing from a local financial institution, such as a bank, a savings association, or a credit union. This institution would normally hold the loan as an interest-earning asset in its portfolio. All activities associated with servicing the loan, including accepting payments, initiating collection actions for delinquent payments, and conducting foreclosure if necessary, would have been performed by the originating institution. Over the last few decades, however, the market for mortgages has changed. Now, institutions that originate home loans generally do not hold such loans as assets on their balance sheets but instead sell them to others. Among the largest purchasers of home mortgage loans are Fannie Mae and Freddie Mac, but prior to the surge in mortgage foreclosures that began in late 2006 and continues today, private financial institutions were also active buyers, particularly from 2003 to 2006. Under a process known as securitization, the GSEs and private firms then typically package these loans into pools and issue securities known as mortgage-backed securities (MBS) that pay interest and principal to their investors, which include other financial institutions, pension funds, and other institutional investors. As shown in figure 1, as of June 30, 2010, banks and other depository institutions that originate and hold mortgages accounted for about 28 percent of all U.S. mortgage debt outstanding. Over 50 percent of the mortgage debt was owned or in MBS issued by one of the housing GSEs or covered by a Ginnie Mae guarantee. About 13 percent was in MBS issued by non-GSEs, known as private-label securities, with the remaining 5 percent held by other entities, including life insurance companies.
With the increased use of securitization for mortgages, multiple entities now perform specific roles regarding the loans, including the mortgage servicer, a trustee for the securitized pool, and the investors in the MBS that were issued based on the pooled loans. After a mortgage originator sells its loans to another investor or to an institution that will securitize them, another financial institution or other entity is usually appointed as the servicer to manage payment collections and other activities associated with these loans. Mortgage servicers, which can be large mortgage finance companies or commercial banks, earn a fee for acting as the servicing agent on behalf of the owner of a loan. In some cases, the servicer is the same institution that originated the loan and, in other cases, it may be a different institution. The duties of servicers for loans securitized into MBS are specified in a contract with investors called a pooling and servicing agreement (PSA) and are generally performed in accordance with certain industry-accepted servicing practices—such as those specified in the servicing guidelines issued by the GSEs. Servicing duties can involve sending borrowers monthly account statements, answering customer service inquiries, collecting monthly mortgage payments, maintaining escrow accounts for property taxes and hazard insurance, and forwarding proper payments to the mortgage owners. In exchange for providing these services, the servicer collects a servicing fee, typically at least 0.25 percent of the loans’ unpaid principal balance annually. In the event that a borrower becomes delinquent on loan payments, servicers also initiate and conduct foreclosures in order to obtain the proceeds from the sale of the property on behalf of the owners of the loans, but servicers typically do not receive a servicing fee on delinquent loans.
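The fee arithmetic above can be sketched as follows. The 0.25 percent annual rate comes from the text; the loan balance below, and the assumption that the fee is retained in twelve equal monthly portions, are hypothetical illustrations of ours rather than figures from the report.

```python
def annual_servicing_fee(unpaid_principal: float, annual_rate: float = 0.0025) -> float:
    """Annual servicing fee as a percentage (here 0.25%) of the unpaid principal balance."""
    return unpaid_principal * annual_rate

def monthly_servicing_fee(unpaid_principal: float, annual_rate: float = 0.0025) -> float:
    """Assumed monthly portion, retained from each remittance to the mortgage owner."""
    return annual_servicing_fee(unpaid_principal, annual_rate) / 12

balance = 200_000.0  # hypothetical unpaid principal balance
print(f"Annual fee:  ${annual_servicing_fee(balance):,.2f}")   # $500.00
print(f"Monthly fee: ${monthly_servicing_fee(balance):,.2f}")  # $41.67
```

Because the fee is a percentage of the unpaid balance, it shrinks as the loan amortizes, and, as the text notes, it is typically not collected at all on delinquent loans.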
When loans are sold, they are generally packaged together in pools and held in trusts pursuant to the terms and conditions set out in the underlying PSA. These pools of loans are the assets backing the securities that are issued and sold to investors in the secondary market. Another entity will act as trustee for the securitization trust. Trustees act as asset custodians on behalf of the trust, keeping records of the purchase and receipt of the MBS and holding the liens of the mortgages that secure the investment. Trustees are also the account custodians for the trust—pass-through entities that receive mortgage payments from servicers and disburse them among investors according to the terms of the PSA. Although trustees are the legal owners of record of the mortgage loans on behalf of the trust, they have neither an ownership stake nor a beneficial interest in the underlying loans of the securitization. However, any legal action a servicer takes on behalf of the trust, such as foreclosure, generally is brought in the name of the trustee. The beneficial owners of these loans are investors in MBS, typically large institutions such as pension funds, mutual funds, and insurance companies. Figure 2 shows how the mortgage payments of borrowers whose loans have been securitized flow to mortgage servicers and are passed to the trust for the securitized pool. The trustee then disburses the payments made to the trust to each of the investors in the security. The mortgage market has four major segments that are defined, in part, by the credit quality of the borrowers and the types of mortgage institutions that serve them. Prime—Serves borrowers with strong credit histories and provides the most attractive interest rates and mortgage terms.
This category includes borrowers who meet the prime loan standards of either Fannie Mae or Freddie Mac but are borrowing an amount above the GSEs’ federally mandated upper limit, known as “jumbo loans.” Nonprime—Encompasses two categories of loans: Alt-A—Generally serves borrowers whose credit histories are close to prime, but whose loans have one or more high-risk features, such as limited documentation of income or assets or the option of making monthly payments that are lower than required for a fully amortizing loan. Subprime—Generally serves borrowers with blemished credit and features low down payments and higher interest rates and fees than the prime market. Government-insured or -guaranteed—Serves borrowers who may have difficulty qualifying for prime mortgages but features interest rates competitive with prime loans in return for payment of insurance premiums or guarantee fees. The Federal Housing Administration and Department of Veterans Affairs operate the two main federal programs that insure or guarantee mortgages. Across all of these market segments, two types of loans are common: fixed-rate mortgages, which have interest rates that do not change over the life of the loan; and adjustable-rate mortgages (ARM), which have interest rates that can change periodically based on changes in a specified index. The nonprime market segment recently featured a number of nontraditional products. For example, the interest rate on Hybrid ARM loans is fixed during an initial period and then “resets” to an adjustable rate for the remaining term of the loan. Another type of loan, payment-option ARM loans, allowed borrowers to choose from multiple payment options each month, which may include minimum payments lower than what would be needed to cover any of the principal or all of the accrued interest. This feature is known as “negative amortization” because the outstanding loan balance may increase over time as any interest not paid is added to the loan’s unpaid principal balance.
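A minimal sketch of the negative-amortization mechanics described above: when a month's payment is below the interest accrued, the shortfall is capitalized into the unpaid principal balance, so the balance grows even though the borrower is paying. The interest rate, payment amount, and starting balance below are hypothetical illustrations, not figures from the report.

```python
def next_balance(balance: float, annual_rate: float, payment: float) -> float:
    """One month of a payment-option ARM: unpaid interest is added to principal."""
    interest = balance * annual_rate / 12   # interest accrued this month
    return balance + interest - payment     # shortfall (if any) capitalizes

balance = 200_000.0
for month in range(12):
    # At 6% annually, roughly $1,000 of interest accrues monthly, but only $800 is paid.
    balance = next_balance(balance, annual_rate=0.06, payment=800.0)
print(round(balance, 2))  # balance has grown despite 12 consecutive payments
```

The same function also illustrates ordinary amortization: with a payment above the accrued interest, the balance declines each month instead.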
If a borrower defaults on a mortgage loan secured by the home, the mortgage owner is entitled to pursue foreclosure to obtain title to the property in order to sell it to repay the loan. The mortgage owner or servicer generally initiates foreclosure once the loan becomes 90 days or more delinquent. Once the borrower is in default, the servicer must decide whether to pursue a home retention workout or other foreclosure alternative or to initiate foreclosure. State foreclosure laws establish certain procedures that mortgage servicers must follow in conducting foreclosures and establish minimum time periods for various aspects of the foreclosure process. These laws and their associated timelines may vary widely by state. As shown in figure 3, states generally follow one of two methods for their foreclosure process: judicial, with a judge presiding over the process in a court proceeding, or statutory, with the process proceeding outside the courtroom in accordance with state law. Because of the additional legal work, foreclosure generally takes longer and is more costly to complete in the states that primarily follow a judicial foreclosure process. Several federal agencies share responsibility for regulating the banking industry and securities markets in relation to the origination and servicing of mortgage loans. Chartering agencies oversee federally and state- chartered banks and their mortgage lending subsidiaries. At the federal level, OCC oversees federally chartered banks. OTS oversees savings associations (including mortgage operating subsidiaries). The Federal Reserve oversees insured state-chartered member banks, while FDIC oversees insured state-chartered banks that are not members of the Federal Reserve System. Both the Federal Reserve and FDIC share oversight with the state regulatory authority that chartered the bank. 
The Federal Reserve also has general authority over lenders that may be owned by federally regulated holding companies but are not federally insured depository institutions. Many federally regulated bank holding companies that have insured depository subsidiaries, such as national or state-chartered banks, also may have nonbank subsidiaries, such as mortgage finance companies. Under the Bank Holding Company Act of 1956, as amended, the Federal Reserve has jurisdiction over such bank holding companies and their nonbank subsidiaries that are not regulated by another functional regulator. Other regulators are also involved in U.S. mortgage markets. For example, Fannie Mae’s and Freddie Mac’s activities are overseen by the Federal Housing Finance Agency. Staff from the Securities and Exchange Commission also review the filings made by private issuers of MBS. Federal banking regulators have responsibility for ensuring the safety and soundness of the institutions they oversee and for promoting stability in the financial markets and enforcing compliance with applicable consumer protection laws. To achieve these goals, regulators establish capital requirements for banks, conduct on-site examinations and off-site monitoring to assess their financial condition, and monitor their compliance with applicable banking laws, regulations, and agency guidance. Among the laws that apply to residential mortgage lending and servicing are the Fair Housing and Equal Credit Opportunity Acts, which address nondiscrimination in the granting of credit and in lending; the Truth in Lending Act (TILA), which addresses disclosure requirements for consumer credit transactions; and the Real Estate Settlement Procedures Act of 1974 (RESPA), which requires transparency in mortgage closing documents. Entities that service mortgage loans but are not themselves depository institutions are called nonbank servicers.
In some cases these nonbank servicers are subsidiaries of banks or other financial institutions, but some are also not affiliated with financial institutions at all. Nonbank servicers have historically been subject to little or no direct oversight by federal regulators. We have previously reported that state banking regulators oversee independent lenders and mortgage servicers by generally requiring business licenses that mandate meeting net worth, funding, and liquidity thresholds. The Federal Trade Commission is responsible for enforcing certain federal consumer protection laws for brokers and lenders that are not depository institutions, including state-chartered independent mortgage lenders. However, the Federal Trade Commission is not a supervisory agency; instead, it enforces various federal consumer protection laws through enforcement actions when complaints by others are made to it. Using data from large and subprime servicers and government-sponsored mortgage entities representing nearly 80 percent of mortgages, we estimated that abandoned foreclosures are rare—the total from January 2008 to March 2010 represents less than 1 percent of vacant homes. When servicers’ efforts to work out repayment plans or loan modifications with borrowers who are delinquent on their loans are exhausted, staff from the six servicers we interviewed said they analyze certain loans to determine whether foreclosure will be financially beneficial. Based on our analysis of loan data provided by these six servicers covering the period of January 2008 through March 2010, servicers most often made this decision before initiating foreclosure, but in many cases did not discover that foreclosure would not be financially beneficial until after initiating the process. 
While we estimated that instances in which servicers initiate but then abandon a foreclosure without selling or taking ownership of a property had not occurred frequently across the United States, certain communities experienced larger numbers of such abandoned foreclosures. Specifically, we found that abandoned foreclosures tended to involve low-value properties in economically distressed communities and loans that were nonprime or securitized. When borrowers default on their loans, home mortgage loan servicers take a variety of actions in an effort to keep them in their homes, by, for example, working out repayment plans and loan modifications. The stakeholders that we interviewed—including servicers, regulators, and government and community officials—agreed that pursuing efforts to keep borrowers in their homes was preferable to foreclosure. According to servicers’ representatives, servicers engage in various efforts to reach borrowers during the delinquency period through letters, phone calls, and personal visits. For example, representatives of one servicer noted that during a typical foreclosure, company representatives make over 120 phone calls and send 10 to 12 inquiries to borrowers in an effort to bring payments up to date or modify the loan. As borrower outreach continues, servicers also send “breach” letters after borrowers have missed a certain number of payments, warning borrowers of the possibility of foreclosure. However, if these initial efforts to bring the borrower back to a paying status are not successful, staff from the six servicers we contacted—representing about 57 percent of U.S. first-lien mortgages—told us they typically determine whether to initiate foreclosure as a routine part of their collections and loss mitigation process after a loan has been delinquent for at least 90 days. Representatives of servicers told us that they might decide to initiate foreclosure even though they were still pursuing loan workout options with a borrower.
One noted that the initiation of foreclosure, in certain instances, might serve as an incentive for the borrower to begin making mortgage payments again. According to the staff of the six servicers we interviewed, they usually conduct an analysis of certain loans in their servicing portfolio before initiating foreclosure to determine if foreclosure will be financially beneficial. These analyses—often called an equity analysis—compare the projected value the property might realize in a subsequent sale against the sum of all projected costs associated with completing the foreclosure and holding the property until it can be sold. Servicers use the results of these equity analyses to decide whether to foreclose on a loan or conduct a charge-off in lieu of a foreclosure. In general, if the equity analysis indicates that the projected proceeds from the eventual sale of the property exceed the projected costs of reaching that sale by a certain amount, the servicer will proceed with the foreclosure. However, when the costs of foreclosure exceed the expected proceeds from selling the property, servicers typically decide that foreclosure is not financially beneficial. In these cases, servicers will usually cease further foreclosure-related actions, operationally charging off the loan and removing it from their servicing portfolios, and advising the mortgage owner—a GSE or a private securitization trust—that the loan should be acknowledged as a loss. In determining which loans to charge off in lieu of foreclosure, some servicers maintain thresholds for property values or potential income from pursuing foreclosure. For example, some of the servicers we interviewed told us that they usually, but not always, considered charge-offs in lieu of foreclosure on properties with values roughly below $10,000 to $30,000.
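The equity-analysis decision described above reduces to a simple comparison of projected sale proceeds against projected costs. The sketch below illustrates that comparison; the cost figures and the zero default margin are hypothetical placeholders of ours, since the report does not specify servicers' actual cost models or required margins.

```python
def equity_analysis(projected_sale_value: float,
                    foreclosure_costs: float,
                    holding_costs: float,
                    required_margin: float = 0.0) -> str:
    """Return the servicer's decision under a simple proceeds-vs-costs rule."""
    net_proceeds = projected_sale_value - (foreclosure_costs + holding_costs)
    if net_proceeds > required_margin:
        return "proceed with foreclosure"
    return "charge off in lieu of foreclosure"

# A low-value property where projected costs exceed expected proceeds:
print(equity_analysis(projected_sale_value=8_000.0,
                      foreclosure_costs=10_000.0,
                      holding_costs=3_000.0))
# A property where expected sale proceeds comfortably exceed projected costs:
print(equity_analysis(projected_sale_value=150_000.0,
                      foreclosure_costs=15_000.0,
                      holding_costs=10_000.0))
```

The first case mirrors the low-value properties (roughly below $10,000 to $30,000) that servicers told GAO they often charge off rather than foreclose.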
Freddie Mac servicing guidance requires a review for a charge-off in lieu of foreclosure when the unpaid principal balance of a loan is below $5,000 on conventional mortgages or less than $2,000 on government-insured or -guaranteed loans, such as Federal Housing Administration or Department of Veterans Affairs mortgages. Based on our reviews of bank regulatory guidance and discussions with federal and state officials, no laws or regulations exist that require servicers to complete foreclosure once the process has been initiated. Therefore, servicers can abandon the foreclosure process at any point. Furthermore, according to staff from the servicers we interviewed, initiating foreclosure can cost as little as $1,000, and these costs may be recovered from the proceeds of any subsequent sale of the property. Based on our analysis of servicer data, servicers most often charged off loans in lieu of foreclosure without initiating foreclosure proceedings. However, in many cases the decisions to charge off loans in lieu of foreclosure were made after foreclosure initiation, and a significant portion of these represented abandoned foreclosures. We obtained data from six servicers, including four of the largest servicers and two servicers that specialized in nonprime loans. These six servicers collectively serviced about 30 million loans, representing 57 percent of outstanding first-lien home mortgage loans as of the end of 2009. According to our analysis of the servicer-reported data, these six servicers decided to conduct charge-offs in lieu of foreclosure for approximately 46,000 loans between January 2008 and March 2010, as shown in table 1. For over 27,600 loans, or about 60 percent, the servicers made the decision to charge off in lieu of foreclosure without initiating foreclosure proceedings. Of these loans, over 19,400, or 70 percent of the properties, were occupied by the borrower or a tenant.
As will be discussed later in this report, when properties remain occupied, they are less likely to contribute to the problems in their neighborhoods generally associated with foreclosed and vacant properties. However, in other cases, servicers initiated foreclosure but later decided to conduct a charge-off in lieu of foreclosure. Charge-offs in lieu of foreclosure that occurred after a foreclosure was initiated were more likely to result in a vacant property than charge-offs that occurred without a foreclosure initiation. As shown in table 1 earlier, these six servicers initiated foreclosure on over 18,300 loans between January 2008 and March 2010 that they later decided to charge off in lieu of foreclosure. For over 8,700, or 48 percent of these loans, this decision was associated with a vacancy and, therefore, an abandoned foreclosure—that is, a property for which foreclosure was initiated but not completed and that is vacant. We found a statistically significant association between foreclosure initiation and vacancy for the charge-offs in lieu of foreclosure in our sample. That is, we found that initiating and then suspending foreclosure was associated with a higher probability that a property would be vacant. A potential reason that vacancies occur more frequently when servicers decide to pursue a charge-off in lieu of foreclosure after initiating foreclosure than before is confusion among borrowers about the impact of the foreclosure initiation. Specifically, local and state officials, community groups, and academics told us that borrowers may be confused about their rights to remain in their homes during foreclosure and may vacate the home before the process is completed. Alternatively, servicers could be more likely to pursue a charge-off in lieu of foreclosure if a property becomes vacant before foreclosure initiation, since the value of the property may deteriorate rapidly.
Nevertheless, as the data show, even when servicers opt to conduct a charge-off in lieu of foreclosure before initiating foreclosure, some borrowers may still vacate the home. Anecdotally, we heard from a variety of stakeholders that this decision could be due to financial hardship or pressure exerted by the lender in collecting delinquent mortgage payments, among other reasons. Data indicating the overall number of abandoned foreclosures in the United States did not exist, nor was such information being collected by the federal government agencies we contacted or by organizations in the states or local communities that we reviewed. Local governments, bank regulators, and private organizations collect information on foreclosures, vacancies, and housing market conditions, but for various reasons the phenomenon of abandoned foreclosures goes largely unrecorded. Local officials we spoke with in Baltimore, Chicago, Cleveland, Detroit, and Lowell, Massachusetts, identified similar difficulties in tracking abandoned foreclosures. For example:

Accurately identifying the lender and borrower on a given property is often difficult due to outdated or incorrect mortgage information.

Ascertaining which properties are abandoned foreclosures is often difficult because formal data on the foreclosure status of properties often do not exist.

Determining whether properties are actually vacant is often difficult if a house has been used seasonally or as a rental.

Nonetheless, researchers in some cities we visited are attempting to compile data. In Cleveland, academic researchers have used court documents in an attempt to ascertain why a sample of foreclosure cases has stalled. In a number of cities, such as Chula Vista, California, the city governments have enacted ordinances that require lenders to register homes that become vacant. In Buffalo, a nonprofit organization has collected information on the status of foreclosure cases in Erie County, where Buffalo is located.
Although subject to uncertainty, we estimated that the number of abandoned foreclosures that occurred in the United States between January 2008 and March 2010 was between approximately 14,500 and 34,600. As will be discussed, although abandoned foreclosures create significant problems for certain communities, they represent less than 1 percent of vacant properties and an even smaller percentage of the total housing stock. Table 2 shows abandoned foreclosures as a percent of various housing market metrics. To determine the prevalence of abandoned foreclosures in the entire U.S. market, we estimated the number of properties that (1) were charged off in lieu of foreclosure after a foreclosure was initiated and (2) are vacant. In developing our estimate, we used the data from the six mortgage servicers and data from Fannie Mae and Freddie Mac—which together represent roughly 80 percent of outstanding U.S. mortgages—and augmented this information with vacancy data from USPS. Using this information, we estimated the total number of abandoned foreclosures nationwide under varying assumptions about the remaining 20 percent of the mortgages outstanding. According to the data reported to us, abandoned foreclosures represent a small portion of overall vacancies in the United States, but are highly concentrated in a small number of communities. Based on our analysis of servicer data from January 2008 to March 2010, we found abandoned foreclosures in 2,452 of the approximately 43,000 postal zip codes throughout the country, but only 167 of those zip codes have 10 or more of these properties. From January 2008 through March 2010, several zip codes in Chicago, Cleveland, Detroit, Indianapolis, and other large cities had 35 or more abandoned foreclosures. We found several zip codes in Detroit that had over 100 abandoned foreclosures.
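The report does not detail the estimation method beyond scaling the sample under "varying assumptions" about the remaining 20 percent of mortgages, so the following is only one illustrative way to bound a national total from a sample with known market coverage. The bounding assumptions (zero occurrence versus proportional occurrence in the unobserved share) are ours, not GAO's methodology; the observed count below reuses the six-servicer figure cited earlier purely for illustration, and the resulting bounds are not GAO's published range.

```python
def national_bounds(observed, coverage):
    """Bound a national total from a sample covering a known market share.

    Lower bound: assume no additional cases outside the sample.
    Upper bound: assume the unobserved share has the same rate as the sample.
    """
    lower = float(observed)
    upper = observed / coverage
    return lower, upper

# 8,700 abandoned foreclosures observed in the six-servicer sample (per the report),
# treated here as if the sample covered 80 percent of the market.
low, high = national_bounds(observed=8_700, coverage=0.80)
print(f"between {low:,.0f} and {high:,.0f} properties")
```

Tighter bounds would require information about how the unobserved 20 percent of mortgages differs from the sample, which is presumably what GAO's varying assumptions supplied.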
In addition, several smaller areas contain zip codes with high concentrations of the properties, such as those including Toledo, Akron, and Youngstown, Ohio; Flint, Michigan; Fort Myers, Florida; and Gary and Fort Wayne, Indiana. Analyzing abandoned foreclosures at the U.S. Census-designated Metropolitan Statistical Area (MSA) level also suggests that such cases are likely to be concentrated in a limited number of communities. According to our analysis, 80 percent of the total abandoned foreclosures that we identified in our servicer data were in 50 of the roughly 400 MSAs; 20 MSAs account for 61 percent of the properties; and 30 MSAs account for 72 percent. Table 3 shows the MSAs with the most abandoned foreclosures. Because the data we used to produce these estimates may not be generalizable, the location of the remaining abandoned foreclosures could differ from that suggested in table 3. For example, the Flint, Michigan; Orlando-Kissimmee, Florida; South Bend-Mishawaka, Indiana; and Canton-Massillon, Ohio, MSAs are notable examples just outside the top 20. Although they do not have large numbers of abandoned foreclosures, some small MSAs throughout the Midwest are likely to be similarly challenged by the existence of such properties given their size. As shown above in table 3, these 20 MSAs had roughly 5,090 properties in our sample that were charged off in lieu of foreclosure by the servicer without initiating foreclosure but were also vacant. Because the servicer will no longer conduct maintenance on these properties or attempt to sell them to a new owner, they can create problems for their communities similar to those resulting from abandoned foreclosures. Certain community, property, and loan characteristics may help to explain some of the concentrations of abandoned foreclosures.
In particular, based on our sample, abandoned foreclosures occurred most frequently in economically struggling areas and in distressed urban areas of particular cities. We also found these properties in areas that experienced significant recent booms and declines in housing. In general, abandoned foreclosures also are more likely to involve low-value properties and nonprime and securitized loans. Economically struggling cities appear to experience the greatest number of charge-offs in lieu of foreclosure and, therefore, abandoned foreclosures. As shown in figure 4, most of the abandoned foreclosures have occurred in Midwestern industrial MSAs. In particular, our analysis of servicer data indicates that over 50 percent of all the abandoned foreclosures we identified were in Michigan, Indiana, and Ohio. Seven of the 20 MSAs with the most abandoned foreclosures are located in Ohio. Recent research also supports that this type of phenomenon is occurring largely in industrial Midwestern states. Although the deterioration of economic conditions in 2008 and 2009 affected the entire nation, these Midwestern areas have been especially hard hit with population declines, high unemployment, and decreases in housing values. For example, Detroit lost about 28 percent of its population from 1980 to 2006, and the unemployment rate in Michigan was 13.0 percent versus 9.6 percent nationally as of September 2010. According to a recent report, although Michigan did not seem to experience a dramatic appreciation in housing prices before the surge in mortgage foreclosures that began in late 2006, it did witness a significant decline in housing prices after 2006, largely because the automobile manufacturing industry was severely hit by the current crisis. Like many areas in the United States, several of the MSAs in table 3 experienced significant increases in unemployment rates.
For example, the unemployment rate in the Detroit-Warren-Livonia MSA increased from 4.2 percent in December 2000 to 16.1 percent in December 2009. Similarly, in the Flint, Michigan, MSA, the unemployment rate increased by more than 10 percentage points between 2000 and 2009. High unemployment may have exacerbated the negative consequences of nonprime lending activity. For example, community development officials in Detroit explained that many people who did not have mortgages on their homes were enticed to obtain a home equity loan to make repairs, then lost their homes to foreclosure because they lost their jobs or the payments were not sustainable. However, many of the economic problems facing areas such as Cleveland, Detroit, and other Midwest cities where we identified large numbers of abandoned foreclosures predate the economic turmoil that started around 2008. For example, in 2007, the poverty rate in Flint, Michigan, was 16.8 percent; the poverty rate in Memphis, Tennessee, was 18.8 percent; and the poverty rates in both Toledo and Youngstown, Ohio, were 14.8 percent. Consequences of these challenges include weak real estate markets and other characteristics that are associated with abandoned foreclosures. Abandoned foreclosures also are likely concentrated in distressed urban areas. Our analysis shows that distressed urban areas within MSAs had significant numbers of abandoned foreclosures. Even in cities with generally high property values, such as Chicago, we found that abandoned foreclosures were largely driven by activity in a few zip codes. Our analysis also shows that, on average, the zip codes with the most abandoned foreclosures had larger declines in home prices (37 percent) than the national average of 32 percent following peak levels in 2005. Some distressed zip codes in Detroit, Michigan, saw home prices drop by more than 60 percent from peak levels reached between 2004 and 2006.
Stakeholders also told us that abandoned foreclosures were most often associated with urban areas with largely minority populations, high foreclosure rates, blight, crime, and vandalism. For example, one academic speculated that there may be pockets of distressed housing in the inner parts of cities whose housing markets as a whole may not be so bad; these areas likely have low-value houses that may end up as abandoned foreclosures. In addition, one servicer representative said that abandoned foreclosures could be found in the urban core of any large city. Concentrations of abandoned foreclosures have also occurred in areas that experienced significant house price increases followed by declines. States such as California, Florida, Nevada, and Arizona, which experienced the largest increases in property values prior to 2006, also have experienced the largest decreases in property values in the last few years. For example, according to a recent report, property values in these states declined 47 percent from peak to trough. As a result, these states have many underwater borrowers—that is, borrowers who owe more on their mortgages than their properties are worth (negative equity). Significant overdevelopment and overspeculation prior to the economic crisis also may have caused investors to abandon their properties after housing prices declined. For example, representatives of a community group in Atlanta told us that, starting in 2000, investors increasingly constructed new housing on speculation in a neighborhood close to downtown Atlanta. Representatives said that some of this new construction was never occupied and, after house prices began to decline in early 2007, much of it was vandalized. Without a market for these properties, servicers may have subsequently abandoned foreclosures on many of them because they would not earn enough at foreclosure sale to cover losses associated with foreclosure and disposition of the properties.
Among the 20 MSAs in table 3, Jacksonville, Cape Coral-Fort Myers, Tampa-St. Petersburg-Clearwater, Miami-Fort Lauderdale-Pompano Beach, and, to a lesser extent, Atlanta appear to fit into the category of housing boom-related abandoned foreclosures. For example, according to Global Insight estimates, average home prices in the Miami-Fort Lauderdale-Pompano Beach MSA increased 144 percent from the end of 2000 to the second quarter of 2007 before declining by 40 percent from 2007 to the third quarter of 2010. Regardless of the city or neighborhood, most abandoned foreclosures occur on low-value properties. Data from servicers, Fannie Mae, and Freddie Mac indicate that the foreclosures that are not completed most often involve properties with low values. Evidence from the econometric model that we applied to GSE loan-level data also suggests that lower property values increased the likelihood that a loan would be charged off in lieu of foreclosure rather than being subject to alternative foreclosure actions such as a deed-in-lieu of foreclosure or short sale. For example, the median value of the properties Freddie Mac decided to charge off in lieu of foreclosure was $10,000, compared to $130,000 for deeds-in-lieu of foreclosure, $158,000 for modifications, and $160,000 for short sales. Similarly, the median value of the properties whose loans the six servicers decided to charge off in lieu of foreclosure in Michigan and Ohio was $25,000. In addition, servicer representatives told us that properties with low values—such as those valued under $30,000—were the most likely candidates for decisions to not pursue foreclosure. Some properties may even have negative values because of the liabilities attached to them. For example, a property in Cleveland valued at $5,000 may have an $8,000 demolition lien levied against it; therefore, it may actually cost more to pay off the demolition lien than the property is worth. Abandoned foreclosures also occurred most frequently on nonprime loans.
Our analysis shows that about 67 percent of all abandoned foreclosures that we identified were associated with nonprime loans. Adjustable rates were also a prominent feature of these loans. Anecdotally, stakeholders also told us that abandoned foreclosures most likely occurred on properties where borrowers had nonprime loans and unstable financing. For instance, an official for a community development corporation in greater Cleveland told us he had seen about 12 instances of abandoned foreclosures in the past year, and many of the borrowers in these cases had two mortgages or subprime loans originated in 2003 or later. The vast majority of abandoned foreclosures involved loans with third-party investors, including loans that were securitized into private label MBS. GSE-purchased loans account for a very small portion of our estimated number of abandoned foreclosures. Although GSE loans made up roughly 63 percent of the data we collected from servicers, they accounted for less than 8 percent of the total abandoned foreclosures during 2008 through the first quarter of 2010. Similarly, we found that only about 0.3 percent of abandoned foreclosures were associated with FHA, VA, and Ginnie Mae insured loans. The potential for abandoned foreclosures to occur on loans associated with Fannie Mae also appears to have been reduced: Fannie Mae representatives told us that, as of April 2010, they have instructed servicers to complete all foreclosures pending Fannie Mae's revision of its charge-off in lieu of foreclosure procedures, both to ensure sound economic decisions and to stabilize neighborhoods. About 66 percent of the total abandoned foreclosures were associated with non-GSE third-party investors. We estimate that a significant portion of these loans were securitized into residential MBS, although data issues precluded us from distinguishing between private label MBS and whole loans held by third parties in some cases.
Abandoned foreclosures, similar to other vacant properties, contribute to various negative impacts for the neighborhoods in which they occur, for local governments, and for homeowners. In addition, because local governments are not aware of servicers' decisions to no longer pursue foreclosure on these properties, they cannot take expedited actions to return the properties to productive use. Properties for which mortgage servicers have abandoned foreclosure proceedings are often left without any party conducting routine care and maintenance, which often results in properties with poor appearance and sometimes unsafe conditions. As a result, abandoned foreclosures can create unsightly and dangerous properties that contribute to neighborhood decline. Academics, representatives of housing and community groups, local government officials, and others in the 12 locations from which we collected information generally told us that, like other vacant and abandoned properties, abandoned foreclosures often deteriorated quickly. They described the types of damage that can result, including structural damage, mold, broken windows, and accumulated trash. Representatives of a national community reinvestment organization described the impact of vacant homes nationwide, from swimming pools filled with dirty, discolored water in Florida to homes in the Midwest that have sustained damage from falling trees that no one removes. A Cleveland official said that, in a 2-year period, about 20 vacant homes in one ward had caught fire and that people used vacant properties to dump trash and asphalt. While touring abandoned foreclosures in some of the neighborhoods in the communities we visited, we observed several vacant and abandoned properties that showed various signs of deterioration, including overgrown grass, accumulated trash or other debris, and broken windows.
Because abandoned foreclosures, by definition, are vacant properties, they create problems for communities similar to those created by other vacant properties. Figure 5 presents pictures of abandoned foreclosures and other vacant properties in several of the communities we visited. Abandoned foreclosures also create problems in communities because homes in foreclosure proceedings that become vacant in certain neighborhoods are often quickly stripped of valuable materials, further depressing their value. Housing and community group representatives, as well as local government officials, told us that looters strip vacant houses of copper piping, wiring, appliances, cabinets, aluminum siding, and other valuables, usually within a few weeks of the time at which the property became vacant, but sometimes within 24 hours. An official from a foreclosure response organization in one Midwestern city told us that a thriving industry of home salvage thieves exists in the city, and an official from a nonprofit housing organization in another Midwestern city told us that junkyards in the area accept things they should not, such as aluminum siding and refrigerators—and this provides an incentive for criminals to strip houses of any materials of potential salvage value. Representatives from a national property maintenance company that operates across the country told us a house can be secured, including having its windows and doors boarded up and entrances locked, only to be broken into and stripped of any valuable parts. Similarly, a local official told us that many houses in Chicago are secured with steel grates, but vandals will bypass these and cut a hole in the roof or brick to gain access—and, once inside, they will rip the house apart by sawing into the walls and cutting out the wiring and piping. A local official in another city reported that several gas explosions have occurred at vacant properties there recently due to vandals stealing pipes while the gas was still flowing to the home.
Staff from a national property maintenance company told us that mortgage servicers contract with them to inspect the properties of homeowners whose loans become delinquent and that, in certain locations, they often have to re-secure properties at every monthly inspection because such properties are constantly being broken into and damaged. In addition, a code enforcement official told us that vandalism had become such an issue for the city that a sign left on a property's door indicating that it had a code violation would serve as a flag to thieves to strip the house. Representatives from two national companies told us that, as a result of vandalism, exposure, and neglect, vacant properties can become worthless. Similar to other vacant and uncared-for properties, abandoned foreclosures also can create public safety concerns. Staff from an entity that advises local governments on community development explained that abandoned foreclosures that remain vacant for extended periods pose significant public health, safety, and welfare issues at the local level. Although unable to identify which properties were abandoned foreclosures, local government officials in Detroit said that safety issues associated with vacant properties were the primary reason they had identified 3,000 vacant properties to be demolished in 2010. Of these, they said that 2,100 had been deemed dangerous and that 400 were considered so hazardous as to constitute emergency situations, noting that a firefighter had recently been killed when he entered a property and a floor caved in. Likewise, in Fort Myers, Florida, officials told us that 1,200 to 1,300 of the city's 1,600 vacant and abandoned properties were considered unsafe. A Cleveland official told us that, when housing inspectors discovered a vacant property with a code violation, the city was compelled to act to address the potential danger, or it might be liable for any subsequent injuries.
Officials from this same office further noted that the public money used to fund the land bank—which may take in unsafe and abandoned properties—might otherwise have been used for civic purposes, such as teacher salaries. Like other vacant properties, abandoned foreclosures also contribute to neighborhood decline by providing venues for a wide variety of crimes. Local government and other officials told us that vacant and abandoned properties were subject to break-ins, drug activity, prostitution, arson, and squatting, among other things. A study of the City of Chicago noted that some vacant building fires were the result of arson by owners seeking to make insurance claims and that others were started by squatters making fires to keep warm. Other empirical studies have found relationships between vacant or foreclosed properties and crime. For example, a national organization representing municipal governments reports that crime is moderately correlated with vacant and abandoned properties, deteriorating housing, and high disinvestment in a neighborhood. Another study of central-city Chicago found that a 2.87 percentage point increase in the foreclosure rate would yield a 6.68 percent increase in the rate of violent crimes such as assault, robbery, rape, and murder. The author of this study explains that the weaker positive relationship between foreclosure and property crimes, such as theft and vandalism, may be due to underreporting of such crimes in lower-income areas. Another impact of abandoned foreclosures is that, like other vacant and uncared-for properties, they negatively affect the value of surrounding properties. Although property values have fallen sharply in many regions around the country as part of the recent economic recession, many of those we interviewed said that vacant properties and abandoned foreclosures compounded this problem.
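The central-city Chicago finding cited above can be restated as a rough per-point figure. Treating the relationship as linear is our simplification for illustration, not the study's model, and the 1.5-point scenario below is hypothetical.

```python
# Figures from the study cited above: a 2.87 percentage-point increase in
# the foreclosure rate associated with a 6.68 percent increase in violent
# crime. The linear scaling is our own simplifying assumption.

foreclosure_rate_increase = 2.87   # percentage points, from the study
violent_crime_increase = 6.68      # percent, from the study

# Implied violent-crime increase per percentage point of foreclosure rate
per_point_effect = violent_crime_increase / foreclosure_rate_increase

# Hypothetical scenario: a 1.5-point rise in the foreclosure rate
implied = per_point_effect * 1.5
```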
One local official explained that once a few properties in a neighborhood became vacant, the negative effects tended to spiral and lead to further foreclosures and vacancies, particularly in low-income neighborhoods. In addition, empirical studies have found that vacant and abandoned properties, together with foreclosures, can cause neighboring property values to decline. For example, using data from 2006 in Columbus, Ohio, a recent study found that each vacant property within 250 feet of a nearby home could decrease its sales price by about 3.5 percent, whereas the impact from each foreclosure was less severe but extended farther into the neighborhood. In addition, an author for a federal research organization reviewed several research papers on foreclosure's price-depressing impact on sales of nearby properties and reported that, according to the literature, this impact can range from as little as 0.9 percent to as much as 8.7 percent. Because local government officials are not aware that foreclosures are no longer being pursued, these properties remain vacant and contribute to neighborhood decline for longer periods of time. Instead of learning that servicers are charging off loans in lieu of foreclosure and will not assume responsibility for maintenance, local government staff responsible for enforcing housing codes told us they typically find out about vacant and abandoned properties through citizen complaints, vacant property registration ordinances, or on their own initiative. They noted that, by the time they become aware of a property for which a servicer is no longer taking responsibility, the property may have been vacant and deteriorating for months or years, which exacerbates the overall neighborhood decline. Several stakeholders noted that, if local governments were made aware of properties for which servicers were charging off the loans in lieu of foreclosure, they might be able to take more timely action.
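As a rough illustration of the Columbus, Ohio, finding cited above (each vacant property within 250 feet associated with about a 3.5 percent lower sales price), the per-property discount can be applied once for each nearby vacancy. The multiplicative compounding and the dollar figures are our own assumptions, not the study's.

```python
def discounted_price(base_price, nearby_vacant, discount=0.035):
    """Apply the per-property price discount once for each nearby vacancy.

    The 3.5 percent default comes from the Columbus study cited above;
    compounding the discount multiplicatively is our own assumption.
    """
    return base_price * (1 - discount) ** nearby_vacant

# Hypothetical $100,000 home with three vacant properties within 250 feet
price = discounted_price(100_000, 3)
```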
For example, they could take expedited actions to acquire the vacant property—such as through the use of a land bank—and return it to productive use. Abandoned foreclosures also increase costs for local governments because they must expend resources to inspect properties and mitigate their unsafe conditions. Within local communities, code enforcement departments are largely responsible for ensuring that homeowners maintain their properties in accordance with local ordinances regarding acceptable appearance and safety. In cases in which such ordinances are not being complied with, code enforcement departments can typically fine violating property owners or take actions themselves, such as making repairs or boarding up doors or windows and billing the property owner for these costs. However, code enforcement and other officials told us that it is often difficult to locate the owners of abandoned foreclosures because they have left their homes; they also told us that it is difficult to locate current mortgage lien holders, who generally have an interest in maintaining the properties. Officials said that one reason identifying lien holders is difficult is that they often fail to record changes in ownership with local jurisdictions. To address this challenge, the code enforcement manager of one of the cities we visited told us that he had made one of his field staff a full-time "foreclosure specialist" whose job it was to research owners and lien holders of foreclosed properties with identified code violations. The new foreclosure specialist told us that he uses several different avenues to find property owners and lien holders, including county court records, local realtors, property managers, property maintenance companies, and the Mortgage Electronic Registration Systems (MERS®). In addition, another code enforcement manager told us that he had developed a team of investigators trained in skip tracing to increase the division's ability to identify and locate violators.

Local governments are often burdened by having to pay for the maintenance or demolition of abandoned foreclosures. In the interest of public safety, code enforcement departments will often take action when they cannot identify or contact another responsible party. Researchers tallied total costs of over $13 million for code enforcement activities to address and maintain all vacant and abandoned properties for eight Ohio cities in 2006. In addition, the City of Cleveland, Ohio, has budgeted over $8 million of federal grant money for demolition and has already expended nearly $5 million. Recent literature, as well as our interviews with local officials, further revealed the burden some local governments are experiencing due to an increase in the amount of vacant and abandoned properties: A 2005 report estimated the direct municipal costs of an abandoned foreclosure in the City of Chicago to be $19,227—and, if it is a severe case with a fire, the cost can be as high as $34,199. The same study reported that the cost of boarding up a single-family home one time was $900 but noted that, because homes often must be boarded up multiple times, the true cost was $1,445. In a 2008 study, the City of Baltimore reported that police and fire services showed an annual cost increase of $1,472 for each vacant and unsafe property on a block. Code enforcement officials for a city in Florida reported that they spent over $120,000 to mow lawns of vacant properties in 2008; this was up from less than $30,000 in 2006 and prior years. Code enforcement officials for another city in Florida told us they have $850,000 in outstanding code invoices for boarding up or mowing lawns for abandoned properties.
Code enforcement officials for a county in Florida reported that, prior to 2007, the number of code enforcement cases against properties in foreclosure was not significant enough to warrant tracking; however, in 2008, after the department began to identify and track these properties because of the noticeable increase in citizen complaints, statistics revealed that 25 percent of all their cases involved properties in foreclosure—and, as of May 2010, they had 443 active cases against properties in foreclosure. A Cleveland official reported an approximately $80,000 increase in personnel costs for code enforcement over the prior year. She said these costs were related to hiring additional staff to support existing staff with research, documentation, and court testimony. When local governments maintain or demolish properties, they typically may place liens against the properties for the associated costs. In some jurisdictions, these liens may have the same first-priority status as tax liens and may, therefore, be relatively easily recovered, but in other jurisdictions these liens may have lower priority. In one jurisdiction, we were told that code enforcement liens were wiped out when the foreclosure was completed. A case study of Chicago estimated that between 2003 and 2004 the city recovered only about 40 cents on each dollar it spent for demolition. Abandoned foreclosures also burden local governments with reduced property tax revenues. Local jurisdictions directly lose tax revenue from vacant and abandoned properties in two ways: (1) property taxes owed by the property owner sometimes go unpaid and are not recouped, and (2) the taxable value of a property is lost when a structure is demolished. In addition, abandoned foreclosures contribute to falling housing values, which erode the property tax base. For example, researchers calculated that, in 2006, the City of Cleveland lost over $6.5 million due to tax delinquency on vacant and abandoned structures, and over $409,000 because structures were demolished.

Moreover, one local official told us that every 1 percent decline in home values costs the City of Cleveland $1 million in tax revenue. Abandoned foreclosures also contribute to an increased demand for city services. As discussed, abandoned foreclosures result in an increased demand for code enforcement-related services—including demolition, boarding of windows, removing trash, mowing the lawn, and a range of other activities intended to keep the unit from becoming an eyesore. Abandoned foreclosures also result in a variety of other municipal costs, including increased policing and firefighting, building inspections, legal fees, and increased demand for city social service programs. Abandoned foreclosures also increase the difficulty of transferring the property to another owner, which can increase the potential for the property to contribute to problems within a community. If a borrower remains in the home or in contact with the servicer, title to the property can be transferred to a new owner through short sales or deed-in-lieu of foreclosure actions. If homeowners vacate their properties and cannot be reached, these alternative means of transferring title cannot occur. However, in these cases, the servicer can complete the foreclosure process, through which title is transferred to a new owner—either a third-party buyer or the lien holder, in which case the property is then held in its or the servicer's real estate-owned inventory. However, when the servicer abandons the foreclosure, this transfer of title does not occur.
Without this transfer, abandoned foreclosures may remain vacant for extended periods of time, with recent media and academic reports labeling such properties as being in "legal limbo" or having a "toxic title." One academic we interviewed said abandoned foreclosures result in property titles that lack transparency and cannot be easily transferred; another academic told us that uncertainty about a property's ownership and status may make it hard for neighborhood groups or cities to determine what actions can be taken to dispose of or sell such property. According to a recent report by a national rating agency, most properties associated with charged-off loans will ultimately be claimed by municipalities for back taxes, which, according to stakeholders, may not be an efficient process. Abandoned foreclosures can also create confusion among borrowers over the status of their properties and their responsibilities for such properties. According to representatives of counseling agencies, community groups, and some of the homeowners we interviewed, borrowers are often surprised to learn that the servicer did not complete the foreclosure and take title to the house—and that they still own the property and are responsible for such things as maintenance, taxes, and code violations. A nonprofit law firm representative said that borrowers who thought that they had lost their homes through foreclosure were sometimes brought to housing court for code violations. For example, a court record from the City of Buffalo indicates that one individual appeared in court to address code violations 3 years after receiving a judgment of foreclosure. According to the record, after the judgment of foreclosure, there was no sale of the property. While in court, this individual claimed that she did not believe that she still owned the property.
Although creating various negative impacts on neighborhoods and communities, abandoned foreclosures have not significantly affected state and federal foreclosure-related programs because most of these programs try to prevent foreclosure and some apply only to properties still occupied by homeowners. In response to the surge in mortgage foreclosures that began in late 2006 and continues today, several states created task forces to address the crisis. According to a 2008 report by a national trade association, the main objective of almost every task force created as of March 2008 was to get practical help directly to "at risk" homeowners—for example, by creating consumer hotlines and developing outreach and educational programs designed to encourage homeowners to get counseling. In addition, we spoke with a legislative analyst for a national organization who told us that over the past 3 years state legislatures have enacted many laws focusing on such topics as payment assistance and loan programs, regulating foreclosure scam artists, ensuring homeowners and tenants receive proper foreclosure notice, shortening or lengthening the foreclosure process, and implementing mediation or counseling programs. The federal government has also implemented several foreclosure-related programs, most of which focus on foreclosure prevention and require that borrowers live in their homes. For example, the federal Home Affordable Modification Program (HAMP) is designed to help borrowers avoid foreclosure and stay in their homes by providing incentives for servicers to perform loan modifications; however, HAMP requires as a precondition that borrowers currently live in their homes. Under the Neighborhood Stabilization Program (NSP), the term "abandoned" was originally defined as a property that had been foreclosed upon and was vacant for at least 90 days.
HUD expanded the definition to include properties where (a) mortgage, tribal leasehold, or tax payments are at least 90 days delinquent; (b) a code enforcement inspection has determined that the property is not habitable and the owner has taken no corrective actions within 90 days of notification of the deficiencies; or (c) the property is subject to a court-ordered receivership or nuisance abatement related to abandonment pursuant to state, local, or tribal law or otherwise meets a state definition of an abandoned home or residential property. Therefore, there is no longer a programmatic barrier preventing NSP grantees from acquiring abandoned foreclosures. On behalf of GAO, a national nonprofit organization e-mailed structured questions to 25 NSP grantees, including NSP 1 and NSP 2 grantees, and their subrecipients. Various servicer practices may be contributing to the number of abandoned foreclosures. These practices include initiating foreclosure without obtaining updated property valuations and obtaining valuations that did not always accurately reflect property or neighborhood conditions or other costs, such as delinquent taxes or code violation fines. By not always obtaining updated property valuations at foreclosure initiation, servicers appeared to increase the potential for abandoned foreclosures to occur. As described earlier, officials from the six servicers we interviewed, which together service about 60 percent of the nation's home mortgages, told us that after a certain period of loan delinquency has passed—usually around 90 days—they make a determination about whether to initiate foreclosure. Representatives of servicers told us they take into account various information about the property when deciding whether to initiate foreclosure, and some servicers conduct an equity analysis on certain loans to determine whether the expected proceeds from a sale will cover foreclosure costs.
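The equity analysis described above can be sketched as a simple decision rule: pursue foreclosure only if expected sale proceeds exceed the costs of completing it. This is an illustrative sketch; the cost categories, the sale discount, and all dollar amounts below are hypothetical, and actual servicer analyses vary.

```python
def worth_foreclosing(est_value, legal_costs, maintenance_costs,
                      delinquent_taxes, sale_discount=0.15):
    """Return True if expected net proceeds from a foreclosure sale
    exceed the costs of completing the foreclosure.

    All parameters are hypothetical stand-ins for the kinds of inputs
    such an analysis might use, not any servicer's actual model.
    """
    expected_proceeds = est_value * (1 - sale_discount)
    total_costs = legal_costs + maintenance_costs + delinquent_taxes
    return expected_proceeds > total_costs

# A low-value property (near the $25,000 median charged-off value reported
# above for Michigan and Ohio) can fail the test once costs are counted,
# while a higher-value property passes with the same cost assumptions.
low_value = worth_foreclosing(25_000, 8_000, 6_000, 9_000)
high_value = worth_foreclosing(160_000, 8_000, 6_000, 9_000)
```

The design point is that the decision turns on the property's value relative to fixed completion costs, which is consistent with the report's observation that charge-offs in lieu of foreclosure cluster on low-value properties.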
However, the valuations used in these analyses might be outdated at the time of foreclosure initiation, and staff from four of the six servicers told us that they did not always obtain updated information on the value of the property at the time they conducted this analysis and initiated foreclosure. The representatives of one servicer told us that the company performs an equity analysis on loans in its own portfolio before foreclosure initiation. However, for loans serviced for Fannie Mae, Freddie Mac, or third-party investors, this servicer follows the applicable servicing agreement or guidance, which may not require such analyses or updated property valuations. Instead, the company initiates foreclosure automatically when one of these loans reaches a certain delinquency status. Only two of the six servicers we interviewed reported updating property valuations on all loans before initiating foreclosure. Even when servicers obtain updated property valuations, this information does not always reflect actual property or neighborhood conditions, which can also increase the likelihood of servicers commencing foreclosure but then abandoning it. Representatives of the six servicers we interviewed said that property inspections begin in the early stages of delinquency and continue on a regular basis, but that information collected during inspections that is relevant to the resale value of a property, such as vacancy status and property damage, is not used in developing property valuations. Most of the servicers we interviewed reported using automated valuation models (AVM) to estimate property values, which do not necessarily take property-specific conditions into consideration, and said they do not incorporate information on property and neighborhood conditions obtained from inspections into their valuations.
Simply using a broker price opinion (BPO) or AVM without consideration of up-to-date property or neighborhood conditions may result in abandoned foreclosures because the valuation may not reflect the actual resale value and accurate expected proceeds from a foreclosure sale. Another servicer practice that appeared to increase the potential for an abandoned foreclosure was that servicers generally were not considering local conditions that can affect property values prior to initiating foreclosure. Our interviews with the six servicers indicated that they did not always adjust property valuations to take into consideration potential steep declines in value due to factors specific to neighborhoods or city blocks. Staff from most of the servicers we interviewed reported that in some areas a property that was occupied and well-maintained when foreclosure was initiated could become vacant, be vandalized, and decline in value. Similarly, local government officials said that homes with resale value could be stripped of raw building materials during the foreclosure process, leaving them practically worthless. As previously discussed, representatives of community groups and local governments told us that properties are sometimes vandalized within 24 hours of becoming vacant. In Detroit, for example, according to officials, property values can decline seriously once a home becomes vacant because of vandalism and rapid decay. Data from one property maintenance company contracted to inspect and secure homes undergoing foreclosure indicated that 29 percent of the properties it oversaw nationwide had some property damage in the 6 months from January to June 2010. In Detroit, about 54 percent of its properties had incurred damage. In addition, not considering other costs associated with a property, such as local taxes and potential code violation fines, before initiating foreclosure can increase the likelihood that a foreclosure would be abandoned.
For example, local taxes owed or code violations and fines can add significant costs to the foreclosure process. Servicers told us that they may abandon foreclosures because of the amount of tax owed on the property. Tax liens are commonly placed on delinquent properties when borrowers are unable to pay property taxes. Unattended or damaged properties can also accumulate local code violation fines that decrease the net proceeds the servicer will gain from completing a foreclosure. These fines vary widely, but in some cities fines may accrue while a property is in delinquency and foreclosure, and over time the assessed fines can exceed a property’s value. Because the unpaid taxes and code violation fines that accumulate during foreclosure reduce the net proceeds the servicer would realize, they can encourage servicers to abandon the foreclosure. In some cases, the circumstances that led servicers to initiate but then abandon a foreclosure appeared to be ones that could not have been anticipated at the time the decision to initiate foreclosure was made. For example, property inspections and valuations usually include only information about external conditions of properties, potentially leaving internal damage or conditions such as lead paint or contaminated drywall undetected. Addressing these internal problems could be costly. Unexpected fires or other natural disasters can also leave properties with very low values. If such damage is discovered or occurs after foreclosure was initiated, servicers may decide that completing the foreclosure is not warranted. When servicers do not have updated or complete information about property and neighborhood values and conditions before initiating foreclosure, the likelihood that they will commence and then abandon foreclosures increases.
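The net-proceeds calculus described above, in which expected sale proceeds are weighed against foreclosure costs, delinquent taxes, and code violation fines, can be sketched as a simple calculation. The function names, dollar figures, and break-even decision rule below are hypothetical illustrations for this report's reasoning, not any servicer's actual model:

```python
def expected_net_proceeds(estimated_value, foreclosure_costs,
                          delinquent_taxes, code_violation_fines):
    """Expected proceeds from a foreclosure sale after subtracting the
    costs the servicer would absorb to complete the foreclosure.
    (Hypothetical illustration, not an actual servicer model.)"""
    return (estimated_value - foreclosure_costs
            - delinquent_taxes - code_violation_fines)

def worth_completing(estimated_value, foreclosure_costs,
                     delinquent_taxes, code_violation_fines):
    """A naive break-even rule: complete only if net proceeds are positive."""
    return expected_net_proceeds(estimated_value, foreclosure_costs,
                                 delinquent_taxes, code_violation_fines) > 0

# A low-value property where liens and fines swamp the sale price:
# 8,000 - 5,000 - 3,000 - 2,000 = -2,000, so completing loses money.
print(worth_completing(8_000, 5_000, 3_000, 2_000))    # False
# A higher-value property comfortably clears its costs.
print(worth_completing(150_000, 20_000, 5_000, 0))     # True
```

The report's point is that servicers often run this kind of analysis with a stale `estimated_value`; rerunning it with an updated valuation that reflects vacancy, vandalism, and accrued fines could flip the decision before, rather than after, foreclosure is initiated.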
Representatives of servicers said that they did not always obtain updated valuations before initiating foreclosure because they did not think it was necessary or because they were not required to do so. Instead, they generally obtained more current information only after foreclosure initiation, such as when preparing for the foreclosure sale. In cases where this valuation indicates that the value of the property is insufficient to justify completing the foreclosure process, servicers generally stop the foreclosure and charge off the loan in lieu of foreclosure. However, by that time the property may already be vacant and negatively impacting the neighborhood. As previously discussed, our servicer data indicate that charge-offs in lieu of foreclosure that occurred after foreclosure was initiated were associated with a higher rate of vacancy than charge-offs that occurred prior to foreclosure initiation. Academics, local government officials, community groups, servicers, and others expressed mixed views on whether mortgage servicers have financial incentives to initiate foreclosure even in cases in which they were unlikely to complete the process. For example, accounting requirements for mortgage loans do not appear to provide incentives for servicers to initiate foreclosures but then not complete them. First, most mortgage loans that servicers are managing are being serviced on behalf of other owners. As a result, any accounting requirements applying to such loans do not affect the servicer’s financial statements because these loans are not the servicer’s assets. However, servicers that service loans for other owners do carry the expected value of the servicing income they earn on such loans on their own financial statements as an asset. The reported value of this servicing rights asset would be reduced if a serviced loan is charged off and no more servicing income is expected from it.
However, this reduction would occur regardless of whether foreclosure has been initiated. If the servicer of a mortgage loan is also the holder (owner) of the loan, accounting requirements also do not appear to provide an incentive to initiate foreclosure. For the six servicers from whom we obtained data, 7 percent of the loans were owned by the servicing institution, meaning accounting decisions made by the servicer directly affect the institution financially. For these loans, bank regulatory rules require servicers to mark the value of delinquent loans down to their collateral value (or charge off the loan) after the loan is 180 days past due, regardless of whether foreclosure has been initiated. As a result, servicers cannot avoid recognizing the loss by, for example, abandoning the foreclosure, because the loan’s loss of value is already reflected in their accounting statements. Furthermore, financial institutions holding loans in their own portfolio must develop reserve accounts for expected losses on their books. Thus, they have to anticipate any declines in property values for loans in their portfolio and start setting aside funds to cover any losses at specific points in the delinquency cycle. Whether the property is carried to foreclosure sale or charged off, the loss has already been reflected in their loan value accounts. For private-label securitized loans—loans sold to private investors and serviced in pools—servicers also do not appear to have incentives to delay or abandon foreclosure in order to help investors postpone accounting for losses on those securities. According to OCC officials, a single charge-off for a loan held in a pool would not necessarily lead to a devaluation or write-down of the value of the overall pool of loans. In addition, they said that whether the value of a security is written down depends on several factors, including overall losses to the pool, liquidity, and interest rate changes.
Thus, investors have some discretion under accounting guidance in deciding when to write down securitized assets. Further, public accounting standards require investors holding mortgage-backed securities to set aside loss reserves and write down the value of impaired assets. Therefore, abandoning or postponing foreclosure completion would be unlikely to provide an advantage to the security. Some academics and local government officials we interviewed were concerned that servicers may have an incentive to initiate foreclosures, even though they might later abandon the process, in order to continue profiting from servicing mortgages. However, servicers’ and experts’ descriptions of servicing practices called such incentives into question. Servicers can derive part of their revenue from imposing fees on borrowers who are past due with payments, and do not need to forward this revenue to investors. Therefore, some stakeholders suggested servicers might initiate foreclosure in an effort to accrue late fees and other charges associated with servicing the loan during the foreclosure process. In addition, some stakeholders suggested that servicers might continue earning income from other financial interests they might own on the property, such as a second-lien mortgage. However, five of the six servicers we interviewed reported that they stopped charging fees once a loan enters foreclosure because assessed fees are unlikely to be fully collected on loans in foreclosure. In addition, servicers might not continue yielding profits on second-lien mortgages because, according to a 2005 study, second liens were much less prevalent on subprime first-lien mortgages, which were often found in areas with very low housing values, such as Detroit and Cleveland, than in high-price areas, such as California. Finally, servicers and other experts told us that servicers do not have to initiate foreclosure in order to stop advancing payments on loans.
Sean Dobson and Laurie Goodman, Mortgage Modifications: Where Do We Go From Here, Amherst Securities Group LP (July 2010).
Charles A. Calhoun, The Hidden Risks of Piggyback Lending (Annandale, Va.: June 2005).
In addition, government and private mortgage insurance and guarantees require that foreclosure be completed before claims are paid. For example, FHA mortgage insurance and VA guarantees, which cover a portion of potential losses from loan defaults, require a claimable event, such as a foreclosure sale, short sale, or deed-in-lieu of foreclosure, before servicers can collect on a claim. Representatives of mortgage insurers also said that they could not pay an insurance claim on an abandoned foreclosure because the bank did not hold the title. Similarly, the GSEs may provide servicers incentives to complete foreclosures in order to receive reimbursements. Fannie Mae requires servicers to submit final requests for reimbursement of advances after the foreclosure sale and after any claims have been filed with other insurers or guarantors. Mortgage servicers’ foreclosure activities were not always a major focus of bank regulatory oversight, although federal banking regulators have recently increased their attention to this area, including the extent to which servicers were abandoning foreclosures. Various organizations have regulatory responsibility for most of the institutions that conduct mortgage servicing, but some of these institutions do not have a primary federal or state regulator. According to industry data, OCC—which oversees national banks—is the primary regulator for banks that service almost two-thirds of loans serviced by the top 50 servicers. The Federal Reserve oversees entities that were affiliated with bank holding companies or other state member banks that represented about 7 percent of these loans. Entities that are not chartered as or owned by a bank or bank holding company accounted for about 4 percent of the top 50 servicers’ volume.
Some states require mortgage servicers (including state-chartered banks) to register with the state banking department, according to state banking supervisors we interviewed. These supervisors also told us that most banks that were chartered in their states did not service mortgage loans. According to our analysis, only a few of the top 50 servicers were state-chartered banks that were not members of the Federal Reserve System. According to our interviews with federal banking regulators, mortgage servicers’ practices, including whether they were abandoning foreclosures, have not been a major focus of their supervisory guidance in the past. The primary focus of these regulators’ guidance is on activities undertaken by the institutions they oversee that create significant risk of financial loss for the institutions. Because a mortgage servicer is generally managing loans that are actually owned or held by other entities, the servicer is not exposed to losses if the loans become delinquent or if no foreclosure is completed. As a result, the extent to which servicers’ management of the foreclosure process is addressed in regulatory guidance and consumer protection laws has been limited and uneven. For example, guidance in the mortgage banking examination handbook that OCC examiners follow when conducting examinations of banks’ servicing activities notes that examiners should review the banks’ handling of investor-owned loans in foreclosure, including whether servicers have a sound rationale for not completing foreclosures on time or not meeting investor guidelines. In contrast, the manual Federal Reserve examiners use to oversee bank holding companies contained only a few pages related to mortgage servicing activities, directing examiners to review the income earned from servicing fees, but did not otherwise address foreclosure practices in detail.
In addition, until recently, the extent to which these regulators included mortgage servicing activities in their examinations of institutions was also limited. According to OCC and Federal Reserve staff, they conduct risk-based examinations that focus on areas of greatest risk to their institutions’ financial positions as well as some other areas of potential concern, such as consumer complaints. Because the risks from mortgage servicing generally did not indicate the need to conduct more detailed reviews of these operations, federal banking regulators had not regularly examined servicers’ foreclosure practices on a loan-level basis, including whether foreclosures are completed. For example, OCC officials told us their examinations of servicing activities were generally limited to reviews of income that banks earn from servicing loans for others and did not generally include reviewing foreclosure practices. Staff from the federal banking regulators noted that no federal or state laws or regulations require that banks complete the foreclosure process; therefore, banks are not prohibited from abandoning foreclosures. In addition, many of the federal laws related to mortgage banking, such as the Truth in Lending Act (TILA), are focused on protecting borrowers at mortgage origination, and Federal Reserve officials reported that none of the federal consumer protection laws specifically addressed the process for foreclosure. As a result, the Federal Reserve staff who conduct consumer compliance exams also have not focused on how servicers interact with borrowers during the default and foreclosure process. Further, OCC officials said that, even if examiners observed banks they supervised abandoning the foreclosure process, the practice would not negatively impact the bank’s overall rating for safety and soundness. These officials said that a bank’s need to protect its financial interest might override concerns about walking away from a home in foreclosure.
However, in recognition of the ongoing mortgage crisis in the United States, staff from OCC and the Federal Reserve told us that their examiners have been focusing on reviewing servicers’ loan modification programs, including those of servicers participating in the federal mortgage modification program, HAMP. As potential problems with foreclosure-related processes and documentation at major servicers emerged, these two regulators have also increased examination of servicer foreclosure practices. OCC staff responsible for examinations at one of the large national banks that conducts significant mortgage servicing activities told us that they had obtained data on loans that were charged off without foreclosure being pursued and found that the practice was very rare and typically involved loans on low-value properties. OCC examiners acknowledged that abandoned foreclosures—due to their association with neighborhood crime and blight—could pose a reputation and litigation risk to the bank. For example, we found that some servicers and lenders have been sued by various municipalities over their servicing or lending activities. The Federal Reserve has also recently increased its attention to mortgage servicing among the institutions over which it has oversight responsibility. In the past, the Federal Reserve has not generally included nonbank subsidiaries of bank holding companies that conduct mortgage servicing in its examination activity because the activities of these entities were not considered material risks to the bank holding company. However, in 2007, the Federal Reserve announced a targeted review of consumer compliance supervision at selected nonbank subsidiaries that service loans.
Additionally, in October 2009, the Federal Reserve began a loan modification initiative, including on-site reviews, to assess whether certain servicers under its supervisory authority—including state member banks and nonbank subsidiaries of bank holding companies—were executing loan modification programs in compliance with relevant federal consumer protection laws and regulations, individual institution policies, and government program requirements. In addition, as part of its ongoing consumer compliance examination program, the Federal Reserve incorporated loan modification reviews into regularly scheduled examinations of state member banks, as appropriate. Federal Reserve officials noted that as of October 2010 these reviews and examinations were still in progress; however, initial work identified two institutions that were engaging in abandoned foreclosure practices. Federal Reserve officials reported that, in general, no federal regulation or law explicitly requires that servicers notify borrowers when they decide to stop pursuing a foreclosure after the foreclosure process had been initiated. Nevertheless, Federal Reserve staff instructed the servicers to do so as a prudent banking practice. According to Federal Reserve officials, the institutions agreed to do so. Because abandoned foreclosures do not necessarily violate any federal banking laws, supervisors did not take any actions against the institutions. Other federal and state regulators that review servicers’ activities also reported having little insight into servicers’ foreclosure practices and decisions to abandon foreclosures, particularly those with non-GSE loans, which account for the greatest numbers of abandoned foreclosures. Officials from the Securities and Exchange Commission (SEC), which receives reports on publicly traded residential mortgage-backed securities, told us that they did not examine servicers’ policies or activities for these securitized assets.
Furthermore, SEC’s accounting review of publicly traded companies engaged in mortgage servicing included aggregate trends in foreclosure activity but not incomplete foreclosures on individual loans. While the Federal Housing Finance Agency’s (FHFA) Federal Property Manager’s Report includes data on charge-offs in lieu of foreclosure, FHFA does not routinely examine whether Fannie Mae and Freddie Mac are abandoning foreclosures. Like the banking regulators, FHFA officials said they had focused most of their oversight on the institutions’ loan modification and pre-foreclosure efforts. In addition, the Federal Trade Commission (FTC) may pursue enforcement actions against nonbank institutions that violate the FTC Act or consumer protection laws. However, FTC staff told us they did not think that either the unfair and deceptive acts and practices provision of the FTC Act or the Fair Debt Collection Practices Act would apply, as a general matter, to an institution that walked away from a home in foreclosure. State banking regulators that we interviewed said that they conduct little oversight of servicers’ foreclosure practices given the limited number of state-chartered banks that conduct mortgage servicing activities. However, several examiners and industry association officials we interviewed acknowledged the need to obtain further information about the foreclosure process and improve their examination process for nonbank mortgage servicers. Other entities that review servicers’ activities also do not review servicers’ foreclosure practices or decisions to abandon foreclosures. Representatives from private rating agencies that evaluate mortgage servicers told us that although they review servicers’ handling of loans in default and the overall average length of time servicers take to complete foreclosure, they do not track specific loans to see if foreclosure was completed because an incomplete foreclosure would not be a specific trigger for downgrading a security’s rating.
In addition, representatives of institutions that serve as trustees for large pools of assets in mortgage-backed securities (MBS) told us that they sought to ensure that servicers forwarded payments to investors and noted that trustees did not provide management oversight of servicers’ decisions on how to handle loans. We identified various actions that some communities are taking to reduce the likelihood of abandoned foreclosures occurring or to reduce the burden such properties create for local governments and communities. Communities dealing with abandoned foreclosures may benefit from implementing similar actions, but they may need to weigh the appropriateness of the various actions for their local circumstances, as these actions can require additional funding, have unintended consequences, and may not be appropriate for all communities. In addition, these actions generally were designed to address vacant properties overall; therefore, they may not fully address the unique impacts of abandoned foreclosures. Local government officials, community group representatives, and academics told us that borrowers often leave their homes before the foreclosure sale even though they are entitled to stay in their homes at least until the sale. Although borrowers may leave for a variety of reasons, we consistently heard that many borrowers leave because they believe that servicers’ initial notices of delinquency and foreclosure initiation mean that they must immediately leave the property. For example, a representative of a counseling group in Chicago told us that many people, especially the elderly and non-native English speakers, do not understand notices that they receive from servicers and think that they are being told to leave their homes. Some jurisdictions are taking steps to increase borrowers’ awareness of their rights during foreclosure through counseling. A variety of counseling and mediation resources are already available to borrowers.
For example, HUD sponsors housing counseling agencies throughout the country to provide free foreclosure prevention assistance and provides referrals to foreclosure avoidance counselors. In addition, according to a national research group, at least 25 foreclosure mediation programs were in operation in 14 states across the country as of mid-2009 to encourage borrowers and servicers to work together to keep people in their homes and avoid foreclosure. Officials from local governments and community groups, servicers, and an academic noted that increasing the use, visibility, and resources of counseling efforts could provide avenues to educate borrowers about their rights to remain in their homes during the foreclosure process and prevent vacancies. To increase the visibility and use of counseling resources, the state of Ohio implemented a hotline to help refer borrowers to counselors and a Web site to provide information about foreclosure. In addition, local officials have credited a recent law in Michigan with helping to educate borrowers about their rights during the foreclosure process. The Michigan law allows borrowers a 90-day delay in the initiation of foreclosure proceedings if they request a meeting with a housing counselor and a servicer representative to try to arrange a loan modification. Representatives of community groups, local governments, and servicers were generally supportive of efforts to educate borrowers about their rights during foreclosure, and a recent study has demonstrated the effectiveness of such counseling in keeping people in their homes. In our interviews, representatives of a servicer and a local government, as well as a researcher, noted that counseling could be more effective at educating borrowers about their rights than servicers’ efforts because borrowers might be more willing to talk to a counselor than to a bank representative.
Representatives of a law firm also noted that local staff might reach more borrowers and achieve better results than bank representatives because local individuals have a better understanding of local conditions and homeowners can work with the same individual rather than with bank representatives who change with each contact. Community group and servicer representatives also noted that counseling is most effective at keeping people in their homes if it is offered soon after a borrower first becomes delinquent because borrowers are more likely to leave their homes later in the foreclosure process. In addition, a November 2009 study found that homeowners who received counseling were about 1.6 times more likely than those who did not to get out of foreclosure and avoid a foreclosure sale—possibly allowing them to remain in their homes. Local community representatives noted that increased counseling may not completely prevent abandoned foreclosures for several reasons. First, counselors cannot reach every borrower needing assistance. For example, officials from a community group and counseling agencies said that some borrowers might not be aware that counseling is available or might be too embarrassed about their situation to seek assistance. Second, the quality of counseling may limit its effectiveness. Researchers noted that the quality of counseling can be uneven and that organizations that are not HUD-approved, as well as foreclosure rescue scams, could mislead borrowers about their rights. Third, representatives of research and advocacy groups we interviewed noted that increased funding for counseling efforts would allow counseling agencies to expand and help more homeowners. Another action that some local governments are taking to address the problems of vacant properties, including abandoned foreclosures, is to require servicers to register vacant properties.
As previously discussed, one of the major challenges confronting code enforcement officials is identifying those responsible for maintaining vacant properties. Vacant property registration systems can attempt to address this problem by requiring servicers to provide the city with specific contact information for each vacant property they service. According to a national firm that contracts with servicers to maintain properties, nearly 288 jurisdictions had enacted vacant property registration ordinances as of February 2010. Although the structures of these ordinances vary, researchers generally classify them into two types. The first type of system tracks all vacant and abandoned properties and their owners. For example, among the cities we studied, Baltimore, Maryland, has implemented this type of registration system. The second type of system attempts to hold the lender and servicer responsible for maintenance of vacant properties during the foreclosure process. Although the Fannie Mae and Freddie Mac uniform mortgage documents typically give servicers the right to secure abandoned properties and make repairs to protect property values, they do not necessarily obligate them to do so. The cities of Chula Vista, California; Cape Coral and Fort Myers, Florida; and Chicago, Illinois, for example, have implemented this second type of ordinance. New York state also enacted a similar law statewide. According to some local officials and researchers, the contact information in vacant property registration systems makes it easier for local code enforcement officials to identify the parties responsible for abandoned foreclosures, and holding mortgage owners accountable for vacant properties can reduce the negative impact of these properties on the community.
For example, local officials we interviewed in some cities with vacant property registries said that most owners complied with their city’s registry requirements and noted that the registries had been effective at providing contacts for officials to call to resolve code violations on vacant properties. Several stakeholders, including local officials, researchers, and representatives of a community group, also recommended the type of vacant property ordinance that holds servicers accountable for maintaining vacant properties during foreclosure. They noted that these types of ordinances could provide servicers with needed incentives to keep up vacant properties to avoid incurring additional costs and could help them in determining whether to initiate foreclosure. Local officials and industry representatives told us that, while vacant property registration systems can help local governments identify some owners, they might not capture all owners, and some servicers found certain requirements overly onerous and beyond their legal authority. Local officials in a couple of cities and one servicer representative told us that these systems might not capture all owners because those who did not want the responsibility of maintaining certain properties would choose not to register. Further, systems that do not require that properties be registered until after the foreclosure sale would not help officials identify those responsible for maintaining abandoned foreclosures. In addition, servicers’ representatives told us that complying with these ordinances can be burdensome. For example, according to an industry association, servicers consider ordinances that require them to secure doors and windows with steel, install security systems, and perform capital improvements to vacant properties onerous. Servicers also reported having difficulty tracking and complying with multiple systems and said that they would prefer a uniform system with consistent requirements.
Further, servicers and other industry representatives we spoke to view servicers’ authority to perform work on properties they do not yet own as limited. Holding a mortgage on a property does not give the servicer right of possession or control over the property. Therefore, servicers argue that they cannot be held liable for conducting work on properties because they are not the titleholders until after a foreclosure sale. For example, representatives of one servicer told us that the company would take steps to prevent a property from deteriorating but was cautious about going onto a property it did not own. In addition, community groups, researchers, and other industry analysts have expressed concerns that such laws could have the unintended consequence of encouraging servicers to walk away from properties before initiating foreclosure to avoid the potential maintenance and related costs, which could have the same negative effects on neighborhoods and communities as abandoned foreclosures do now. State or local actions to streamline the foreclosure process for vacant properties could also reduce the number of abandoned foreclosures by decreasing servicers’ foreclosure costs and preserving the value of vacant properties. As discussed earlier, the length of the foreclosure process affects servicers’ foreclosure costs as well as the condition and value of a property. Some areas are implementing streamlining efforts. For example, a law was recently enacted in Colorado allowing servicers to choose a shortened statutory foreclosure process for vacant properties that provides for a foreclosure sale to be scheduled in half the time of the typical process, according to a state press release on the new law. In addition, some courts in Florida have created expedited foreclosure court dockets for uncontested cases in order to move a higher number of cases forward in the process. 
Shortening the time it takes to complete foreclosure could leave properties in better condition, allowing servicers to resell them more easily and at higher prices than they otherwise might, thereby encouraging servicers to abandon fewer foreclosures. However, some stakeholders raised concerns about these streamlining efforts. First, servicers and other industry analysts noted that determining whether properties were actually vacant could be difficult. Second, shortening foreclosure times runs counter to the trend among state and local governments across the country to enact laws, such as foreclosure moratoriums, that extend foreclosure timelines; some therefore raised concerns about ensuring that homeowners had appropriate opportunities to work out a solution within a shortened time frame. Third, another potential unintended consequence is that in judicial states, shortening the time frame for foreclosing on vacant properties by moving these cases to the head of the queue could lengthen the time frames for other cases, increasing servicers’ carrying costs on those properties. Other jurisdictions have attempted to require servicers to complete foreclosures once they have initiated them. For example, staff in one court we visited told us the judge requires a foreclosure sale to be scheduled within 30 days after the court enters a foreclosure judgment; servicers that do not comply can be held in contempt of court, fined, and possibly jailed. Many local officials and researchers we interviewed suggested that foreclosure cases should be dismissed, that servicers should face fines, or that servicers should lose their right to foreclose or take other actions on a property if they do not act on foreclosure proceedings or schedule a sale within a certain amount of time. 
These actions could reduce abandoned foreclosures because servicers would more thoroughly weigh the benefits and costs of foreclosure before initiating the process, and once initiated, foreclosures would be completed in a timely manner. Others also said that these actions would quickly move properties out of the foreclosure process and into the custody of a servicer that local officials could then hold responsible for the property’s upkeep. However, others noted that such a requirement could result in missed opportunities to work out solutions with the borrower and could be difficult to enforce. For example, representatives of servicers and others told us that borrowers often sought such alternatives at the last minute before a foreclosure sale and that requiring servicers to complete all foreclosures would limit their ability to explore alternatives late in the process. An academic and regulatory officials expressed concerns that servicers would incur additional, unrecoverable expenses if they had to complete sales and take ownership of properties when doing so was not in their interest. In addition, regulatory staff cautioned that such a requirement could cause servicers to walk away from properties before initiating foreclosure. This type of action would also be difficult to implement in a state with a statutory foreclosure process because such states lack the same degree of public records tracking foreclosures. Local actions to establish reliable outlets for servicers to easily and cheaply dispose of low-value properties could reduce the number of abandoned foreclosures by providing incentives for servicers to complete the process. As previously discussed, servicers told us that many abandoned foreclosures involved properties that would either have been too costly for servicers to take ownership of or have been unlikely to yield sufficient sale proceeds. 
Taking foreclosed properties into their own real estate ownership inventories can be costly to servicers as they must continue to pay for taxes and insurance, maintain a deteriorating property, and hire a broker to market the property for sale. According to a recent report, if servicers and their investors know that they will not be further burdened by costs for the property, they may be more willing to take title and transfer it to a government or nonprofit entity that will be able to begin moving the property back into productive use. The use of land banks is one alternative that some jurisdictions are attempting to use to address problems arising from large numbers of foreclosures and vacant properties. Land banks are typically governmental or quasi-public entities that can acquire vacant, abandoned, and tax-delinquent properties and convert them to productive uses, hold them in reserve for long-term strategic public purposes such as creating affordable housing, parks, or green spaces, or demolish them. Land banks can reduce the incidence of abandoned foreclosures by providing servicers a way to dispose of low-value properties that they might otherwise abandon. Sales or donations to land banks could help servicers reduce their foreclosed property inventories. For example, Fannie Mae and the Cuyahoga County Land Reutilization Corporation have an agreement whereby on a periodic basis Fannie Mae sells pools of very low-value properties to the land bank for 1 dollar, plus a contribution toward the cost of demolition. This agreement allows Fannie Mae to reliably dispose of pools of properties in a recurring transaction at pre-defined terms. Land bank officials from Cuyahoga County noted that they are in the process of negotiating similar agreements with several large servicers. 
Once it has acquired the properties, a land bank can help stabilize neighborhoods, such as by reducing excess and blighted properties through demolition or transferring salvageable properties to local nonprofits for redevelopment. According to recent research, the Genesee County Land Bank in Flint, Michigan, has been credited with acquiring thousands of abandoned properties, developing hundreds of units of affordable housing, and being the catalyst for increasing property values in the community by more than $100 million between 2002 and 2005 through its demolition program. Although land banks can help reduce abandoned foreclosures or their negative effects, our interviews revealed potential challenges in implementing them. First, many of the local government officials we interviewed noted that land banks did not have enough resources to manage a large volume of properties. Land banks may be dependent on local governments for funding, and without a dedicated funding source it may be difficult for land banks to engage in long-term strategic planning. However, recently created land banks, such as those in Genesee and Cuyahoga counties, have developed innovative funding mechanisms that do not depend on appropriations from local governments. In addition, some mentioned that contributions from servicers, such as the agreement between Fannie Mae and the Cuyahoga County Land Reutilization Corporation, could help defray land banks’ property carrying costs. Second, land banks may be limited in their authority to acquire or dispose of properties. For example, by design land banks tend to passively acquire abandoned properties with tax delinquencies and convert them to new productive uses. However, land banks can also be designed to actively and strategically acquire properties from multiple sources. The Cuyahoga County Land Reutilization Corporation, for example, has the authority to strategically acquire properties from banks, GSEs, federal or state agencies, and tax foreclosures. 
Third, some municipalities face political challenges in establishing land banks, or local officials question whether they are needed. For example, according to an advisor to local governments on establishing land banks and a representative of a community group, the Maryland state legislature authorized the creation of a land bank in Baltimore, but its implementation fell through because of political differences at the city level. Further, some local officials we interviewed in Florida did not think land banks were needed in their areas because they expected the housing market to recover so that vacancies would not be a long-term problem. Similar to land banks, other methods for cities to acquire properties before or following foreclosure could also provide incentives to servicers to complete the foreclosure process for low-value properties rather than abandoning it. Some cities have negotiated specialized sale transactions with Fannie Mae and HUD. For example, HUD recently announced a partnership with the National Community Stabilization Trust (NCST) and leading financial institutions that account for more than 75 percent of foreclosed property inventory to provide selected state and local governments and nonprofit organizations the first opportunity to purchase vacant properties quickly, at a discount, and before they are offered on the open market. In addition, some cities have worked with Fannie Mae to purchase foreclosed and low-value properties. According to Fannie Mae, the City of St. Paul, Minnesota, has purchased 45 properties from the entity and has access to review Fannie Mae’s available properties to be able to submit an offer for a pool of properties before they are marketed. 
And, according to a representative of a national community development organization, with the broadened definitions of abandoned and foreclosed properties under the NSP program, local governments and other grantees will be able to work with servicers earlier in the foreclosure process to acquire such properties through short sales, for example, which could discourage abandoned foreclosures. For example, one organization in Oregon is pursuing purchasing notes prior to foreclosure using some of the state’s Hardest Hit Fund money, which would save the servicer the costs of initiating and completing foreclosure. However, the ability of these types of programs to fully address the issue of abandoned foreclosures may be limited. For example, local officials and researchers said cities’ capacity to receive donations or acquire properties was limited because they did not have enough resources to manage properties. According to recent research, capacity constraints prevent most community development organizations from redeveloping enough vacant homes to reverse the decline of neighborhood home values. In addition, according to industry observers and HUD and local government officials, local governments have not pursued many pre-foreclosure acquisitions, such as short sales and note sales, because they can be time-consuming and technically difficult to complete. The overall estimated number of abandoned foreclosures nationwide is very small. However, the communities in which they are concentrated often experience significant negative impacts, including vacant homes that can be vandalized, reduced surrounding neighborhood property values, and local government burdens for the costs of maintenance, rehabilitation, or demolition. 
Given the large number of homeowners experiencing problems paying their mortgages and the negative impacts on communities when properties become vacant, avoiding additional abandoned foreclosures would help spare communities already struggling with the current mortgage crisis the further problems that another vacant and uncared-for property can create. Various servicer practices appear to be contributing to the potential for additional abandoned foreclosures. First, no requirement currently exists for mortgage servicers to notify borrowers facing foreclosure of their right to continue to occupy their properties during this process or of their responsibilities to pay taxes and maintain their properties until any sale or other title transfer occurs, and regulatory officials told us that they were not sure they had the authority to require servicers to do so. The lack of awareness among borrowers about their rights and responsibilities contributes to the problems associated with abandoned foreclosures. With such information, more borrowers might not abandon their homes, reducing the problems that vacancies create for neighborhoods, surrounding communities, and local governments. Second, no requirement exists for servicers to notify the affected local government when they abandon a foreclosure. Without such notices, local government officials often are unaware of properties that are at greater risk of damage and that create potential problems for the surrounding neighborhood. With such information, local governments could move more quickly to identify actions that could ensure that such properties are moved to more productive uses. Third, servicers are not always obtaining updated property value information that considers local conditions affecting property values when initiating foreclosure. 
As a result, servicers are more likely to initiate foreclosure only to abandon it later after learning that the likely proceeds from the sale of the property would not cover their costs. If servicers had more complete and accurate information on lower-value properties that were more at risk for such declines in value, they might determine for some properties that foreclosure is not warranted before initiating the process. Having servicers improve the information they use before initiating a foreclosure could result in fewer vacant properties that cause problems for communities. To help homeowners, neighborhoods, and communities address the negative effects of abandoned foreclosures, we recommend that the Comptroller of the Currency and the Chairman of the Board of Governors of the Federal Reserve System take the following four actions: require that the mortgage servicers they oversee notify borrowers when they decide to charge off loans in lieu of foreclosure and inform borrowers of their rights to occupy their properties until a sale or other title transfer action occurs, their responsibilities to maintain their properties, and their continuing obligation to pay their debt and taxes owed; require that the mortgage servicers they oversee notify local authorities, such as tax authorities, courts, or code enforcement departments, when they decide to charge off a loan in lieu of foreclosure; and require that the mortgage servicers they oversee obtain updated property valuations in advance of initiating foreclosure in areas associated with high concentrations of abandoned foreclosures. As part of taking these actions, the Comptroller of the Currency and the Chairman of the Board of Governors of the Federal Reserve System should determine whether any additional authority is necessary and, if so, work with Congress to ensure they have the authority needed to carry out these actions. 
We requested comments on a draft of this report from the Board of Governors of the Federal Reserve System, Department of Housing and Urban Development, Department of the Treasury, Department of Veterans Affairs, Fannie Mae, Federal Deposit Insurance Corporation, Federal Housing Finance Agency, Freddie Mac, Federal Trade Commission, Office of the Comptroller of the Currency, Office of Thrift Supervision, and Securities and Exchange Commission. We received technical comments from the Federal Reserve, FDIC, FHFA, FTC, OCC, and OTS, which we incorporated where appropriate. The Comptroller of the Currency did not comment on the recommendations addressed to him. We also received written comments from Treasury and the Federal Reserve that are presented in appendices II and III. The Acting Assistant Secretary for Financial Stability at the Department of the Treasury noted that, although the number is small, abandoned foreclosures are a serious problem that underscores the importance of holding servicers accountable. The Director of the Division of Consumer and Community Affairs at the Board of Governors of the Federal Reserve System agreed with our findings but neither agreed nor disagreed with our recommendations. Instead, the Director’s letter described ongoing actions the Federal Reserve is taking to address these issues and noted that the agency is concerned about the effects abandoned foreclosures may have in communities where they are concentrated. In response to our recommendation that the agency require the servicers it oversees to notify borrowers that their loans are being charged off in lieu of foreclosure, the Director’s letter states that the Federal Reserve agrees such notification represents a responsible and prudent business practice and will advise the institutions it supervises to notify affected borrowers in the event of abandoned foreclosures. 
While this would ensure that borrowers are notified in cases where examiners identify instances of abandoned foreclosures, we believe that a more affirmative action by the Federal Reserve to communicate this expectation to all servicers it supervises would be more effective at reducing the impact of abandoned foreclosures on homeowners. Regarding our recommendation that the Federal Reserve require mortgage servicers to notify local authorities when loans are being charged off in lieu of foreclosure, the Consumer and Community Affairs Division Director stated that the Federal Reserve expects servicers to comply with any local laws requiring registration of vacant properties. While this would ensure that local authorities are notified in those communities, we reiterate that the Federal Reserve should take steps to ensure that the servicers it oversees notify local authorities likely to be in a position to mitigate the impact of an abandoned property, such as tax authorities or code enforcement departments. This notification should occur in all areas, not just those with existing vacant property registration systems, so that all communities have information that could help them better address the potential negative effects of abandoned foreclosures. We also encourage the Federal Reserve, along with other banking regulators with responsibilities to oversee mortgage servicers, to work with Congress to seek any additional authority needed to implement such a requirement. In response to our recommendation that the Federal Reserve require servicers to obtain updated property valuations in advance of initiating foreclosure in certain areas, the Consumer and Community Affairs Division Director’s letter notes agreement with the importance of servicers having the most up-to-date information before taking such actions but observes that servicers’ ability to obtain optimal information could be limited. 
Even without the ability to conduct interior inspections of properties, servicers could still take additional steps to improve the accuracy of their valuations prior to initiating foreclosure. We acknowledge that updating property valuations can be challenging, which is why our recommendation encourages a risk-based approach to identifying properties where an updated valuation could assist servicers in making a more well-informed decision about initiating foreclosure. The Director’s letter also cites existing Federal Reserve guidance outlining expectations for obtaining property valuations, which, according to Federal Reserve staff, applies to actions that institutions should take before and after they have acquired properties through foreclosure. According to this guidance, an individual who has appropriate real estate expertise and market knowledge should determine whether an existing property valuation is valid or whether a new valuation should be obtained because of local or property-specific factors, including the volatility of the local market, lack of maintenance on the property, or the passage of time, among others. Having the Federal Reserve take further steps to ensure that servicers understand and implement this guidance and evaluate properties in advance of initiating foreclosure would likely help to reduce the prevalence of abandoned foreclosures as well. We are sending copies of this report to interested congressional committees, the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Federal Housing Finance Agency, Office of the Comptroller of the Currency, Office of Thrift Supervision, Department of Housing and Urban Development, Department of the Treasury, Department of Veterans Affairs, and Securities and Exchange Commission, and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. This report focuses on the prevalence, causes, and effects of abandoned foreclosures. Specifically, this report addresses (1) the nature and prevalence of abandoned foreclosures, including how they occur; (2) the impact of abandoned foreclosures on communities and state and federal efforts to mitigate the effects of foreclosure; (3) certain practices that may contribute to why mortgage servicers initiate but do not complete foreclosures and the extent of federal regulatory oversight of mortgage foreclosure practices; and (4) the various actions some communities are taking to reduce abandoned foreclosures and their impacts. To determine the nature and prevalence of abandoned foreclosures (cases in which servicers initiated but decided not to complete foreclosure and the property is vacant), we analyzed mortgage loan data from January 2008 to March 2010 reported to us by selected servicers and two government-sponsored enterprises (GSE). We obtained aggregated and loan-level data from six servicers, including large servicers and those that specialize in servicing nonprime loans, as well as Fannie Mae and Freddie Mac, on loans that were categorized as charge-offs in lieu of foreclosure (loans that were fully charged off instead of initiating or completing a foreclosure). After eliminating overlapping loans, the institutions contributing data to our sample collectively account for nearly 80 percent of all first-lien mortgages outstanding. The database we have assembled is unique and is therefore difficult to cross-check against other known sources to verify its reliability. 
Because we were able to cross-check the loan-level information provided by the GSEs against official reports submitted by the Federal Housing Finance Agency (FHFA) to Congress, we believe that these data are sufficiently reliable for our reporting purposes. However, because some of the servicers compiled the information requested differently or were reporting information that is not part of their normal data collection and retention apparatus, our dataset contains various degrees of inconsistency, missing data, and other issues. In reviewing these data, we found a number of concerns with some elements of the database and some sources of the data. For example, we believe that some servicers submitted data that (1) included second liens, (2) contained elements that appeared to be irregular, or (3) may not have reflected the total charge-offs in lieu of foreclosure associated with their servicing portfolios. While the number of potential second liens was not significant, especially among those that we identified as abandoned foreclosures, it is difficult to know with certainty how the remaining issues affected our results, including the descriptive statistics we report. For this reason, we have characterized our results in a manner that minimizes the reliability concerns and emphasizes the uncertainty regarding the total number of abandoned foreclosures in the United States. Moreover, we conducted a variety of tests on these data. For example, we were able to use GSE data as a reliability check on some elements of the servicer database. We also cross-checked some of the properties in our database against property tax records for a portion of the data for Baltimore and Chicago. We were able to visually inspect some properties in a few cities. Given these and other steps we have taken, we believe the data are sufficiently reliable for the purposes used in this study. We used two methods to code the data as vacant or occupied in our database. 
First, the servicers provided data on whether the property was vacant at the time the loan was charged off in lieu of foreclosure. We found these data to be reliable based on cross-checks with property tax records and visual inspection for a small sample of the database. However, 32 percent of the field was either blank or the servicer indicated that occupancy status was unknown. Moreover, an occupied property may become vacant weeks or even months after a charge-off in lieu of foreclosure. Therefore, we augmented this information by using a second method: determining occupancy status using U.S. Postal Service (USPS) administrative data on address vacancies. These data represent the universe of all vacant addresses in the United States. We obtained lists of vacant properties from USPS for 6-month increments from June 30, 2008, through June 30, 2010. The USPS codes a property as vacant if there has been no mail delivery for 90 days. The data also included properties the USPS codes as “no-stat” for urban areas; a property is considered a no-stat if it is under construction, demolished, blighted, or otherwise identified by a carrier as not likely to become active for some time. We matched these USPS data on address vacancies to actual addresses in our loan database. Therefore, we considered a property vacant if it was either coded as vacant at the time of charge-off in lieu of foreclosure by the servicer or coded as vacant based on the vacancy lists obtained from USPS. Users of the report should note the difficulty in determining vacancy and that our approach may have resulted in an understatement or overstatement of the number of vacant properties in our sample. 
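The two-method vacancy rule described above can be expressed as a simple decision: a property is coded vacant if either the servicer flagged it vacant at charge-off or its address matches a USPS vacancy list. The sketch below illustrates that logic; all function names, field names, and addresses are hypothetical and the crude address normalization is only one of many ways such matching could be attempted.

```python
# Illustrative sketch of the report's combined vacancy-coding rule.
# All data and names are invented for illustration.

def normalize_address(addr: str) -> str:
    """Crude normalization to reduce matching errors from address format differences."""
    return " ".join(addr.upper().replace(".", "").replace(",", "").split())

def code_vacancy(loans, usps_vacant_addresses):
    """Annotate each loan with a combined vacancy flag.

    loans: list of dicts with 'address' and 'servicer_vacant'
           ('Y', 'N', or None when the field was blank or unknown).
    usps_vacant_addresses: iterable of addresses USPS coded as vacant or no-stat.
    """
    usps_set = {normalize_address(a) for a in usps_vacant_addresses}
    results = []
    for loan in loans:
        servicer_vacant = loan["servicer_vacant"] == "Y"
        usps_vacant = normalize_address(loan["address"]) in usps_set
        # Vacant if EITHER source indicates vacancy.
        results.append({**loan, "vacant": servicer_vacant or usps_vacant})
    return results

loans = [
    {"address": "12 Oak St.", "servicer_vacant": "Y"},   # servicer-coded vacant
    {"address": "34 Elm Ave", "servicer_vacant": None},  # unknown, but on USPS list
    {"address": "56 Pine Rd", "servicer_vacant": "N"},   # occupied per both methods
]
coded = code_vacancy(loans, ["12 OAK ST", "34 ELM AVE"])
print([l["vacant"] for l in coded])  # → [True, True, False]
```

Note how the second loan is captured only through the USPS match, which is why the report augments the servicer field (blank or unknown for 32 percent of loans) with the postal data.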
In particular, determining vacancy by matching to USPS data has limitations, including (1) long lags before vacancy is determined, (2) mail carrier delays in reporting vacancies, (3) coding of seasonal and recreational properties as vacant, and (4) matching errors due to differences in address formats or incomplete addresses in the loan file. Due to privacy concerns, we were not able to leverage USPS expertise to ensure a higher-quality match based on lists that included all known delivery points. As a result, our analysis will miss any property that was demolished upon the determination of vacancy or any property deemed a no-stat in rural areas. Because of the 90-day lag in determining vacancy and the fact that we are dealing with properties from 2008 to 2010 largely in major metropolitan areas, this is not likely to have a significant impact on our estimates of vacant properties. It should be noted that the data collected by the USPS are designed to facilitate the delivery of mail rather than to make definitive determinations about occupancy status. For example, USPS residential vacancy data do not differentiate between homeowner and rental units or identify seasonal or recreational units. Once vacancy is determined and the number of abandoned foreclosures is estimated, our projections of the prevalence of abandoned foreclosures in the United States are based on an extrapolation designed to highlight the uncertainty in the results. While we estimated the total number of abandoned foreclosures directly for a large portion of the mortgage market, we simulated the total based on assumptions about the remaining mortgage loans not covered in our sample. To form estimates of prevalence, we conducted several analyses. First, we formed base prevalence estimates using information from the servicer and GSE databases alone. Second, we combined the servicer and GSE databases to produce some estimates of prevalence based on information contained in both databases. 
Third, we made a determination of the possible error rate in determining vacancy through various runs of our matching analysis against USPS data and examination of the output. Lastly, we conducted a series of simulations to extrapolate our findings to the 20 percent of the mortgage market not covered in our database and to capture the uncertainty inherent in our data. Although the loans reflected in this report represent servicers that service a large percentage of the overall mortgage industry, they likely do not represent a statistically random sample of all charge-offs in lieu of foreclosure. Rather than assume the large sample can be generalized and produce a point estimate with a confidence interval, we simulated the likely number of abandoned foreclosures for the remaining loans under a number of different assumptions about the characteristics of the population. For example, in some runs we assumed a 10 percent matching error rate and that the remaining servicers resemble some combination of the subprime specialty lenders and the large servicers in our sample. In other runs we assumed no error in our matching analysis but formed our estimates excluding a servicer whose data raised reliability concerns. Lastly, we produced estimates combining elements of both sets of assumptions. In extrapolating the findings from our sample, we provided a range of estimates that reflects the fact that the characteristics of these loans may differ from the remaining population of mortgages, as well as our concerns over data reliability and potential matching error in determining vacancy. We believe these simulations properly characterize the sources and nature of uncertainty in the results. We also acknowledged, throughout the report, cases in which data issues may have affected the results. To supplement this data analysis and to determine the impacts of incomplete foreclosures on communities and homeowners, we conducted case studies and a literature review. 
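The extrapolation approach described above can be sketched as a small Monte Carlo simulation: the count observed for the covered portion of the market is discounted for an assumed matching error, and the uncovered portion is simulated under a range of assumed abandonment rates. This is a minimal illustration, not the report's actual model; every rate, count, and parameter below is invented.

```python
# Illustrative Monte Carlo sketch of extrapolating abandoned-foreclosure
# counts to the uncovered share of the mortgage market. All figures are
# hypothetical and chosen only to demonstrate the mechanics.
import random

def simulate_range(observed_count, uncovered_loans,
                   rate_low, rate_high, match_error=0.10,
                   runs=10_000, seed=1):
    """Return (low, high) bounds on the simulated total across runs."""
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        # Assume the uncovered market's abandonment rate lies somewhere
        # between a large-servicer-like rate and a subprime-like rate.
        rate = rng.uniform(rate_low, rate_high)
        simulated = uncovered_loans * rate
        # Discount the observed count for an assumed matching error rate.
        adjusted_observed = observed_count * (1 - match_error)
        totals.append(adjusted_observed + simulated)
    return min(totals), max(totals)

low, high = simulate_range(observed_count=10_000,
                           uncovered_loans=10_000_000,
                           rate_low=0.0001, rate_high=0.001)
print(f"estimated range: {low:,.0f} to {high:,.0f}")
```

Reporting the spread of totals across runs, rather than a single point estimate, mirrors the report's choice to present a range that reflects uncertainty about both the uncovered population and the matching error.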
We selected 12 locations to provide a range of states with judicial and statutory foreclosure processes, representation from different regions of the country, and variation in local economic circumstances and responses to abandoned foreclosures. Our case study locations were Atlanta, Georgia; Baltimore, Maryland; Buffalo, New York; Chula Vista, California; Chicago, Illinois; Cleveland, Ohio; Detroit, Michigan; Lowell, Massachusetts; and Cape Coral, Fort Myers, Manatee County, and Hillsborough County, Florida. We conducted in-person site visits or phone calls with city and county officials, community development organizations, academic researchers, foreclosure assistance providers, and state banking supervisors in these locations to gain perspectives on the impact and prevalence of abandoned foreclosures in each location. Although we selected the case study locations to provide broad representation of conditions geographically and by type of foreclosure process, these locations may not necessarily be representative of all localities nationwide. As a result, we could not generalize the results of our analysis to all states and localities. In two of the locations we visited, officials provided us with pictures and examples of abandoned foreclosures and vacant properties. In Detroit, Baltimore, and Florida, we visited selected vacant and abandoned properties and took pictures to document property conditions. After the conclusion of our fieldwork, we analyzed the information obtained during the interviews to find common themes and responses. To supplement our case study interviews, we reviewed various relevant journal articles, reports, law review articles, and other literature on the impacts of vacant and abandoned properties. We consulted with internal methodologists to ensure that any literature we used as support for our findings was methodologically sound.
To determine what impacts abandoned foreclosures were having on state foreclosure mitigation efforts, we reviewed the findings and recommendations of several state foreclosure task forces and interviewed staff from a national policy research organization that tracks state foreclosure-related legislation. We also contacted the housing finance agencies in the 10 states that were determined as of March 2010 to have been hardest hit by the foreclosure crisis. These states received funding from the Department of the Treasury through its Housing Finance Agency Innovation Fund for the Hardest Hit Housing Markets (HFA Hardest-Hit Fund), and included Arizona, California, Florida, Michigan, Nevada, North Carolina, Ohio, Oregon, Rhode Island, and South Carolina. To determine what impacts abandoned foreclosures were having on federal foreclosure mitigation efforts, we reviewed current federal foreclosure efforts and obtained information from Neighborhood Stabilization Program (NSP) grantees. The current federal foreclosure efforts we reviewed include the Home Affordable Modification Program (HAMP), Federal Housing Administration HAMP, Veterans Affairs HAMP, Second Lien Modification Program, Home Affordable Refinance Program, Home Affordable Foreclosure Alternatives Program, Housing Finance Agency Innovation Fund for the Hardest-Hit Housing Markets, Hope for Homeowners, Hope Now, Mortgage Forgiveness Debt Relief Act and Debt Cancellation, and the Neighborhood Stabilization Program. In conjunction with a separate GAO review of the first phase of the Neighborhood Stabilization Program (NSP 1), we interviewed officials from 12 of the 309 NSP 1 grantees that were selected based on factors including the magnitude of the foreclosure problem in their area, geographic location, and progress made in implementing the program.
The grantees were Orange County, Lee County, and City of Tampa (Florida); State of Nevada, Clark County, City of Las Vegas, City of North Las Vegas, and City of Henderson (Nevada); State of Indiana, City of Indianapolis, and City of Fort Wayne (Indiana); and City of Dayton (Ohio). Additionally, we worked with a national nonprofit organization to obtain written responses to structured questions on the extent to which abandoned foreclosures have impacted their efforts to acquire properties from an additional 25 NSP 1 and NSP 2 grantees and subrecipients from across the country. These grantees may not necessarily be representative of all grantees. As a result, we could not generalize the results of our analysis to all NSP grantees. To identify the reasons financial institutions decide to not complete foreclosures, we interviewed six servicers, including some of the largest and those that specialize in subprime loans. These servicers represented 56 percent of all mortgages outstanding. We also analyzed Fannie Mae and Freddie Mac policies and procedures for servicers in handling foreclosures and compared them to other guidance servicers follow, such as pooling and servicing agreements (PSA). We did not conduct a systematic analysis of a sample of PSAs ourselves; rather, we relied on interviews with servicers and academics who research PSAs, relevant literature, and reports to better understand how the terms of PSAs might influence servicers’ decisions to pursue or abandon foreclosure under different circumstances, and how losses associated with delinquency and foreclosure are accounted for. Thus, the descriptions contained in this report reflect the opinions of these academics and authors only about the specific PSAs they provided to us or discussed in their reports. While some terms may be similar across PSAs, each is a contract between two parties—the trust and the servicer—and its terms apply only to those parties.
We reviewed federal regulatory guidance that covers the examination process for reviewing institutions’ foreclosure and loss reserve process. We also reviewed whether abandoned foreclosures may violate consumer protection laws such as the Fair Debt Collection Practices Act, and the Federal Trade Commission Act (Unfair or Deceptive Acts or Practices). In addition, we interviewed representatives of the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, the Department of Housing and Urban Development, the Department of Veterans Affairs, and the Securities and Exchange Commission. To determine what actions have been taken or proposals offered to address abandoned foreclosures, we reviewed academic literature and interviewed academics, representatives of nonprofit organizations, local, state, and federal officials, and other industry participants. We also obtained information about the advantages and disadvantages of these actions through our literature review and interviews. We summarized these potential actions and conducted a content analysis of interviewee viewpoints on their advantages and disadvantages. We conducted this performance audit from December 2009 through November 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As part of our assessment of how abandoned foreclosures (properties on which a foreclosure has been initiated but not completed and are vacant) might affect federal foreclosure-related programs, we reviewed several current programs and their eligibility requirements.
Most programs listed below were designed to help homeowners avoid foreclosure and require that those who receive assistance be owner-occupants of their homes. The following information appears as interactive content in the body of the report when viewed electronically. The content associated with various states on the map describes housing market conditions that likely explain the elevated levels of abandoned foreclosures in three different groups of states. The content appears in print form below. This categorization is based in part on judgment and trends in the data for the MSAs with the most abandoned foreclosures in these states. Because other researchers may posit alternative categorizations that also fit the data, and because other types of abandoned foreclosures exist, this analysis should not be considered definitive. In addition to the contact named above, Cody Goebel (Assistant Director); Emily Chalmers; William R. Chatlos; Kate Bittinger Eikel; Lawrance Evans, Jr.; Simon Galed; Jeff R. Jensen; Matthew McHale; Courtney LaFountain; Tim Mooney; Marc Molino; Jill Naamane; Rhiannon Patterson; Linda Rego; Jeff Tessin; and Jim Vitarello made key contributions to this report. | Entities responsible for managing home mortgage loans--called servicers--may initiate foreclosure proceedings on certain delinquent loans but then decide to not complete the process. Many of these properties are vacant. These abandoned foreclosure--or "bank walkaway"--properties can exacerbate neighborhood decline and complicate federal stabilization efforts. GAO was asked to assess (1) the nature and prevalence of abandoned foreclosures, (2) their impact on communities, (3) practices that may lead servicers to initiate but not complete foreclosures and regulatory oversight of foreclosure practices, and (4) actions some communities have taken to reduce abandoned foreclosures and their impacts.
GAO analyzed servicer loan data from January 2008 through March 2010 and conducted case studies in 12 cities. GAO also interviewed representatives of federal agencies, state and local officials, nonprofit organizations, and six servicers, among others, and reviewed federal banking regulations and exam guidance. Among other things, GAO recommends that the Federal Reserve and Office of the Comptroller of the Currency (OCC) require servicers they oversee to notify borrowers and communities when foreclosures are halted and to obtain updated valuations for selected properties before initiating foreclosure. The Federal Reserve neither agreed nor disagreed with these recommendations. OCC did not comment on the recommendations. Using data from large and subprime servicers and government-sponsored mortgage entities representing nearly 80 percent of mortgages, GAO estimated that abandoned foreclosures are rare--representing less than 1 percent of vacant homes between January 2008 and March 2010. GAO also found that, while abandoned foreclosures have occurred across the country, they tend to be concentrated in economically distressed areas. Twenty areas account for 61 percent of the estimated cases, with certain cities in Michigan, Ohio, and Florida experiencing the most. GAO also found that abandoned foreclosures most frequently involved loans to borrowers with lower quality credit--nonprime loans--and low-value properties in economically distressed areas. Although abandoned foreclosures occur infrequently, the areas in which they were concentrated are significantly affected. Vacant homes associated with abandoned foreclosures can contribute to increased crime and decreased neighborhood property values. Abandoned foreclosures also increase costs for local governments that must maintain or demolish vacant properties. 
Because servicers are not required to notify borrowers and communities when they decide to abandon a foreclosure, homeowners are sometimes unaware that they still own the home and are responsible for paying the debt and taxes and maintaining the property. Communities are also delayed in taking action to mitigate the effects of a vacant property. Servicers typically abandon a foreclosure when they determine that the cost to complete the foreclosure exceeds the anticipated proceeds from the property's sale. However, GAO found that most of the servicers interviewed were not always obtaining updated property valuations before initiating foreclosure. Fewer abandoned foreclosures would likely occur if servicers were required to obtain updated valuations for lower-value properties or those in areas that were more likely to experience large declines in value. Because they generally focus on the areas with greatest risk to the institutions they supervise, federal banking regulators had not generally examined servicers' foreclosure practices, such as whether foreclosures are completed; however, given the ongoing mortgage crisis, they have recently placed greater emphasis on these areas. GAO identified various actions that local governments or others are taking to reduce the likelihood or mitigate the impacts of abandoned foreclosures. For example, community groups indicated increased counseling could prevent some borrowers from vacating their homes too early. Some communities are requiring servicers to list properties that become vacant on a centralized registry as a way to identify properties that could require increased attention. In addition, by creating entities called land banks that can acquire properties that servicers otherwise cannot sell, some communities have provided increased incentives for servicers to complete rather than abandon foreclosures.
However, these actions can require additional funding, have unintended consequences, such as potentially encouraging servicers to walk away from properties before initiating foreclosure, and may not be appropriate for all communities. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
GPO was established in 1861 to (1) assist Congress and federal agencies in the production and replication of information products and services, and (2) provide the public with government information products and services. GPO provides printing services to all three branches of government—either by producing work in-house or by procuring it from commercial printers. Information dissemination is accomplished through GPO’s Superintendent of Documents, who is to provide public access to government information through (1) the sale of publications; (2) distribution of publications to depository and international exchange libraries, to those recipients designated by law, and for agencies on a reimbursable basis; and (3) compilation of catalogs and indexes containing complete and authoritative descriptions of government publications. The public printing and documents chapters of title 44 of the U.S. Code require GPO to fulfill the printing needs of the federal government and distribute government publications to the public. GPO’s activities are financed through a revolving fund, which is reimbursed by payments from client agencies and sales of government publications, and transfers from the Congressional Printing and Binding Appropriation and the Salaries and Expenses Appropriation of the Superintendent of Documents. These annual appropriations are to reimburse GPO for costs incurred while performing congressional work and fulfilling statutory requirements associated with the distribution of government publications. Reimbursements from these appropriations to the revolving fund are recorded as revenues. The sales program operates within the revolving fund and is expected to recover its costs from its sales. According to GPO, the sales program does not receive any direct appropriation. GPO is headed by the Public Printer, who is nominated by the President and confirmed by the Senate.
The sales program is led by the Superintendent of Documents, who reports directly to the Public Printer. The sales program provides the public the opportunity to purchase government publications and subscriptions at GPO’s 24 bookstores across the country; through telephone, fax, and mail orders; and through consigned sales agents at other agencies. Within the Superintendent of Documents’ staff is the Documents Sales Service, which includes staff in the Sales Management Division and Documents Control Branch. Other key players in the sales program include Customer Services, whose printing specialists serve as liaisons with the issuing agencies until the publications are produced, and the Office of the Comptroller, which keeps the financial records, including inventory. For more detail on GPO’s organizational makeup, see appendix II. Once a publication is printed and enters the sales program, the Documents Control Branch, within the Sales Management Division, maintains inventory control, determines its continued salability, and makes reprinting and disposal decisions. Working with the issuing agency, Sales Management Division staff establish a life cycle for each publication that represents the period during which sales demand is expected. According to GPO, the average life cycle for a publication in its inventory is now about 12 months. As of September 1996, the sales program carried 13,268 publications in its inventory, valued at about $12.8 million based on printing and binding costs. The sales program did not report a loss between fiscal years 1981 and 1995. For fiscal year 1996, however, the sales program’s expenses of $79.4 million exceeded revenues of $70.5 million, for a net loss of $8.9 million. As of June 1997, the sales program was showing a loss of about $537,000 for fiscal year 1997. In May 1996, financial projections indicated that the sales program expected a substantial loss for fiscal year 1996, the first such loss in 15 years. 
These projections were based on information that indicated that revenue was down and expenses were up for several reasons, including declining sales, the effect of the government shutdown at the beginning of fiscal year 1996, competition from other government sales programs, increasing use of free electronic publications, and a substantial increase in charges to surplus publications expense (i.e., the printing and binding costs of publications in GPO’s inventory that are expected to be unsalable). As a result of the projected loss for fiscal year 1996, the Superintendent of Documents tasked a management team with developing an action plan to increase revenue and reduce expenses, with the objective of returning the sales program to full cost-recovery in fiscal year 1997. The plan, dated September 1996, originally contained 44 individual projects and was later amended to include 2 more projects. One original project was a special effort, over and above GPO’s routine process for removing excess publications, to move aggressively to reduce the inventory of surplus publications before the new fiscal year began on October 1, 1996. Such a reduction would increase the surplus publications expense for fiscal year 1996 but was expected to decrease those expenses in fiscal year 1997 and subsequent years. In other words, the sales program’s losses for fiscal year 1996 would be greater, but GPO officials hoped that this would result in the program breaking even or better for fiscal year 1997 and beyond. The inventory reduction began in early September 1996, even before the action plan was issued, with a deadline for completion of September 30, 1996. The September 1996 inventory reduction involved 2,127 publications that had a printing and binding cost of about $3 million, which was about one-third of the surplus publications expense GPO charged for publications it excessed and disposed of in fiscal year 1996. 
(See appendix III for examples of the publications disposed of in September 1996.) GPO’s records and our discussions with GPO warehouse and contractor personnel indicate that the publications inventory that was excessed during the reduction was sold (for less than 3 cents per pound) to a scrap contractor, who was required by contractual terms to shred and recycle it rather than resell the individual publications. The Superintendent of Documents had issued policies and procedures for determining excess, obsolete, damaged, and destroyed information products and for managing inventory. Superintendent of Documents Policy No. 38, dated May 28, 1984, provides that publication inventories are to be reviewed quarterly to determine the quantities that are to be retained and those that are excess. This policy applies to inventories that are managed by headquarters staff. A separate procedure (Superintendent of Documents Policy No. 38.6) applies to inventories in GPO’s bookstores. Under the existing policies and procedures, inventory management specialists (IMS) in the Documents Control Branch are to review quarterly the amount of inventory for the publications they manage. This review is conducted to identify whether the inventory should be reduced based on the sales history and projected life cycle of the publication. As part of the Superintendent of Documents’ existing policy, which was issued by the current Public Printer when he was the Superintendent of Documents, once an IMS determines the number (if any) of copies of a publication that are excess, he or she is to call the issuing agency to determine whether it wants the extra copies. 
As part of the inventory review process for publications of high dollar value or with a large number of copies on hand, the IMS then is to complete Form 3880, which includes such information as the estimated printing and binding cost of the publication, anticipated sales, total copies sold, and whether the issuing agency wants any of the excess copies. (The form does not include the holding cost of retaining the copies in inventory.) This completed form is to be sent to a Documents Survey Board consisting of the Director of Documents Sales Service, the Chief of the Sales Management Division, and the Chief of the Documents Control Branch. If the Survey Board approves the form, the IMS then must prepare a notice to be sent to GPO’s warehouse in Laurel, Maryland. At the warehouse, the excessed stock (i.e., stock not wanted by the issuing agency) is to be identified and moved to a separate area for periodic pick up by a contractor, who is required by the contract to shred the documents and have them recycled. The contractor is not permitted to resell the documents other than for recycling. During the major reduction in September 1996, the Superintendent of Documents’ staff followed his orders and disregarded policy and normal procedures in order to reduce the inventory of excess publications before October 1, 1996. When the Superintendent of Documents realized that the sales program expected a substantial loss, he told his staff in a June 1996 memorandum that, while developing an action plan to increase revenue and reduce expenses, they should: “Ignore politics and external influences. Disregard current policies and practices that inhibit creativity and impede change.” According to the Superintendent of Documents, his instruction to ignore politics and external influences referred to frequent requests from issuing agencies to have more copies of publications in the sales inventory than GPO believes can be sold. 
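The review steps above amount to a simple decision rule: compare on-hand stock with sales projected over the remaining life cycle, and route high-dollar or high-volume excess to the Documents Survey Board via Form 3880. A minimal sketch of that rule, with field names and thresholds that are illustrative rather than GPO's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class Publication:
    title: str
    on_hand: int              # copies currently in inventory
    monthly_sales: float      # recent average copies sold per month
    months_remaining: int     # remainder of the agreed life cycle
    unit_print_cost: float    # printing and binding cost per copy

def excess_copies(pub: Publication, safety_factor: float = 1.25) -> int:
    """Copies beyond projected life-cycle demand (plus a cushion) are excess."""
    projected_demand = pub.monthly_sales * pub.months_remaining * safety_factor
    return max(0, pub.on_hand - round(projected_demand))

def needs_survey_board(pub: Publication, excess: int,
                       cost_threshold: float = 10_000.0,
                       copy_threshold: int = 1_000) -> bool:
    """High-dollar or high-volume excess goes on Form 3880 for board review."""
    return (excess * pub.unit_print_cost >= cost_threshold
            or excess >= copy_threshold)

pub = Publication("Sample Report", on_hand=5_000, monthly_sales=40,
                  months_remaining=12, unit_print_cost=6.50)
ex = excess_copies(pub)
print(ex, needs_survey_board(pub, ex))  # prints "4400 True"
```

Note that, like Form 3880 itself, this sketch omits holding cost; the actual criteria GPO cited during the 1996 reduction were high printing and binding cost, large quantity on hand, and low projected sales.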
The Superintendent further said that he subsequently verbally instructed his staff to begin the inventory reduction before the action plan was approved and told them to disregard policies that would interfere with the removal of as much excess inventory by September 30, 1996, as possible. In order to maximize charges to surplus publications expense in fiscal year 1996, the Superintendent of Documents and GPO’s Comptroller advised IMS staff to focus their attention on excessing publications that had high printing and binding costs, large quantities in inventory, and low sales volume. Also, during this major reduction, IMS decisions on what publications to excess did not receive the normal management review, and IMS staff did not call the issuing agencies to see whether they wanted the excess copies of their publications. Superintendent of Documents staff told us that they disregarded policy because they would not have had enough time to contact the issuing agencies and receive answers by September 30. According to these staff and Superintendent of Documents management officials, it would have been very difficult to contact all of the agencies involved with the 2,127 publications being excessed and to wait for their various responses concerning whether they wanted the excess copies. According to GPO, this response period usually takes about 4 weeks, and GPO officials did not believe that the agencies would be able to respond appropriately if given only a few days. According to Documents Control Branch staff, this disregard of policy resulted from the Superintendent’s June 1996 memorandum and his oral instructions to his staff regarding the formulation of the action plan to increase revenue and reduce expenses. The IMS responsible for handling congressional publications, and his supervisor in September 1996, acknowledged discussing between themselves whether they should follow GPO’s policy to offer excess publications to issuing agencies. 
They said that they felt they had the authority to dispose of the publications without notifying the issuing agencies because of time constraints and instructions from the Superintendent of Documents to disregard policies. They said that, given the Superintendent of Documents’ instructions, they saw no need to tell management officials above them that they were disregarding this policy. The IMS responsible for handling congressional publications told us that he made the decision on which publications to excess based primarily on the criteria he was given—high printing and binding costs, large quantity in inventory, and low projected sales. According to the IMS, his decisions on which publications to dispose of in September 1996 were not reviewed or approved by the Documents Survey Board, as generally would be required. The publications selected for excessing by the IMS were approved by his supervisor, but no one else’s approval was noted on the inventory records. The Superintendent of Documents said that he was responsible for policies not being followed and for the inventory reductions that took place at the end of fiscal year 1996. The Superintendent of Documents said that he wanted to dispose of the excess inventory by September 30, 1996, in order to take the losses in fiscal year 1996. He also said that he wanted to identify and dispose of as much excess inventory as possible in fiscal year 1996 rather than in later years, when it otherwise would have been identified, disposed of, and charged to expense. According to the Superintendent of Documents, he instructed staff to dispose of the excess inventory by September 30, 1996, because he mistakenly believed that the inventory had to be physically removed from GPO property before surplus publications expense could be charged. 
However, the inventory identified as excess by the IMS staff did not have to be disposed of by September 30, 1996, in order that the surplus publications expense could be charged to fiscal year 1996. Neither generally accepted accounting principles nor GPO’s own accounting procedures require physically removing the excessed publications from GPO property before surplus publications expense can be charged. Surplus publications expense can be charged whenever GPO staff determine that inventory is obsolete or unsalable. In fact, GPO had another major inventory stock reduction in fiscal year 1981, and at that time, according to GPO’s Comptroller, certain publications had been identified as excess but had not yet been disposed of when they were shown as an expense in GPO’s financial records. Both GPO’s Comptroller and the Superintendent of Documents agree that the latter misunderstood how publications expenses were handled in GPO’s accounting system at the time of the major inventory reduction in 1996. They both said that, at that time, GPO had no written guidance or instructions stating that excess inventory does not have to be physically removed from GPO before surplus publications expense can be charged. In July 1997, the Public Printer told us that, while he was notified that a major inventory reduction would be taking place in 1996, he was not made aware of the details of the reduction. He said that he did not know that the policy to offer excess publications to the issuing agency, which he had instituted when he was Superintendent of Documents, was not followed in the September 1996 reduction. As mentioned earlier, according to the IMS responsible for handling congressional publications, the decisions concerning which publications to excess were primarily based on the criteria of high printing and binding costs, large quantity in inventory, and projected sales. The IMS said that the Senate history volumes met these criteria for disposal. 
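The accounting point, that surplus publications expense attaches when stock is determined unsalable rather than when it is physically removed, can be illustrated with a toy ledger. The class and figures below are a hedged sketch; GPO's actual accounts are not described in this report beyond the totals cited:

```python
class InventoryLedger:
    """Toy ledger showing when surplus publications expense is recognized."""
    def __init__(self, inventory_value: float):
        self.inventory = inventory_value   # publications at printing/binding cost
        self.surplus_expense = 0.0
        self.scrap_revenue = 0.0

    def determine_excess(self, amount: float) -> None:
        # The expense is recognized here, when stock is determined to be
        # obsolete or unsalable; physical removal is not required.
        self.inventory -= amount
        self.surplus_expense += amount

    def physically_dispose(self, scrap_proceeds: float) -> None:
        # Later removal triggers no additional expense; proceeds from the
        # scrap contractor are recorded separately.
        self.scrap_revenue += scrap_proceeds

ledger = InventoryLedger(12_800_000.0)   # approximate FY1996 inventory value
ledger.determine_excess(83_000.0)        # e.g., the Senate history copies
print(ledger.surplus_expense)            # charged to FY1996 at determination
ledger.physically_dispose(600.0)         # scrap value received later
print(ledger.surplus_expense)            # unchanged by the disposal itself
```

Under this treatment, the September 30 disposal deadline was unnecessary: the fiscal year 1996 charge depended only on the determination date.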
According to the IMS, he made his decision concerning the number of copies of the Senate history to retain based on an estimate of future sales, using a 10-year estimated life cycle for each of the four volumes. According to GPO’s records, the 10-year life cycle was developed when the volumes were first published, as a result of discussions involving the Senate Historian, House Historian, Joint Committee on Printing staff, and staff from GPO’s Documents Sales Service group. GPO records show that, of the inventory that was excessed, 3,258 copies, involving some of each of the four volumes written by Senator Byrd, were disposed of. The 3,258 copies were about 10 percent of the total number originally printed of the four volumes (32,386 at a total cost of $1,572,291). The printing and binding cost of the 3,258 excessed copies was about $83,000. The scrap value received for the shredded copies was about $600. See table 1 for more detail. According to GPO records, GPO retained 1,134 total copies of the Senate history, which GPO inventory management staff kept in inventory based on the estimated quantity needed to meet a sales demand calculated on what they initially agreed with representatives from the Senate Historian’s Office and others to be the life cycle for the publications. This life cycle was to be 10 years from the dates the volumes were published; their publication dates were 1988 (volume I), 1991 (volume II), 1994 (volume III), and 1993 (volume IV). Table 2 contains a breakdown of the disposition of the Senate history volumes, including the number on hand as of July 1997. A representative from GPO’s Congressional Printing Management Division in Customer Services told us that, in June 1996, he told the IMS responsible for handling congressional publications that the Senate Historian’s Office wanted any excess Senate history volumes that GPO might have. 
The responsible IMS said he knew that in the past the Senate Historian’s Office had inquired about the status of the Senate history volumes on several occasions and that, while he recalled the previous inquiries by the Senate Historian’s Office, he did not recall being told in June 1996 that the Senate Historian’s Office wanted any excess copies. He said that he proceeded with the inventory reduction based on the Superintendent of Documents’ instructions to disregard policies and ignore politics. Inventory records showed that he identified the copies as excess on September 6, 1996, and September 9, 1996. Warehouse records show that the copies were removed from the warehouse shelves for pickup by the scrap contractor on September 10, 1996, and September 12, 1996. All of the Superintendent of Documents staff we interviewed who were involved in the September 1996 inventory reduction said that no specific discussion of the Senate history volumes occurred during the September 1996 reduction. The Public Printer said he did not know at the time that the Senate history volumes were among those being excessed and that, if he had, those books would not have been disposed of. GPO has taken action or has actions in process that are aimed at helping to prevent a recurrence of a situation in which excess publications are disposed of without regard to established policies and procedures. While GPO’s initial actions could have helped prevent a recurrence, they did not appear to address all of the underlying causes of the problems associated with the September 1996 major inventory reduction. During the course of our review, we identified and brought to GPO’s attention several additional actions that we believed would address those causes. As discussed below, GPO officials agreed and took additional steps to prevent a recurrence. 
In May 6, 1997, and July 11, 1997, letters to Senator Byrd, the Public Printer said that GPO had made an error in disposing of the Senate history volumes and that all four volumes, because of their historical significance, would remain in print and available through the sales program indefinitely. According to the Superintendent of Documents, this action was carried out through oral instructions to his staff in July 1997. In response to these oral instructions, the IMS responsible for handling congressional publications wrote a note saying not to dispose of these volumes without top management’s approval and attached the note to the inventory control cards he maintained for these volumes. At our recommendation, the Superintendent of Documents put his oral instructions in writing in August 1997. In response to our inquiries, both the Public Printer and the Superintendent said that some publications, such as the Constitution and the Senate history volumes, should be kept indefinitely because of their historical significance. The Superintendent said that GPO did not have a systematic process for identifying or designating such publications but that, in response to our recommendation, GPO would develop a formal system for identifying publications that should remain in inventory indefinitely. In addition, he said that GPO was already developing a new inventory management system that would allow publications that are to be held indefinitely to be designated as such once they have been identified. The Superintendent of Documents also acknowledged that his lack of awareness about the planned disposal of the Senate history volumes contributed to their being excessed. 
On July 22, 1997, the Superintendent of Documents sent a memorandum to his staff stating that no further exceptions should be made to the current policy on excess, obsolete, damaged, and destroyed information products and that “excess stocks will be offered to the issuing agency.” On July 23, 1997, the Superintendent of Documents asked his staff to revise his formal policy document dated May 28, 1984, to address the problems that arose in connection with the September 1996 inventory reduction. According to the Superintendent of Documents, this revised policy will provide that excessed inventory should be charged to surplus publications expense when it is determined to be excess. The excessed inventory is then to be held in the warehouse for a reasonable period while issuing agencies are contacted to see if they want the excess publications. Under the Superintendent’s revised procedures, the policy of offering issuing agencies excess copies before their disposal cannot be waived. We pointed out that we saw no written statement in GPO’s policies, procedures, or guidance that specifically said that excessed inventory does not have to be physically removed from GPO’s warehouse before it can be charged to surplus publications expense. Both the Superintendent of Documents and the Comptroller agreed that the lack of such a written statement may have contributed to the misunderstanding that took place in 1996. In August 1997, GPO’s Comptroller prepared such a statement. Another action GPO has had in process for some time that could also help prevent a recurrence of the problems of the September 1996 reduction is the development of a new Integrated Processing System. The Superintendent of Documents expects this new system, which GPO plans to implement in October 1997, to provide his office with more flexibility in tracking inventory and better information for making decisions to excess publications. 
According to the Superintendent of Documents, the new system will (1) allow GPO to designate inventory as excess without physically relocating it in the warehouse, and (2) include a comment box where the IMS can indicate that a publication is not to be excessed or make other appropriate notations about its disposition. Until the new system is implemented, notations concerning holding copies indefinitely must be made on records that are maintained manually. Finally, another dilemma GPO has faced in disposing of excess inventory is the lack of authority to donate excess publications to schools or similar institutions. Under existing law and policy, GPO’s current options for disposing of excess publications are to offer them to issuing agencies at no cost or to dispose of them as scrap. GPO is also precluded by statute and regulation from offering publications to the public at discount prices except to those who buy 100 or more copies of the same publication or to book dealers who agree to resell the books at GPO’s prices—in which case, GPO can only offer a maximum discount of 25 percent. To address this problem, in May 1997, as part of its recommended revision of title 44 of the U.S. Code, GPO forwarded a proposal to the Joint Committee on Printing that would authorize the donation of excess publications to schools or similar institutions if the copies are not wanted by the issuing agency. In a May 6, 1997, letter to Senator Byrd, the Public Printer said that, on the basis of a study GPO had done, it was more cost-effective to maintain an adequate inventory of sales publications based on their projected life cycle and to reprint if necessary, than to hold excess copies of publications in inventory. According to the Superintendent of Documents, who drafted the May 6 letter for the Public Printer, the study cited in the letter referred to data supplied by GPO’s Comptroller in 1996.
These data showed that, overall, GPO’s inventory of excess publications was growing and was contributing to increasing charges to surplus publications expense. These increased charges were, in turn, contributing to a worsening financial situation for the sales program. To help remedy this problem, according to the Superintendent of Documents, the Comptroller recommended that the Superintendent identify as much excess inventory as possible in fiscal year 1996 to improve the sales program’s long-term financial situation. According to the Superintendent of Documents, the statement in the May 6 letter pertained to the typical publication, which he said has a printing and binding cost of about $2 per copy; it did not specifically pertain to the Senate history volumes, which had a printing and binding cost of $19 to $35 per copy. In this regard, we noted that the printing and binding cost of the 3,258 copies of the Senate history volumes disposed of was about $83,000, and that GPO’s estimated annual storage costs attributable to these copies were about $2,500. These figures can be compared to GPO’s estimated reprinting cost of about $210,000 should GPO reprint the copies disposed of, which it has agreed to do if necessary. During our review of GPO’s inventory management records, we also noted that Form 3880, which IMS staff use to make recommendations and supervisory personnel use to review actions on obsolete or excess inventory, does not provide for inclusion of data on storage or holding costs for publications. This omission is inconsistent with a memorandum, dated January 4, 1985, from the Chief, Sales Management Division, to Documents Control Branch staff, which directed that reasonable life cycles should be consistent with economic analysis of the following factors: expected trend, reprint costs, expected revision date, and holding costs.
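The trade-off GAO describes here reduces to simple arithmetic; the sketch below, using the report's approximate dollar figures, shows why holding these particular volumes was far cheaper than disposing of and reprinting them.

```python
# Back-of-the-envelope comparison using the report's approximate figures.
annual_storage_cost = 2_500   # GPO's estimated yearly storage cost for the 3,258 copies
reprint_cost = 210_000        # GPO's estimated cost to reprint the disposed copies

# Number of years of storage that a single reprint would have paid for
breakeven_years = reprint_cost / annual_storage_cost
print(breakeven_years)  # 84.0 years of storage per reprint
```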
The memorandum also stated that, when reviewing records to identify excess or consider extension of the life cycle, the following factors should be considered: continued marketability, projected revenue, and estimated holding costs. We discussed this inconsistency with the Superintendent of Documents in August 1997. He said storage or holding costs are usually not significant, but he recognized that they should be considered in making decisions on excess inventory. He agreed to modify Form 3880 to incorporate consideration of such costs. To achieve its financial objective, GPO did not have to disregard policy and procedures for notifying the issuing agencies of excess publications. Because of the erroneous belief of the Superintendent of Documents, who heads GPO’s sales program, that GPO had to physically remove excess publications from the GPO warehouse by September 30, 1996, in order to record them as an expense for fiscal year 1996, and because of his express instruction to disregard policies and procedures, GPO staff disposed of about 2,100 different publications without first contacting the issuing agencies of those publications. As a result, 3,258 copies of the Senate history were destroyed, even though the Senate Historian’s Office had told a GPO representative that it wanted any excess copies. GPO has taken or plans to take actions that, if effectively implemented, should prevent this situation from recurring. We made various recommendations during the course of our work that GPO agreed to and either implemented the corrective action or is in the process of doing so. Thus, we are making no further recommendations. On September 5, 1997, we provided the Public Printer with a draft of this report for comment. We received his written comments, included in their entirety in appendix IV, on September 10, 1997. The Public Printer said that the report fairly represents the events as they occurred during the September 1996 inventory reduction. 
He also said that actions have been and are being taken to ensure that no sales publications will be disposed of in the future without strict adherence to applicable GPO policies and procedures. We are sending copies of this report to the Public Printer of the Government Printing Office and the Chairman and the Vice Chairman of the Joint Committee on Printing. We will make copies available to others upon request. Major contributors to this report are listed in appendix V. If you have any questions concerning this report, please call me on (202) 512-4232. As agreed with your offices, our objectives were to determine the facts surrounding the September 1996 inventory reduction; whether it followed existing policies and procedures; and the fate of the 3,258 copies of The Senate 1789-1989, a four-volume set written by Senator Byrd, that were destroyed as part of that reduction. In order to obtain information on GPO’s sales program and the inventory reduction held in September 1996, we reviewed pertinent documentation, such as GPO’s inventory control records, policies and procedures, memoranda, and financial records and reports. We interviewed the Public Printer, the Superintendent of Documents and his staff who were involved in the reduction, the Comptroller and his staff who are responsible for the financial records, and staff in the Congressional Printing and Management Division who serve as the liaison with the Senate Historian’s Office for Senate publications, including Senator Byrd’s books. We visited GPO’s Laurel, Maryland, warehouse where excessed publications are disposed of; reviewed its inventory disposition records; and interviewed a representative of GPO’s contractor that had picked up GPO’s excessed publications from its Laurel warehouse in September 1996. In addition, we reviewed GPO’s authority to donate surplus books. We coordinated our review with GPO’s Office of Inspector General. 
The representative from GPO’s contractor told us that his company did not maintain any records that would specifically show that the 3,258 Senate history copies were shredded. Therefore, we had to rely on GPO’s records and interviews with GPO staff and the contractor’s representative to determine what happened to the 3,258 Senate history copies that were excessed. Further, we did not verify GPO’s computerized inventory or financial records or do actual counts of the remaining stock inventory of the Senate history volumes at the Laurel warehouse. In addition to the Senate history volumes, we selected the following examples of excessed publications to provide a mix of publications from both the legislative and executive branches and, in some cases, to reflect publications having high dollar values. John S. Baldwin, Sr., Assistant Director; Michael W. Jarvis, Evaluator-in-Charge; Kiki Theodoropoulos, Senior Evaluator (Communications Analyst); Victor B. Goddard, Senior Attorney.
| Pursuant to a congressional request, GAO reviewed the Government Printing Office's (GPO) procedures for managing its inventory of excess publications, particularly its management of a major inventory reduction that took place in September 1996, focusing on: (1) whether GPO followed existing policies and procedures; and (2) how 3,258 copies of The Senate 1789-1989, a four-volume set written by Senator Byrd, were destroyed as part of that reduction. GAO noted that: (1) when for the first time in 15 years a potential financial loss was identified in GPO's sales program in June 1996, the Superintendent of Documents, who heads the sales program, initiated several actions intended to improve the program's long-term financial condition; (2) the Superintendent of Documents said he wanted to dispose of the excess inventory by September 30, 1996, to take the losses in fiscal year (FY) 1996 rather than in later years, when it otherwise would have been identified, disposed of, and charged to expense; (3) the Superintendent of Documents also said he had erroneously believed that it was necessary to physically remove excess publications from inventory storage by September 30, 1996, in order to record them as an expense in the financial records for FY 1996; (4) although the Superintendent of Documents had policies and procedures in place to prevent the disposal of publications that the issuing agency still wanted, in June 1996 he instructed his staff to disregard those policies that would interfere with his goal of disposing of as much excess publications inventory as possible by September 30, 1996; (5) acting under the Superintendent's overall instructions, GPO sales program staff disregarded a policy that has existed since at least 1984, which provides that, before disposing of any excess copies of publications, GPO should offer them to the issuing agencies; (6) in explaining its inventory reduction to Senator Byrd, GPO said that it had found that it was generally more 
cost-effective to dispose of excess inventory and reprint if necessary, than to hold it in storage indefinitely; (7) however, GPO officials said that they knew that the reprint costs would substantially exceed the holding costs for these copies, given their relatively high printing and binding costs; (8) in July 1997, after Senator Byrd inquired about the major inventory reduction, the Superintendent of Documents orally instructed his staff to retain the remaining volumes of the Senate history and, at GAO's recommendation, put this instruction in writing in August 1997; and (9) the Superintendent further said that GPO was developing a new integrated processing system that would help designate publications that should not be excessed and, at GAO's recommendation, agreed to develop a systematic process for identifying publications to be held indefinitely for valid reasons. |
In November 2013, FAA released the Roadmap that describes its three-phased approach—Accommodation, Integration, and Evolution—to facilitate incremental steps toward its goal of seamlessly integrating UAS flight in the national airspace. Under this approach, FAA’s initial focus will be on safely allowing for the expanded operation of UASs by selectively accommodating some UAS use. In the integration phase, FAA plans to shift its emphasis toward integrating more UAS use once technology can support safe operations. Finally, in the evolution phase, FAA plans to focus on revising its regulations, policy, and standards based on the evolving needs of the airspace. Currently, FAA authorizes all UAS operations in the NAS—military, public (academic institutions and federal, state, and local governments, including law enforcement organizations), and civil (commercial). Federal, state, and local government agencies must apply for Certificates of Waiver or Authorization (COA), while civil operators must apply for special airworthiness certificates in the experimental category. Civil operators may also apply for an exemption under section 333 of the 2012 Act, Special Rules for Certain Unmanned Aircraft Systems. This section requires the Secretary of Transportation to determine if certain UAS may operate safely in the NAS prior to the completion of UAS rulemakings. It also gives the Secretary the authority to determine whether to allow certain UAS aircraft to operate in the NAS without an airworthiness certification. As we previously reported, research and development continue in areas related to a UAS’s ability to detect and avoid other aircraft, as well as in command and control technologies and related performance and safety standards that would support greater UAS use in the national airspace. Some of this research is being conducted by DOD and NASA. Until this research matures, most UAS operations will remain within visual line of sight of the UAS operator.
Foreign countries are experiencing an increase in UAS use, and some have begun to allow commercial entities to fly UASs under limited circumstances. According to industry stakeholders, easier access to these countries’ airspace has drawn the attention of some U.S. companies that wish to test their UASs without needing to adhere to FAA’s administrative requirements for flying UASs at one of the domestically located test sites, or obtaining an FAA COA. As we most recently reported in February 2014, the 2012 Act contained provisions designed to accelerate the integration of UAS into the NAS. These provisions outlined 17 date-specific requirements and set deadlines for FAA to achieve safe UAS integration by September 2015 (see app. 1). While FAA has completed several of these requirements, some key ones, including the publication of the final small UAS rule, remain incomplete. As of December 2014, FAA had completed nine of the requirements, was in the process of addressing four, and had not yet made progress on four others. Some stakeholders told us in interviews that FAA’s accomplishments to date are significant and were needed, but these stakeholders noted that the most important provisions of the 2012 Act have been significantly delayed or are unlikely to be achieved by the mandated dates. Both the FAA and UAS industry stakeholders have emphasized the importance of finalizing UAS regulations as unauthorized UAS operations in the national airspace continue to increase and present a safety risk to commercial and general aviation activities. Before publication of a final rule governing small UAS, FAA must first issue a Notice of Proposed Rulemaking (NPRM). As we previously reported, the small UAS rule is expected to establish operating and performance standards for a UAS weighing less than 55 pounds, operating under 400 feet, and within line of sight. FAA officials told us in November 2014 that FAA is hoping to issue the NPRM by the end of 2014 or early 2015.
According to FAA, its goal is to issue the final rule 16 months after the NPRM. If this goal is met, the final rule would be issued in late 2016 or early 2017, about two years beyond the requirement of the congressional mandate. However, during the course of our ongoing work, FAA told us that it is expecting to receive tens of thousands of comments on the NPRM. The time needed to respond to such a large number of comments could further extend the time to issue a final rule. FAA officials told us that the agency has taken a number of steps to develop a framework to efficiently process the comments it expects to receive. Specifically, they said that FAA has a team of employees assigned to lead the effort with contractor support to track and categorize the comments as soon as they are received. According to FAA officials, the challenge of addressing comments could be somewhat mitigated if industry groups consolidated comments, thus reducing the total number of comments that FAA must address while preserving content. During our ongoing work, one industry stakeholder has expressed concern that the small UAS rule may not resolve issues that are important for some commercial operations. This stakeholder expects the proposed rule to authorize operations of small UASs only within visual line of sight of the remote operator and to require the remote operator to have continuous command and control throughout the flight. According to this stakeholder, requiring UAS operators to fly only within their view would prohibit many commercial operations, including large-scale crop monitoring and delivery applications. Furthermore, the stakeholder formally requested that FAA establish a new small UAS Aviation Rulemaking Committee (ARC) with the primary objective to propose safety regulations and standards for autonomous UAS operations and operations beyond visual line of sight.
According to FAA, the existing UAS ARC recently formed a workgroup to study operations beyond visual line of sight in the national airspace and to specifically look at the near- and long-term issues for this technology. In November 2013, FAA completed the required 5-year Roadmap, as well as the Comprehensive Plan for the introduction of civil UAS into the NAS. The Roadmap was to be updated annually, and the second edition of the Roadmap was scheduled to be published in November 2014. Although FAA has met the congressional mandate in the 2012 Act to issue a Comprehensive Plan and Roadmap to safely accelerate integration of civil UAS into the NAS, that plan does not contain details on how it is to be implemented, and it is therefore uncertain how UASs will be safely integrated and what resources this integration will require. The UAS ARC emphasized the need for FAA to develop an implementation plan that would identify the means, necessary resources, and schedule to safely and expeditiously integrate civil UAS into the NAS. According to the UAS ARC, the activities needed to safely integrate UAS include: identifying gaps in current UAS technologies, regulations, standards, policies, or procedures; developing new technologies, regulations, standards, policies, and procedures; identifying early enabling activities to advance routine UAS operations in the NAS; and developing guidance material, training, and certification of aircraft, enabling technologies, and airmen (pilots). FAA has met two requirements in the 2012 Act related to the test sites by setting them up and making a project operational at one location. In our 2014 testimony, we reported that in December 2013, 16 months past the deadline, FAA selected six UAS test ranges. Each of these test sites became operational, during our ongoing work, between April and August 2014, operating under an Other Transaction Agreement (OTA) with FAA.
These test sites are affiliated with public entities, such as a university, and were chosen, according to FAA during our ongoing work, based on a number of factors including geography, climate, airspace use, and a proposed research portfolio that was part of the application. Each test site operator manages the test site in a way that will give access to other parties interested in using the site. According to FAA, its role is to ensure each operator sets up a safe testing environment and to provide oversight that guarantees each site operates under strict safety standards. FAA views the test sites as a location for industry to safely access the airspace. FAA told us during our ongoing work that it expects data obtained from the users of the test ranges will contribute to the continued development of standards for the safe and routine integration of UAS. (In order to fly under a COA, the commercial entity leases its UAS to the public entity for operation.) However, FAA has a limited role in directing the research and development supporting integration. According to FAA, it cannot direct the test sites to address specific research and development issues, nor specify what data to provide FAA, other than data required by the COA. FAA officials told us that some laws may prevent the agency from directing specific test site activities without providing compensation. As a result, according to some of the test site operators we spoke to as part of our ongoing work, there is uncertainty about what research and development should be conducted to support the integration process. However, FAA states it does provide support through weekly conference calls and direct access for test sites to FAA’s UAS office. This level of support requires time and resources from the FAA, but the staff believes test sites are a benefit to the integration process and worth this investment.
In order to maximize the value of the six test ranges, FAA is working with MITRE Corporation (MITRE), DOD, and the test sites to define what safety, reliability, and performance data are needed and develop a framework, including procedures, for obtaining and analyzing the data. However, FAA has not yet established a time frame for developing this framework. During our ongoing work, test site operators have told us that there need to be incentives to encourage greater UAS operations at the test sites. FAA is, however, working on providing additional flexibility to the test sites to encourage greater use by industry. Specifically, FAA is willing to train designated airworthiness representatives for each test site. These individuals could then approve UASs for a special airworthiness certificate in the experimental category for operation at the specific test site. Test site operators told us that industry has been reluctant to operate at the test sites because under the current COA process, a UAS operator has to lease its UAS to the test site, thus potentially exposing proprietary technology. With a special airworthiness certificate in the experimental category, the UAS operator would not have to lease its UAS to the test site, therefore protecting any proprietary technology. According to FAA and some test site operators, another flexibility they are working on is a broad area COA that would allow easier access to the test site’s airspace for research and development. Such a COA would allow the test sites to conduct the airworthiness certification, typically performed by FAA, and then allow access to the test site’s airspace. FAA has started to use the authority granted under section 333 of the 2012 Act to allow small UASs access to the national airspace for commercial purposes, after exempting them from obtaining an airworthiness certification.
While FAA continues to develop a regulatory framework for integrating small UASs into the NAS, these exemptions can help bridge the gap between the current state and full integration. According to FAA, this framework could provide UAS operators that wish to pursue safe and legal entry into the NAS a competitive advantage in the UAS marketplace, thus discouraging illegal operations and improving safety. During our ongoing work, FAA has granted seven section 333 exemptions for the filmmaking industry as of December 4, 2014. FAA officials told us that there were more than 140 applications waiting to be reviewed for other industries, for uses such as precision agriculture and electric power line monitoring, and more continue to arrive. (See figure 1 for examples of commercial UAS operations.) While these exemptions do allow access to the NAS, FAA must review and approve each application, and this process takes time, which can affect how quickly the NAS is accessible to any given commercial applicant. According to FAA, the section 333 review process is labor intensive for its headquarters staff because most certifications typically occur in FAA field offices; however, since exemptions under section 333 are exceptions to existing regulations, this type of review typically occurs at headquarters. FAA officials stated that to help mitigate these issues, the agency is grouping and reviewing similar types of applications together and working to streamline the review process. While FAA is making efforts to improve and accelerate progress toward UAS integration, additional challenges remain, including in the areas of authority, resources, and potential leadership changes. As we reported in February 2014, the establishment of the UAS Integration office was a positive development because FAA assigned an Executive Manager and combined UAS-related personnel and activities from the agency’s Aviation Safety Organization and Air Traffic Organization.
However, some industry stakeholders we have interviewed for our ongoing work have expressed concerns about the adequacy of authority and resources that are available to the office. A UAS rulemaking working group, composed of both government and industry officials, recently recommended that the UAS Integration Office be placed at a higher level within FAA in order to have the necessary authority and access to other FAA lines of business and offices. In addition, according to FAA officials, the Executive Manager’s position may soon be vacant. Our previous work has found that complex organizational transformations involving technology, systems, and retraining key personnel—such as NextGen, another major FAA initiative—require substantial leadership commitment over a sustained period. We also found that leaders must be empowered to make critical decisions and held accountable for results. Several federal agencies and private sector stakeholders have research and development efforts under way to develop technologies that are designed to allow safe and routine UAS operations. As we have previously reported, agency officials and industry experts told us that these research and development efforts cannot be completed and validated without safety, reliability, and performance standards, which have not yet been developed because of data limitations. On the federal side, the primary agencies involved with UAS integration are those also working on research and development, namely, FAA, NASA, and DOD. FAA uses multiple mechanisms—such as cooperative research and development agreements (CRDA), federally funded research and development centers (FFRDC), and OTAs (discussed earlier in this statement)—to support its research and development efforts. In support of UAS integration, FAA has signed a number of CRDAs with academic and corporate partners.
For example, FAA has CRDAs with CNN and BNSF Railway to test industry-specific applications for news coverage and railroad inspection and maintenance, respectively. Other CRDAs have been signed with groups to provide operational and technical assessments, modeling, demonstrations, and simulations. Another mechanism used by FAA to generate research and development for UAS integration is the FFRDC. For example, MITRE Corporation’s Center for Advanced Aviation System Development is an FFRDC supporting FAA and the UAS integration process. Specifically, MITRE has ongoing research and development supporting air traffic management for UAS detection and avoidance systems, as well as other technologies. FAA has cited many accomplishments in research and development in the past fiscal year, as we were conducting our ongoing work. According to FAA, it has made progress in areas related to detect and avoid technologies supporting ongoing work by RTCA Special Committee 228. Other areas of focus and progress by FAA include command and control, as well as operations and approval. According to FAA, progress for command and control was marked by identifying challenges for UAS operations using ground-to-ground communications. FAA also indicated, during our ongoing work, that it conducted simulations of the effects of UAS operations on air traffic management. Furthermore, in support of research and development efforts in the future, FAA solicited bids for the development of a Center of Excellence. The Center of Excellence is expected to support academic UAS research and development in many areas, including detect and avoid and command and control technologies. FAA expects to announce the winner during fiscal year 2015. We have previously reported that NASA and DOD have extensive research and development efforts supporting integration into the NAS. NASA has a $150-million project focused on UAS integration into the NAS.
NASA officials stated that the current goal of this program is to conduct research that reduces technical barriers associated with UAS integration into the NAS, including conducting simulations and flight testing to test communications requirements and aircraft separation, among other issues. DOD has research and development efforts primarily focused on airspace operations related to detect and avoid systems. However, DOD also contributes to research and development focused on certification, training, and operation of UAS. We reported in 2012 that outside the federal government, several academic and private sector companies are conducting research in support of advancing UAS integration. Research by both groups focuses on various areas such as detect and avoid technologies, sensors, and UAS materials. For example, several private sector companies have developed technologies for visual sensing and radar sensing. Academic institutions have conducted extensive research into the use of various technologies to improve the maneuverability of UASs. A number of countries allow commercial UAS operations under some restrictions. A 2014 study, conducted by MITRE for FAA, revealed that Japan, Australia, the United Kingdom, and Canada have progressed further than the United States with regulations supporting integration. In fact, Japan, the United Kingdom, and Canada have regulations in place allowing some small UAS operations for commercial purposes. According to this study, these countries’ progress in allowing commercial access in the airspace may be attributed to differences in the complexity of their aviation environment. Our preliminary observations indicate that Japan, Australia, the United Kingdom, and Canada also allow more commercial UAS operations than the United States. According to the MITRE study, the types of commercial operations allowed vary by country.
For example, as of December 2014, Australia had issued over 180 UAS operating certificates to businesses engaged in aerial surveying, photography, and other lines of business. Furthermore, the agriculture industry in Japan has used UAS to apply fertilizer and pesticide for over 10 years. Several European countries have granted operating licenses to more than 1,000 operators to use UASs for safety inspections of infrastructure, such as rail tracks, or to support the agriculture industry. While UAS commercial operations can occur in other countries, there are restrictions controlling their use. For example, the MITRE study showed that several of the countries it examined require some type of certification and approval before operations. Also, restrictions may require operations to remain within line of sight and below a certain altitude. In Australia, according to the MITRE study, commercial operations can occur only with UASs weighing less than 4.4 pounds. However, the rules governing UASs are not consistent worldwide, and while some countries, such as Canada, are easing restrictions on UAS operations, other countries, such as India, are increasing UAS restrictions. For our ongoing work, we spoke with representatives of the aviation authority in Canada (Transport Canada) to better understand UAS use and recently issued exemptions. In Canada, regulations governing the use of UAS have been in place since 1996. These regulations require that UAS operators apply for and receive a Special Flight Operations Certificate (SFOC). The SFOC process allows Canadian officials to review and approve UAS operations on a case-by-case basis if the risks are managed to an acceptable level. This is similar to the COA process used in the United States. As of September 2014, over 1,000 SFOCs had been approved for UAS operations that year alone. Canada issued new rules for UAS operations on November 27, 2014.
Specifically, the new rules create exemptions for commercial use of small UASs weighing 2 kilograms (4.4 pounds) or less and between 2.1 and 25 kilograms (4.6 to 55 pounds). UASs in these categories can operate commercially without an SFOC but must still follow operational restrictions, such as a height restriction and a requirement to operate within line of sight. Transport Canada officials told us this arrangement allows them to use scarce resources to regulate situations of relatively high risk. For example, if a small UAS is being used for photography in a rural area, this use may fall under the new criteria of not needing an SFOC, thus providing relatively easy access for commercial UAS operations. Finally, our ongoing work has found that FAA interacts with a number of international bodies in an effort to harmonize UAS integration across countries. According to FAA officials, the agency’s most significant contact in Europe has been with the Joint Authorities for Rulemaking for Unmanned Systems (JARUS). JARUS is a group of experts from the National Aviation Authorities (NAAs) and the European Aviation Safety Agency. A key aim of JARUS is to develop recommended certification specifications and operational provisions, which countries can use during the approval process of a UAS. In addition, FAA participated in ICAO’s UAS Study Group, an effort to harmonize standards for UAS. ICAO is the international body that, among other things, promotes harmonization in international standards. ICAO plans to release its UAS manual in March 2015, which will contain guidance about UAS integration for the states. Additional international groups that FAA interacts with in support of UAS integration include the Civil Air Navigation Services Organization, European Organization for Civil Aviation Equipment, and North Atlantic Treaty Organization. Chairman LoBiondo, Ranking Member Larsen, and Members of the Subcommittee, this completes my prepared statement.
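The weight thresholds in Canada's new rules amount to a simple classification by weight. As an illustration only (the function name and category labels below are our own, not Transport Canada terminology), the rule could be sketched as:

```python
# Illustrative sketch of Canada's weight-based SFOC exemption categories as
# described above. Thresholds come from the text (2 kg or less; 2.1 to 25 kg);
# everything else here is hypothetical, and real operations must also satisfy
# restrictions such as the height limit and line-of-sight requirement.

def sfoc_category(weight_kg):
    """Classify a commercial UAS by weight into an SFOC exemption category."""
    if weight_kg <= 2.0:
        return "exempt (2 kg or less)"
    elif weight_kg <= 25.0:
        return "exempt (2.1 kg to 25 kg)"
    else:
        return "SFOC required"  # case-by-case review by Transport Canada

print(sfoc_category(1.5))   # exempt (2 kg or less)
print(sfoc_category(10.0))  # exempt (2.1 kg to 25 kg)
print(sfoc_category(30.0))  # SFOC required
```

Under such a rule, only the heaviest aircraft consume the case-by-case review resources, which matches the officials' stated rationale of reserving scarce resources for relatively high-risk situations.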
I would be pleased to respond to any questions that you may have at this time.

Appendix I: Selected Requirements and Status for UAS Integration under the FAA Modernization and Reform Act of 2012, as of December 2014 (each FAA Modernization and Reform Act of 2012 requirement is listed below, with the status of action where reported):

- Enter into agreements with appropriate government agencies to simplify the process for issuing COA or waivers for public UAS. Status: In process – MOA with DOD signed Sept. 2013; MOA with DOJ signed Mar. 2013; MOA with NASA signed Mar. 2013; MOA with DOI signed Jan. 2014; MOA with DOD’s Director of Test & Evaluation signed Mar. 2014; MOA with NOAA still in draft.
- Expedite the issuance of COA for public safety entities.
- Establish a program to integrate UAS into the national airspace at six test ranges. This program is to terminate 5 years after date of enactment.
- Develop an Arctic UAS operation plan and initiate a process to work with relevant federal agencies and national and international communities to designate permanent areas in the Arctic where small unmanned aircraft may operate 24 hours per day for research and commercial purposes.
- Determine whether certain UAS can fly safely in the national airspace before the completion of the Act’s requirements for a comprehensive plan and rulemaking to safely accelerate the integration of civil UASs into the national airspace or the Act’s requirement for issuance of guidance regarding the operation of public UASs, including operating a UAS with a COA or waiver.
- Develop a comprehensive plan to safely accelerate integration of civil UASs into national airspace.
- Issue guidance regarding operation of civil UAS to expedite the COA process; provide a collaborative process with public agencies to allow an incremental expansion of access into the national airspace as technology matures and the necessary safety analysis and data become available and until standards are completed and technology issues are resolved; facilitate the capability of public entities to develop and use test ranges; provide guidance on public entities’ responsibility for operation.
- Make operational at least one project at a test range.
- Approve and make publicly available a 5-year roadmap for the introduction of civil UAS into national airspace, to be updated annually.
- Submit to Congress a copy of the comprehensive plan.
- Publish in the Federal Register the Final Rule on small UAS. Status: In process.
- Publish in the Federal Register a Notice of Proposed Rulemaking to implement recommendations of the comprehensive plan.
- Publish in the Federal Register an update to the Administration’s policy statement on UAS in Docket No. FAA-2006-25714.
- Achieve safe integration of civil UAS into the national airspace. Status: In process.
- Publish in the Federal Register a Final Rule to implement the recommendations of the comprehensive plan.
- Develop and implement operational and certification requirements for public UAS in national airspace.

For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Brandon Haller, Assistant Director; Melissa Bodeau, Daniel Hoy, Eric Hudson, and Bonnie Pignatiello Leer. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | UASs are aircraft that do not carry a pilot aboard, but instead operate on pre-programmed routes or are manually controlled by following commands from pilot-operated ground control stations. The FAA Modernization and Reform Act of 2012 put greater emphasis on the need to integrate UASs into the national airspace by requiring that FAA establish requirements governing them. FAA has developed a three-phased approach in its 5-year Roadmap to facilitate incremental steps toward seamless integration. However, in the absence of regulations, unauthorized UAS operations have, in some instances, compromised safety. This testimony discusses 1) progress toward meeting UAS requirements from the 2012 Act, 2) key efforts underway on research and development, and 3) how other countries have progressed in developing UAS use for commercial purposes. This testimony is based on GAO's prior work and an ongoing study examining issues related to UAS integration into the national airspace system for civil and public UAS operations. The Federal Aviation Administration (FAA) has made progress toward implementing the requirements defined in the FAA Modernization and Reform Act of 2012 (the 2012 Act). As of December 2014, FAA had completed 9 of the 17 requirements in the 2012 Act. However, key requirements, such as the final rule for small unmanned aerial systems (UAS) operations, remain incomplete. FAA officials have indicated that they are hoping to issue a Notice of Proposed Rulemaking soon, with a timeline for issuing the final rule in late 2016 or early 2017. FAA has established the test sites as required in the Act, sites that will provide data on safety and operations to support UAS integration. 
However, some test site operators are uncertain about what research should be done at the site, and believe incentives are needed for industry to use the test sites. As of December 4, 2014, FAA granted seven commercial exemptions to the filmmaking industry allowing small UAS operations in the airspace. However, over 140 applications for exemptions were waiting to be reviewed for other commercial operations such as electric power line monitoring and precision agriculture. Previously, GAO reported that several federal agencies and private sector stakeholders have research and development efforts under way focusing on technologies to allow safe and routine UAS operations. During GAO's ongoing work, FAA has cited many accomplishments in research and development in the past fiscal year in areas such as detect and avoid, and command and control. Other federal agencies also have extensive research and development efforts supporting safe UAS integration, such as a National Aeronautics and Space Administration (NASA) project to provide research that will reduce technical barriers associated with UAS integration. Academic and private sector companies have researched multiple areas related to UAS integration. GAO's ongoing work found that other countries have progressed with UAS integration and allow limited commercial use. A 2014 MITRE study found that Japan, Australia, the United Kingdom, and Canada have progressed further than the United States with regulations that support commercial UAS operations. For example, as of December 2014, Australia had issued 180 UAS operating certificates to businesses in industries including aerial surveying and photography. In addition, Canada recently issued new regulations exempting commercial operations of small UASs weighing 25 kilograms (55 lbs.) or less from receiving special approval. |
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) replaced the individual entitlement to benefits under the 61-year-old Aid to Families with Dependent Children (AFDC) program with TANF block grants to states and emphasized the transitional nature of assistance and the importance of reducing welfare dependence through employment. Administered by HHS, TANF provides states with $16.5 billion each year, and in fiscal 2002, the total TANF caseload consisted of 5 million recipients. PRWORA provides states with the flexibility to set a wide range of TANF program rules, including the types of programs and services available and the eligibility criteria for them. States may choose to administer TANF directly, devolve responsibility to the county or local TANF offices, or contract with nonprofit or for-profit providers to administer TANF. Some states have also adopted “work first” programs, in which recipients typically are provided orientation and assistance in searching for a job; they may also receive some readiness training. Only those unable to find a job after several weeks of job search are then assessed for placement in other activities, such as remedial education or vocational training. While states have great flexibility to design programs that meet their own goals and needs, they must also meet several federal requirements designed to emphasize the importance of work and the temporary nature of TANF aid. For example, TANF established stronger work requirements for those receiving cash benefits than existed under AFDC. Furthermore, to avoid financial penalties, states must ensure that a steadily rising specified minimum percentage of adult recipients are participating in work or work-related activities each year. 
To count toward the state’s minimum participation rate, adult TANF recipients in families must participate in a minimum number of hours of work or a work-related activity a week, including subsidized or unsubsidized employment, work experience, community service, job search, providing child care for other TANF recipients, and (under certain circumstances) education and training. If recipients refuse to participate in work activities as required, states must impose a financial sanction on the family by reducing the benefits, or they may opt to terminate the benefits entirely. States must also enforce a 60-month limit (or less at state option) on the length of time a family may receive federal TANF assistance, although the law allows states to provide assistance beyond 60 months using state funds. The TANF caseload includes, as did AFDC, low-income individuals with physical or mental impairments considered severe enough to make them eligible for the federal SSI program. Administered by SSA, SSI is a means- tested income assistance program that provides essentially permanent cash benefits for individuals with a medically determinable physical or mental impairment that has lasted or is expected to last at least 1 year or to result in death and prevents the individual from engaging in substantial gainful activity. To qualify for SSI, an applicant’s impairment must be of such severity that the person is not only unable to do previous work but is also unable to do any other kind of substantial gainful work that exists in the national economy. Work is generally considered substantial and gainful if the individual’s earnings exceed a particular level established by statute and regulations. SSA also administers the Disability Insurance program (DI), which uses the same definition of disability, but is not means-tested and requires an individual to have a sufficient work history. 
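The minimum participation-rate requirement described above is, at bottom, a share calculation: the fraction of adult recipients meeting a weekly hour threshold must reach the state's required minimum. A minimal sketch with invented figures (the 30-hour threshold and 50 percent target below are placeholders, not the statutory values, and real counting rules are more detailed) might be:

```python
# Hedged illustration of the participation-rate arithmetic: the share of adult
# recipients meeting a weekly hour threshold must reach the state's required
# minimum. All numbers here are invented for the example; actual thresholds
# and counting rules are set by statute and regulation.

MIN_HOURS_PER_WEEK = 30   # placeholder hour threshold per adult recipient
REQUIRED_RATE = 0.50      # placeholder minimum participation rate

def meets_participation_rate(weekly_hours):
    """True if enough adult recipients meet the weekly hour threshold."""
    participating = sum(1 for h in weekly_hours if h >= MIN_HOURS_PER_WEEK)
    return participating / len(weekly_hours) >= REQUIRED_RATE

caseload = [0, 35, 30, 10, 40, 32]         # weekly hours for six adults
print(meets_participation_rate(caseload))  # 4 of 6 participate -> True
```

Because the required percentage rises steadily by statute, a state that just clears the bar one year can fall short the next with an unchanged caseload, which is part of the financial-penalty pressure described above.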
For both DI and SSI, SSA uses the Disability Determination Service (DDS) offices to make the initial eligibility determinations. If the individual is not satisfied with this determination, he or she may request a reconsideration of the decision with the same DDS. Another DDS team will review the documentation in the case file, as well as any new evidence, and determine whether the individual meets SSA’s definition of disability. If the individual is not satisfied with the reconsideration, he or she may request a hearing before an Administrative Law Judge (ALJ). The ALJ conducts a new review and may hear testimony from the individual, medical experts, and vocational experts. If the individual is not satisfied with the ALJ decision, he or she may request a review by SSA’s Appeals Council, which is the final administrative appeal within SSA. Despite recent improvements, going through the entire process, including all administrative appeals, can take more than 2 years on average. In most states, SSI eligibility also entitles individuals to Medicaid benefits. TANF recipients may apply for Medicaid benefits and are likely to qualify, but receipt of TANF benefits does not automatically qualify a recipient for Medicaid. While SSA has recently expanded policies and initiated demonstration projects aimed at helping DI and SSI beneficiaries enter or return to the workforce and achieve or at least increase self-sufficiency, its disability programs remain grounded in an approach that equates impairment with inability to work. This approach exists despite medical advances and economic and social changes that have redefined the relationship between impairment and the ability to work.
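The administrative review steps described above occur in a fixed order, from the initial DDS determination through the Appeals Council. This sketch simply encodes that sequence; the step names and helper function are our own illustration, not an SSA data structure:

```python
# The four-step administrative review sequence for DI and SSI claims, as
# described above. Step names and the helper function are illustrative only.

APPEAL_STEPS = [
    "DDS initial determination",
    "DDS reconsideration",
    "ALJ hearing",
    "Appeals Council review",  # final administrative appeal within SSA
]

def next_step(current):
    """Return the next administrative step, or None once appeals are exhausted."""
    i = APPEAL_STEPS.index(current)
    return APPEAL_STEPS[i + 1] if i + 1 < len(APPEAL_STEPS) else None

print(next_step("DDS reconsideration"))    # ALJ hearing
print(next_step("Appeals Council review")) # None
```

The strictly sequential structure is part of why exhausting all administrative appeals can take over 2 years: each step must conclude before the next can be requested.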
The disconnect between SSA’s program design and the current state of science, medicine, technology, and labor market conditions, along with similar challenges in other programs, led GAO in 2003 to designate modernizing federal disability programs, including DI and SSI, as a high-risk area urgently needing attention and transformation. The Ticket to Work and Work Incentives Improvement Act of 1999 amended the Social Security Act to create the Ticket to Work and Self- Sufficiency Program (Ticket Program). This program provides most DI and SSI beneficiaries with a voucher, or “ticket,” which they can use to obtain vocational rehabilitation, employment, or other return-to-work services from an approved provider of their choice. The program, while voluntary, is only available to beneficiaries after the lengthy eligibility determination process. Once an individual receives the ticket, he or she is free to choose whether or not to use it, as well as when to use it. Generally, disability beneficiaries age 18 through 64 are eligible to receive tickets. The Ticket Program has been implemented in phases and is to be fully implemented in 2004. The Social Security Advisory Board (Advisory Board) has questioned whether Social Security’s definition of disability is appropriately aligned with national disability policy. The definition of disability requires that individuals with impairments be unable to work, but then once found eligible for benefits, individuals receive positive incentives to work. Yet the disability management literature has emphasized that the longer an individual with an impairment remains out of the workforce the more likely the individual is to develop a mindset of not being able to work and the less likely the individual is to ever return to work. 
Having to wait for return-to-work services until determined eligible for benefits may be inconsistent with the desire of some individuals with impairments who want to work but still need financial and medical assistance. The Advisory Board, in recognizing that these inconsistencies need to be addressed, has suggested some alternative approaches. One option they discussed in a recent report is to develop a temporary program, which would be available while individuals with impairments were waiting for eligibility determinations for the current program. This temporary program might have easier eligibility rules and different cash benefit levels but stronger and more individualized medical and other services needed to support a return to work. SSA has also realized that one approach may not work for all beneficiaries, and in recent years it has begun to develop different approaches for providing assistance to individuals with disabilities. One example of these efforts is the proposed Temporary Allowance Demonstration, which would provide immediate cash and medical benefits for a specified period to individuals who meet SSA’s definition of disability and who are highly likely to benefit from aggressive medical care. SSA is also in the process of developing the Early Intervention Demonstration. This demonstration project will test alternative ways to provide employment-related services to disability applicants. Although both of these demonstration projects only cover the DI program, SSA also has the authority to conduct other demonstration projects with SSI applicants and recipients. Estimates from our nationwide survey of county TANF offices indicated that almost all offices reported that they refer at least some recipients with impairments to apply for SSI. 
But the level of encouragement these individuals receive from their local TANF office to apply for SSI varies, with many offices telling the individual to apply for SSI and some offices helping the recipient complete the application. Because TANF offices are referring individuals to SSI, these referrals will have some effect on the SSI caseload. However, findings regarding the impact that these SSI referrals from TANF have on SSI caseload growth are inconclusive, due to data limitations. Based on estimates from our survey, 97 percent of all counties refer at least some of their adult TANF recipients with impairments to SSA to apply for SSI. As table 1 shows, 33 percent of county TANF offices said that it is their policy to refer to SSI only those adults whose impairments are identified as limiting or preventing their ability to work. However, another 32 percent of county TANF offices said that it is their policy to refer all TANF recipients identified with impairments to SSI for eligibility determinations. TANF offices reported that they rely on several methods to identify an individual’s impairment and assess whether the individual could work or should be referred to SSI. Estimates from our survey indicated that all county offices rely on the applicant to disclose his or her impairment. In addition, 96 percent of all counties rely on caseworker observation, about 57 percent use a screening tool, and about 60 percent use an intensive assessment. Once recipients are identified as having impairments, TANF offices need to decide which individuals to refer to SSI. As table 2 shows, many counties rely on multiple forms of documentation or other information to make this decision, rather than referring all individuals with impairments. Specifically, 94 percent of all counties reported that they use documentation from a recipient’s physician, and 95 percent reported that they use self-reported information from the recipient. 
While nearly all county TANF offices reported that they refer at least some individuals with impairments to SSI, the level of encouragement such individuals receive from their local TANF office appears to vary. About 98 percent of county TANF offices reported that they tell these recipients to call or go to SSA to apply for SSI. About 61 percent reported that they will also assist a recipient in completing the SSI application, and about 74 percent reported that they follow up to ensure the application process is complete. Some of the variation in the level of encouragement may be explained by the fact that some states are work first states. Officials we interviewed in four states acknowledged that they try to get all TANF recipients to work, including recipients with impairments. Therefore, while they make referrals to SSI, officials in these work first states told us that they try to encourage work more than the SSI application process. However, officials in all five of the states we visited stated that if they feel an individual has a severe impairment, they would have the individual apply for SSI. Since county TANF offices refer individuals with impairments to SSI, these referrals will have some effect on the SSI caseload. To determine the magnitude of the effect that these TANF referrals have had on SSI caseload growth, SSA would need to know who among their applicants are TANF recipients. However, SSA headquarters officials told us that the agency does not know who is referred or how people are referred because it does not collect those data. Although the SSI application specifically asks whether the applicant is receiving TANF, this information is combined with other income assistance based on need in SSA’s database. 
Therefore, while the working age (18-64) SSI caseload has increased 33 percent over the last decade, SSA does not have an easy way to accurately determine the magnitude of the effect that the TANF referrals have had on the growth of the SSI rolls. Also, in a study funded by SSA and conducted by The Lewin Group, researchers found little, if any, evidence that TANF had increased referrals to SSI. Only one of the five states the researchers visited reported a perceptible increase in transitions to SSI. The authors noted that the likely reason for not finding a significant increase in referrals due to welfare reform is the fact that referrals to SSI had already been occurring under AFDC, and that the full impact of the welfare reform changes would not be known until the time limit for benefit receipt had elapsed. However, to date there have not been any studies that looked at this issue. In addition to SSA not knowing the magnitude of the effect that TANF referrals have had on SSI caseload growth, TANF officials we interviewed stated that they generally do not have historical data on SSI referrals, approvals, and denials. But officials in most states that we visited said they are in the process of improving their data collection in this respect, including tracking methods to determine the status of an SSI application, which should provide them with better data in the future. TANF offices vary in whether they make work requirements mandatory for their adult recipients with impairments awaiting SSI eligibility determinations. Even though estimates from our survey showed that 83 percent of county TANF offices reported offering noncash services to TANF recipients with impairments who are awaiting SSI eligibility determinations, these services may not be available or may not be fully utilized. Reasons for this low service utilization may include exemptions from the work requirements and an insufficient number of job training or related services.
Estimates from our survey showed that about 86 percent of county TANF offices have policies that always or sometimes exempt from the work requirements adult TANF recipients with impairments who are referred to SSI for eligibility determinations. Also, about 31 percent of county TANF offices consider the number of times a recipient is denied and appeals an SSI decision as a factor when deciding to exempt recipients from the work requirements. Our survey further found that 82 percent of counties reported exempting recipients, in part, on the basis of the degree to which the impairment limits the recipient’s ability to work. In addition, about 69 percent of county TANF offices reported that the severity of the impairment was a major factor in their decisions to exempt people with impairments who are awaiting SSI determinations from work requirements. One TANF official we interviewed told us that the recipients’ impairments were too great to participate in work activities. However, some of the state and county TANF officials we interviewed explained that they have developed alternative practices to help recipients with impairments participate in work activities. TANF officials from two of the states we visited told us that they have developed a modified work requirement for adult TANF recipients with impairments. A TANF official from one of these states said that the modified work requirements encourage individuals with impairments to work, but they do not expect that these individuals will be able to work in a full-time capacity. One county TANF official we interviewed explained that the work requirements and services provided for their recipients with impairments are very individualized, based on recommendations of the doctors who meet with the recipients. However, in all of the states and counties we visited, TANF officials said that individualized services can be costly. 
One state official said that his state’s program does not have the funds to pay for the training needed by people with learning disabilities. The official added that when people with impairments need substantial help, there were limits as to what could be funded in a work first state. Even though about 51 percent of county TANF offices do not require adult TANF recipients awaiting SSI determinations to participate in any type of job services, education services, work experience programs, or other employment services, 83 percent of county TANF offices reported that they are still willing to provide work-related or support services to this population. One state official we interviewed reported that the services provided are the same for persons with or without impairments. Officials in this state explained that these services include transportation, child care, medical assistance, tuition assistance, vocational rehabilitation, and assistance with obtaining SSI benefits. Even though county TANF offices may be willing to offer noncash services to their recipients, among those counties that could provide us with information on service utilization, utilization of these services tended to be low. While the low utilization of services may be due to exemptions from the work requirements, service availability may also be an issue. Estimates from our survey showed that 40 percent of county TANF offices reported one of the reasons adult TANF recipients with impairments, who are awaiting SSI eligibility determinations, are not participating in work activities is that there are an insufficient number of job training or related services available for them to use. In addition, some TANF officials that we interviewed cited not only limited funding, but also their offices’ own TANF policies as factors that might explain why services may not be available to recipients with impairments. 
For example, a state TANF official we interviewed said that state budget cuts have resulted in trimming of support services made available to recipients. Another state official explained that adult recipients with impairments who are placed in an exempted status are allowed access to medical services but not work-related support services, such as transportation, clothing, or vehicle repairs. The official further explained that those services are limited to those individuals who are in work activities. In addition, estimates from our survey showed that 50 percent of county TANF offices reported recipients’ motivation to apply for SSI was one of the conditions that might challenge or hinder their offices in providing employment services. Some state and county TANF officials we interviewed also believe that one of the main reasons why there is low utilization of services is recipients’ fear of jeopardizing their SSI applications. While participation in a work activity does not necessarily preclude an individual from obtaining disability benefits from SSA, estimates from our survey showed that 41 percent of county TANF offices reported that their recipients with impairments, awaiting SSI eligibility determinations, are unsure whether or not the demonstration of any work ability would hinder or disqualify their chances for SSI eligibility. State and county TANF officials we interviewed explained that recipients applying for SSI or awaiting an SSI decision fear participating in work activities. Some of the county TANF officials we interviewed explained that this population does not want to participate in work-related services for fear of jeopardizing their applications. These officials noted that compounding recipients’ fears are attorneys who may be attempting to protect their clients’ interests by sending TANF offices notices saying that any work activity could jeopardize their clients’ SSI applications.
These fears have led to TANF workers having some difficulty in getting their recipients with impairments to explore work options during the time they are applying for SSI. One state TANF official we interviewed pointed out that conversations with their recipients about work activities have generally occurred because the recipients want to volunteer for such activities. A county TANF official explained that there is a challenge in providing work services to this population, as the recipients are so focused on getting on SSI that it is difficult to get them to focus on anything else. Yet another reason for the low use of noncash services is that some of the county TANF officials we interviewed expressed some uncertainty as to how to best serve their adult TANF recipients with impairments, explaining that they are sending mixed signals when it comes to encouraging work. One county TANF official we interviewed said that on one hand, recipients are being told about using TANF services to obtain employment, and then, on the other hand, recipients are being told to apply for SSI benefits, which require an applicant to focus on his or her inability to work. Some TANF offices also allow TANF recipients with impairments to count applying for SSI as a work activity. Estimates from our survey showed that about 30 percent of county TANF offices reported that they consider the SSI application process an activity that satisfies the work requirement. Also, another county official we interviewed stated that if a client goes into an exempted status, the client must participate in at least one activity a week, but not necessarily a work activity. It can be any service the TANF office has to offer, including physical therapy or assistance in completing the SSI application. Some county TANF offices have developed interactions with SSA offices, but such interactions have been of a limited nature and have focused on the SSI application process.
Estimates from our survey indicated that some TANF offices have some form of interaction with SSA. Estimates from our survey also showed that two frequently reported forms of interaction between county TANF offices and SSA include having a contact at SSA with whom to discuss cases and following up with SSA regarding applications for SSI. In describing his office's interactions with SSA, one state TANF official we interviewed said that his office, SSA, and DDS have a good working relationship, which includes cross training between the agencies and discussions concerning the SSI application process. However, estimates from our survey showed about 95 percent of county TANF offices reported that they would like to develop a relationship, or improve their relationship, with their local SSA field office with regard to adult TANF recipients applying for SSI. One state TANF official that we interviewed said that his office does not have much of a relationship with SSA. He noted that he had no contacts within SSA but would like to develop a formal relationship with DDS so that they could make faster determinations for the deferred TANF caseload. A county TANF official we interviewed said that her office's communication with SSA is largely one-sided. This TANF official explained that even though her office sends documentation that supports a recipient's SSI application, SSA does not inform them of any eligibility decisions it makes with TANF applicants. As a result, TANF staff must rely on their recipients telling them about decisions or on a computer system that indicates if an individual is receiving benefits. Finally, in all of the states we visited, TANF officials told us that they interact with SSA to help their TANF recipients with impairments get onto SSI.
Estimates from our survey also showed that 64 percent of counties reported that their interactions were TANF officials following up with SSA regarding a recipient's SSI application, and 53 percent reported having a contact at SSA to discuss cases. TANF offices identified a number of ways they would like to improve interactions with SSA, but most of these focused on making the SSI application process more efficient and not on working together to assist TANF recipients with impairments toward employment and self-sufficiency. Estimates from our survey showed about 57 percent of the county TANF offices said that they would like to receive training from SSA regarding the SSI application process and eligibility requirements, 50 percent said they would like to have a contact at SSA with whom to discuss cases, and 41 percent said they would like to have regular meetings or working groups with SSA regarding interactions and other issues related to serving low-income individuals with impairments. In addition, one TANF official we interviewed would like interactions with SSA to be improved and thinks they could be if he knew what DDS was looking for in the application process, such as what it requires for evidence. In contrast, only 6 percent of county TANF offices reported that they would like to improve interactions with SSA specifically related to providing SSA with information on employment-related services received while on TANF. Although TANF offices reported an interest in developing a close working relationship with SSA, based on their interactions with SSA, some state and county TANF officials believed that they had to take the lead in developing these relationships. For example, one TANF official we interviewed explained that he had attempted to make contact with SSA to discuss a potential partnership and address some of the county's issues with the SSI application process but received no response.
The county official then wrote a letter to a top SSA regional official asking about partnering opportunities. In response, the regional official instructed the SSA area director, along with the local SSA and state DDS office, to meet with county officials. One SSA headquarters official we interviewed told us there is no SSA policy that directs or encourages their field offices to interact with TANF offices. The official also told us that SSA would consider such a partnership with TANF offices but would want assurances of what the benefits would be for SSA. In addition, the official said that the agency does not want to start up a partnership that would overly tax its already high workloads. The official further said that if it were to develop a relationship with TANF offices, SSA would then have to develop a training program and then administer it to all operations personnel. The official noted that developing and administering such a training program would not be a small task. SSA officials did state that if a TANF office makes a request for training sessions, SSA would be willing to provide training on the application process. However, about 27 percent of county TANF offices reported that they were discouraged in their attempts to establish a relationship with SSA because the local SSA field office told the TANF office that SSA did not have the time or the interest. While officials at SSA headquarters stated that they are largely unaware of any partnerships or interactions between TANF offices and local SSA field offices, some local SSA officials have found such relationships beneficial. In particular, one SSA official has found his office’s relationship with the local TANF office to be a form of outreach for SSA by helping his office identify people who would qualify for SSI. He explained that his local SSA office does not always have the time or staff to conduct outreach. 
He further explained that TANF case managers can explain the benefits and provide assistance to the TANF recipient applying for SSI. Thus, when a letter comes from the DDS that initially denies the claim, the individual is less likely to throw it away, as he or she is more aware of the process. This could save SSA time and money as the applicant knows that he or she must appeal within a certain amount of time, thereby reducing the need to start over because of missed deadlines. While 34 percent of those county TANF offices that provide services to recipients awaiting SSI eligibility determinations reported interacting with SSA in some manner to serve adult TANF recipients with impairments, a much higher proportion reported receiving assistance from other agencies or programs. For example, as table 3 shows, 91 percent of county TANF offices reported that at least some of their recipients awaiting SSI determinations received assistance from the state vocational rehabilitation agencies, and 86 percent of all offices reported that at least some of their recipients received assistance from the state or local mental health agency. Further, in all of the states we visited, TANF offices reported working with other agencies, such as the Department of Education and the Department of Labor, to help TANF recipients with impairments find work. With the new emphasis on work and self-sufficiency taken by TANF and SSI, and the overlap in the populations served by both programs, opportunities exist to improve the way these two programs interact in order to help individuals with impairments become more self-sufficient. While some interactions between TANF offices and SSA do exist, they are often limited to how best to assist a TANF recipient with impairments in becoming eligible for essentially permanent cash benefits under SSI.
Moreover, the practice by most TANF offices of exempting individuals from work requirements while awaiting SSI eligibility determination, as well as SSA’s policy of offering return-to-work services and incentives only after a lengthy eligibility process, undermines both programs’ stated goals of promoting self-sufficiency. In addition, this practice runs counter to the disability management literature that has emphasized that the longer an individual with an impairment remains out of the workforce the less likely the individual is to ever return to work. In recognition of this, SSA is planning demonstration projects that will test alternative ways to provide benefits and employment supports to DI applicants. However, TANF recipients with impairments, because of their low income and assets, are more likely to apply and qualify for SSI. Moreover, TANF recipients with impairments often receive assessments of their conditions and capacity to work while on TANF. Since SSA cannot easily identify who among its applicants are TANF recipients, SSA is also unable to systematically identify the types of services that the SSI applicant may have received through TANF or know whether the SSI applicant has been assessed as having the capacity to work or not. Being able to identify the receipt of TANF benefits, as well as the noncash services received through TANF, may help SSA accomplish its mission of promoting the employment of beneficiaries with impairments. By sharing information and establishing better working relationships with TANF agencies, SSA could identify, among its applicants who are or were TANF recipients, those individuals capable of working and could then target them for employment-related services and help them achieve self-sufficiency or at least reduce their dependency on cash benefits. 
Although the disconnect in work requirements between TANF and SSA’s disability programs and the timing of when employment-related services are provided to SSI recipients could be barriers to establishing a continuity of services, the earlier provision of employment-related services, as part of a demonstration project, could mitigate these potential barriers. While some county TANF officials we interviewed have developed working relationships with their local SSA office, other counties have not or may be unaware of the possibilities for interactions with SSA and how to go about establishing these relationships. Sharing best practices about how TANF agencies can distinguish, among the recipients they have referred to SSI, those individuals without the capacity to work from those with the capacity to work and who could benefit from employment-related services could help ensure that those individuals with work capacity be given the assistance they need to help them obtain employment. Moreover, sharing best practices for establishing useful interactions with SSA could help ensure that employment-related services could continue after the person becomes eligible for SSI. To help individuals with impairments become more self-sufficient and to address the gap in continuous work services between the TANF and SSI programs, we are recommending that SSA, as part of a new demonstration project, work with TANF offices to develop screening tools, assessments, or other data that would identify those TANF recipients with impairments who while potentially eligible for SSI may also be capable of working. Once these recipients have been identified, the TANF offices and SSA could work together to coordinate aggressive medical care and employment-related services that would help the individual obtain employment and achieve or at least increase self-sufficiency. 
In order to facilitate and encourage a sharing of information among TANF offices regarding the development of interactions with SSA that might increase self-sufficiency of recipients with impairments, we are recommending that HHS provide space on its Web site to serve as a clearinghouse for information regarding best practices and opportunities for TANF agencies to interact with SSA. This would allow state and county TANF officials to share information on what they are doing, what works, and how to go about establishing relationships with SSA. It would also provide states and counties with access to the research of federal agencies, state and county offices, and other researchers that they may need in order to develop a strong functional relationship with SSA and help TANF recipients with impairments move toward economic independence. HHS should be able to minimize its work and expense by using its Web site to share this information. We provided a draft of this report to HHS and SSA for comment. Both agencies generally agreed with our recommendations and indicated that they look forward to working together to help low-income individuals with impairments become more self-sufficient. Specifically, SSA stated that it would be pleased to work with HHS on the planning and design of a demonstration project. Likewise, HHS stated that it would be pleased to have its staff work with SSA to develop a process or criteria for identifying individuals who could benefit from employment services. In addition, in response to the findings of our report, SSA said it would take immediate measures to ensure that it responds to all requests from TANF offices for training on SSA’s programs. Also in its comments, SSA suggested that we include in our report the fact that states may exempt up to 20 percent of their caseload from the time limits and that many states waive work requirements for persons applying for SSI. 
In both the draft we sent to SSA and the final version, we included a footnote explaining the time limit exemptions, and in the body of the report we discussed the issue of work requirement exemptions for persons applying for SSI. HHS' comments appear in appendix II and SSA's comments appear in appendix III. In addition, both HHS and SSA provided technical comments, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies to the Secretary of HHS, the Commissioner of Social Security, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Carol Dawn Petersen at (202) 512-7215. Other staff who made key contributions are listed in appendix IV. To determine the extent that Temporary Assistance for Needy Families (TANF) recipients with impairments are encouraged to apply for Supplemental Security Income (SSI), whether work requirements are imposed, the range of services provided during the period of SSI eligibility determination, and the extent that interactions exist between the SSI and TANF programs, we conducted a nationally representative survey of 600 county TANF administrators from October 14, 2003, through February 20, 2004. For the most part, TANF services are provided at the county level, so we selected a random probability sample of counties for our survey. We derived a nationwide listing of counties from the U.S. Bureau of the Census's county-level file with 2000 census data and yearly population estimates for 2001 and 2002. We selected a total sample of 600 counties out of 3,141 counties. To select this sample, we stratified the counties into two groups.
The first group consisted of the 100 counties in the United States with the largest populations, using the 2002 estimates. The second group consisted of the remaining counties in the United States. We included all of the 100 counties with the largest populations in our sample to ensure that areas likely to have large concentrations of TANF recipients were represented. From the second group, consisting of all the remaining counties, we selected a random sample of 500 counties. After selecting the sample of counties, we used the American Public Human Services Association’s Public Human Services Directory (2002-2003) to determine the name and address of the TANF administrator for each county. In states with regional TANF programs, we asked the regional director to fill out a questionnaire for each county in the region. We obtained responses from 527 of 600 counties, for an overall response rate of about 88 percent. The responses are weighted to generalize our findings to all county TANF offices nationwide. Sample weights reflect the sample procedure, as well as adjusting for nonresponse. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results at a 95 percent confidence level at an interval of plus or minus 5 percentage points. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. In other words, we are 95 percent confident the confidence interval will include the true value of the study population. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. 
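The stratified design and reported precision can be checked with a short calculation. The county counts, sample sizes, and response total below come from the text; the base-weight formula and the simple-random-sample margin-of-error approximation are standard survey arithmetic shown for illustration only, since they ignore the nonresponse adjustment and design effects the survey actually applied.

```python
import math

# Figures from the report: 3,141 counties total; all 100 largest-population
# counties were sampled with certainty, and 500 of the remaining counties
# were drawn at random. 527 of the 600 sampled counties responded.
TOTAL_COUNTIES = 3141
CERTAINTY_STRATUM = 100                      # sampled with probability 1
REMAINDER = TOTAL_COUNTIES - CERTAINTY_STRATUM
RANDOM_SAMPLE = 500
RESPONSES = 527

# Base sampling weight = 1 / probability of selection (before the
# nonresponse adjustment the report says was also applied).
weight_certainty = 1.0                       # 100 / 100
weight_random = REMAINDER / RANDOM_SAMPLE    # 3041 / 500

# Margin of error for a proportion at 95 percent confidence, using the
# most conservative case p = 0.5 (a simplification: it treats the
# respondents as a simple random sample).
p = 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / RESPONSES)
print(f"random-stratum weight: {weight_random:.2f}")
print(f"approx. margin of error: ±{moe * 100:.1f} percentage points")
```

Under these simplifications the margin of error comes out near ±4.3 percentage points; the report's stated ±5 points plausibly reflects the additional variability from weighting and nonresponse adjustment.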
For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for the purpose of mitigating such nonsampling errors. In addition to those named above, David J. Forgosh, Cady Summers, Megan Matselboba, Christopher Moriarity, and Luann Moy made key contributions to this report.

The nation's social welfare system has been transformed into a system emphasizing work and personal responsibility, primarily through the creation of the Temporary Assistance for Needy Families (TANF) block grant. The Supplemental Security Income (SSI) program has expanded policies to help recipients improve self-sufficiency. Given that SSA data indicate an overlap in the populations served by TANF and SSI, and the changes in both programs, this report examines (1) the extent that TANF recipients with impairments are encouraged to apply for SSI and what is known about how SSI caseload growth has been affected by such TANF cases, (2) the extent that work requirements are imposed on TANF recipients applying for SSI, and the range of services provided to such recipients, and (3) the extent that interactions exist between the SSI and TANF programs to assist individuals capable of working to obtain employment. In our nationwide survey of county TANF offices, we found that nearly all offices reported that they refer recipients with impairments to SSI, but the level of encouragement to apply for SSI varies. While almost all of the county TANF offices stated that they advise such recipients with impairments to apply for SSI, 74 percent also follow up to ensure the application process is complete, and 61 percent assist recipients in completing the application.
Because TANF offices are referring individuals with impairments to SSI, these referrals will have some effect on the SSI caseload. However, due to data limitations, the magnitude of the effect these referrals have on SSI caseload growth is uncertain. While SSA can identify whether SSI recipients have income from other sources, it cannot easily determine whether this income comes from TANF or some other assistance based on need. In addition, past research has not found conclusive evidence regarding the impact that TANF referrals have on SSI caseload growth. Estimates from our survey found that although some TANF offices impose work requirements on individuals with impairments, about 86 percent of all offices reported that they either sometimes or always exempt adult TANF recipients awaiting SSI determinations from the work requirements. One key reason for not imposing work requirements on these recipients is the existence of state and county TANF policies and practices that allow such exemptions. Nevertheless, county TANF offices, for the most part, are willing to offer noncash services, such as transportation and job training, to adult recipients with impairments who have applied for SSI. However, many recipients do not use these services. This low utilization may be related to exempting individuals from the work requirement, but it may also be due to the recipients' fear of jeopardizing their SSI applications. Another reason for the low utilization of services is that many services are not necessarily available; budgetary constraints have limited the services that some TANF offices are able to offer recipients with impairments. Many county TANF offices' interactions with SSA include either having a contact at SSA to discuss cases or following up with SSA regarding applications for SSI. Interactions that help individuals with impairments increase their self-sufficiency are even more limited. 
In all the states we visited, we found that such interactions generally existed between TANF agencies and other agencies (such as the Departments of Labor or Education). In addition, 95 percent of county TANF offices reported that their interactions with SSA could be improved. State and county TANF officials feel they have to take the lead in developing and maintaining the interaction with SSA. One SSA headquarters official stated that SSA has no formal policy regarding outreach to TANF offices but would consider a partnership provided there is some benefit for SSA. Still, about 27 percent of county TANF offices reported that they were discouraged in their attempts to establish a relationship with SSA because staff at the local SSA field office told them that they did not have the time or the interest. |
Fiscal year 2013 marked the 10th year of the implementation of IPIA, which, as amended, requires executive agencies to identify programs and activities susceptible to significant improper payments, estimate the amount of improper payments in susceptible programs and activities, and report these improper payment estimates, including root causes, and the actions taken to reduce them. In response to these requirements, executive agencies, including DOD and HHS, annually report improper payment estimates and improper payment rates for certain programs in their AFRs. (See fig. 1.) DHA uses private sector contractors—referred to as TRICARE purchased care contractors (TPCC)—to develop and maintain the private health care provider networks that make up the purchased care system, as well as process and pay claims. The TPCCs include three Managed Care Support Contractors (MCSC) that manage health care networks for most TRICARE benefits in the United States, one contractor to manage overseas claims, one contractor to manage TRICARE's supplemental Medicare coverage program, one contractor to manage the pharmacy benefit, and three contractors to manage dental benefits. Under TRICARE, private providers or TRICARE enrollees submit claims to TPCCs who, on behalf of TRICARE, are responsible for adjudicating and paying the claims according to established policies and procedures. TPCCs subject claims to automatic edits to ensure accuracy and determine how the claims will be adjudicated—either paid or denied. For example, automated edits compare claim information to TRICARE requirements in order to approve or deny claims or to flag them for additional review. TPCCs also conduct more in-depth reviews of certain claims prior to payment. Medicare providers submit claims to Medicare Administrative Contractors (MAC), which are responsible for processing and paying these claims, among other activities.
MACs subject claims to automatic prepayment edits to ensure accuracy, much like the automated edits in the TRICARE purchased care program. For example, some prepayment edits are related to service coverage and payment, while others verify that the claim submissions contain needed information, that providers are enrolled in Medicare, and that patients are eligible Medicare beneficiaries. The MACs processed 1.2 billion Medicare claims in 2013. DHA and CMS also subject a portion of TRICARE and Medicare claims to postpayment review by contractors to identify and recoup improperly paid claims. Most private health insurers also conduct postpayment reviews to identify improper payments, according to organizations we spoke to with knowledge of claims review practices. Multiple review methodologies exist depending on the objective of the review, but many require examination of the underlying medical record. For example, reviews examine the underlying patient medical record to validate that accurate codes were used, that services were rendered as the physician directed, were medically necessary, and were properly documented. The HHS-OIG, which carries out Medicare program integrity activities, uses medical record reviews to determine the scope of improper payments in targeted reviews of specific service types. HHS-OIG officials have stated that by reviewing medical records and other documentation associated with a claim, they can identify services that are undocumented, medically unnecessary, or incorrectly coded, as well as duplicate payments and payments for services that were not provided. For example, the HHS-OIG found that 61 percent of power wheelchairs provided to Medicare beneficiaries in the first half of 2007 were medically unnecessary or had claims that lacked sufficient documentation to determine medical necessity, which accounted for $95 million in improper Medicare payments. 
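Automated prepayment edits of the kind described above amount to a rule pipeline run before a claim is paid. The sketch below is a minimal illustration, assuming a simplified claim record; the field names, ID sets, and dispositions are hypothetical, not actual MAC or TPCC edit logic.

```python
# A minimal sketch of an automated prepayment edit pipeline. The rules,
# field names, and ID sets are hypothetical illustrations, not actual
# TRICARE or Medicare edit logic.

ENROLLED_PROVIDERS = {"P100", "P200"}      # hypothetical enrollment roster
ELIGIBLE_BENEFICIARIES = {"B001", "B002"}  # hypothetical eligibility file
COVERED_SERVICES = {"99213", "E1234"}      # hypothetical covered codes

def prepayment_edits(claim):
    """Return ('pay', 'deny', or 'review') plus the reasons triggered."""
    reasons = []
    # Completeness edit: the claim must carry the fields needed to adjudicate.
    for field in ("provider_id", "beneficiary_id", "service_code", "charge"):
        if claim.get(field) in (None, ""):
            reasons.append(f"missing {field}")
    if reasons:
        return "deny", reasons
    # Enrollment and eligibility edits.
    if claim["provider_id"] not in ENROLLED_PROVIDERS:
        reasons.append("provider not enrolled")
    if claim["beneficiary_id"] not in ELIGIBLE_BENEFICIARIES:
        reasons.append("beneficiary not eligible")
    # Coverage edit: unknown service codes are flagged for manual review
    # rather than denied outright.
    if claim["service_code"] not in COVERED_SERVICES:
        return "review", reasons + ["service code not on covered list"]
    return ("deny", reasons) if reasons else ("pay", reasons)

decision, why = prepayment_edits(
    {"provider_id": "P100", "beneficiary_id": "B001",
     "service_code": "99213", "charge": 85.00})
print(decision, why)  # pay []
```

The three-way disposition mirrors the text: clean claims pay automatically, clearly invalid claims deny, and claims the edits cannot resolve are flagged for additional review.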
IPIA, as amended, requires federal executive branch agencies to (1) review all programs and activities, (2) identify those that may be susceptible to significant improper payments, (3) estimate the annual amount of improper payments for those programs and activities, (4) implement actions to reduce the root causes of improper payments and set reduction targets, and (5) report on the results of addressing the foregoing requirements. In response to these requirements and OMB implementing guidance, agencies generally publicly report their improper payment estimates each November in their AFRs. An improper payment is defined by statute as any payment that should not have been made or that was made in an incorrect amount (including overpayments and underpayments) under statutory, contractual, administrative, or other legally applicable requirements. It includes duplicate payments, and any payment made for an ineligible recipient, an ineligible good or service, a good or service not received (except for such payments where authorized by law), and any payment that does not account for credit for applicable discounts (IPIA, § 2(g)(2), codified, as amended, at 31 U.S.C. § 3321 note). OMB guidance also instructs agencies to report as improper payments any payments for which insufficient or no documentation is found. OMB's implementing guidance gives agencies latitude in their measurement methodologies for improper payments; according to OMB officials, the latitude in the guidance is because of the variation in how federal programs operate. Although OMB's implementation guidance allows such variation, several federal programs which pay for services based on claims submitted by beneficiaries or providers, including Medicare, examined the underlying documentation for each of a sample of claims to determine the validity of payments as part of their efforts to estimate improper payments in fiscal year 2013. For example, CMS's method for testing payments for errors in Medicaid fee-for-service and Children's Health Insurance Program fee-for-service includes both a claims processing review and a medical record review. Most Medicaid and Children's Health Insurance Program improper payments were identified through the medical record reviews in fiscal year 2013. With respect to IPIA's required root cause analysis and corrective action reporting, the corrective actions agencies develop depend, in part, on the improper payments identified by their measurement methodology. OMB guidance on corrective actions states that agencies should continuously use their improper payment measurement results to identify new and innovative corrective actions to prevent and reduce improper payments. Internal control standards for the federal government also state that federal agencies should establish policies and procedures to ensure that the findings of audits and other reviews—including the improper payment measurement results—are promptly addressed and corrected. DHA's approach to measuring improper payments in TRICARE was less comprehensive than that used by CMS for Medicare. Both methodologies evaluate a sample of health care claims paid or denied by the contractors that process program claims. However, while CMS's methodology examined underlying patient medical records supporting each of the sampled claims, DHA did not evaluate comparable medical record documentation to discern whether each payment was supported. Consequently, TRICARE's reported improper payment estimates were not comparable to Medicare's estimates, and likely understated the amount of improper payments in the TRICARE program relative to the estimates produced by Medicare's more comprehensive measurement methodology. The improper payment measurement methodology that DHA used to estimate the TRICARE improper payments reported in DOD's fiscal year 2013 AFR was less comprehensive than the measurement methodology CMS used to estimate Medicare improper payments.
Specifically, the supporting documentation that DHA’s methodology examined to test whether sampled health care claims were paid properly was less comprehensive than Medicare’s methodology, which examined medical record documentation for each sampled claim. According to DHA and CMS guidance, the agencies also developed their measurement methodologies for different purposes. TRICARE: DHA’s approach to measuring TRICARE improper payments examined whether the TPCCs processed and paid submitted claims according to TRICARE policies. Since 1994, DHA has employed a contractor to conduct postpayment claims reviews for the primary purpose of determining the accuracy of TPCCs’ claims processing and compliance with TRICARE policy, according to DHA’s claims review contractor guidance, and contractor and DHA officials. DHA officials reported that DHA has also aggregated these TPCC-specific compliance reviews to report the national TRICARE improper payment rate, as required by IPIA since 2003. While DHA has changed aspects of the compliance review methodology to meet reporting requirements for statistically significant estimates, and, according to DHA, to reflect legal and contractual changes impacting TRICARE, the basic process of reviewing claims has not changed in 20 years. As a result, DHA continues to only identify improper payments due to contractor compliance problems. To determine the TPCCs’ claims processing performance, the TRICARE claims review contractor examines a sample of paid and denied claim records, including any documentation used by the TPCC to adjudicate the claim. For each of the claims the DHA samples, the TPCC is required to send to the TRICARE claims review contractor copies of the processed claim, the beneficiary’s claim history, and any documentation it used to process the claim. 
According to DHA and claims review contractor officials, the documentation varies by claim and can include information from the DHA eligibility database or prior authorization and referral forms. DHA and claims review contractor officials reported that medical record documentation is only included in the improper payment claims review if the TPCC conducted a medical review as part of its original claim processing. According to the TRICARE claims review contractor, DHA officials, and the agency’s claims review guidance, the contractor conducts automated and manual reviews of the claim and supporting documentation to verify that the TPCC processed the claim according to TRICARE policy and contract requirements. For example, the claims review contractor uses automated auditing tools to verify the clinical accuracy of procedure codes listed on the claim. It also verifies that the beneficiary and provider were eligible, the claimed services were covered TRICARE benefits, the TPCC calculated correct pricing and cost sharing, and prior authorization and medical necessity were documented when necessary, among other things. If a medical review was conducted by the TPCC, DHA and the TRICARE claims review contractor told us that the contractor does not typically re-evaluate the TPCC’s decision, but only ensures that the documentation exists. Based on a review of DHA’s claims review guidance and statements from DHA and claims review contractor officials, DHA’s improper payment measurement methodology also does not independently validate that the medical records support the diagnosis or procedure codes submitted on the claim. According to DHA guidance, if the TPCC did not provide a copy of the claim or processed the claim incorrectly based on the documentation provided, the claims review contractor will consider some or all of the payment as an error and action is taken to adjust the payments accordingly. 
DHA guidance provides TPCCs the opportunity to submit additional documentation to support their processing decisions and remove certain errors. After the audit results are finalized, DHA uses the information to calculate improper payment rates for each TPCC and to estimate its national improper payment rate. Medicare: CMS developed the Comprehensive Error Rate Testing (CERT) program to estimate the national Medicare improper payment rate to comply with IPIA, and to monitor payment decisions made by the MACs, according to CMS’s CERT guidance. CMS’s CERT program methodology focuses on compliance with conditions of Medicare’s payment policies by both the provider and MAC. The CERT program targets high-risk aspects of the Medicare program. Specifically, CMS officials told us that because Medicare maintains common shared systems that determine for all MACs whether a provider is enrolled in Medicare, and what the payment rate should be, CMS has deemed these aspects of the claims payment process to be at low risk of improper payments, and they are not examined through CERT. Instead, the CERT program focuses on problems that MACs cannot otherwise identify using automated means, according to CMS officials. CMS has employed contractors to carry out the CERT program since 2003. CMS has reported that the agency has modified the CERT measurement methodology to address identified trends and improve accuracy. CMS’s approach to measuring improper payments involves examining the medical record associated with a stratified random sample of processed Medicare claims to determine whether there is support for the payment, and to assess whether the payment followed Medicare’s coverage, coding, and billing rules. CMS’s CERT guidance specifies that, for each sampled claim, the CERT documentation contractor obtain the medical record and other pertinent documentation from the provider that submitted the claim. 
If the provider does not provide the medical record and other requested information, the CERT review contractor identifies the payment amount as an error. According to CMS’s CERT guidance and contractor officials, when medical records are received, the contractor’s clinical and coding specialists review the claim and the supporting medical records to assess whether the claim followed Medicare’s payment rules. Claims that do not follow Medicare’s payment rules or claims for which the provider submitted insufficient documentation to determine that the services were provided or medically necessary are classified as an error by the CERT reviewer and action is taken to adjust the payments accordingly. Medicare allows providers whose claims were denied by the CERT review contractor to appeal those claims, and if the error determination for a claim is overturned through the appeals process, the CERT review contractor adjusts the error accordingly. Once all the errors are finalized, the CERT statistical contractor calculates the national error rates. Table 1 compares the purpose of and documentation reviewed by the TRICARE and Medicare improper payment measurement methodologies. Compared to DHA’s methodology, CMS’s CERT methodology of examining underlying medical records to independently verify Medicare claims and payments more completely identifies potential improper payments, such as those caused by provider noncompliance with coding, billing, and payment rules. While DHA’s methodology is designed to identify improper payments resulting from TPCC claims processing compliance errors, it does not comprehensively capture errors that occur at the provider level or errors that can only be identified through an examination of underlying medical record documentation. Table 2 compares examples of the information verified by the TRICARE and Medicare improper payment measurement methodologies. 
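The final step described above, in which a statistical contractor projects the sampled error findings into a national rate, can be sketched roughly as follows. This is a simplified illustration of a stratified estimate, not CERT's actual statistical design; the strata, dollar amounts, and sample values are all hypothetical.

```python
# Hypothetical sketch of estimating a national improper payment rate from a
# stratified random sample of claims, in the spirit of the review process
# described above. Strata, outlays, and claim amounts are invented.

def improper_payment_rate(strata):
    """Each stratum supplies its total program outlays and a sample of
    (paid_amount, improper_amount) tuples. The sampled error ratio is
    projected onto the stratum's outlays, then summed across strata."""
    projected_improper = 0.0
    total_outlays = 0.0
    for outlays, sample in strata:
        sampled_paid = sum(paid for paid, _ in sample)
        sampled_error = sum(err for _, err in sample)
        error_ratio = sampled_error / sampled_paid if sampled_paid else 0.0
        projected_improper += error_ratio * outlays
        total_outlays += outlays
    return projected_improper / total_outlays

# Two hypothetical strata: (total outlays, sampled claims as (paid, improper)).
strata = [
    (200e9, [(100.0, 0.0), (250.0, 25.0), (80.0, 0.0), (70.0, 25.0)]),
    (157e9, [(500.0, 0.0), (300.0, 0.0), (200.0, 0.0)]),
]
rate = improper_payment_rate(strata)
print(f"Estimated improper payment rate: {rate:.1%}")  # → 5.6%
```

In practice the contractors use far larger samples and more elaborate weighting and variance calculations, but the core idea, projecting sampled error dollars onto total outlays, is the same.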
CMS’s CERT methodology identifies certain improper payments that DHA’s TRICARE claims review methodology would not fully identify. Such improper payments accounted for nearly all of the 10.1 percent improper payment rate that CMS reported in fiscal year 2013. For example, differences include: Evidence of medical necessity: As noted, the TRICARE claims review contractor’s medical necessity review is limited to confirming that the TPCC completed a medical review when required and the claim passed certain edits. Consequently, the review contractor may not identify payments for medically unnecessary services for the claims that a TPCC did not previously review. The claims review contractor would also fail to identify whether the TPCC made an improper medical necessity determination for those claims that it was required to review because the claims review contractor does not re-review the TPCC’s determination. Conversely, CMS’s CERT methodology identifies such errors. Through the CERT program’s independent medical record review for each sampled claim, CMS has estimated that improper payments related to medically unnecessary services accounted for 2.8 percent of total Medicare payments and 26.6 percent of total improper payments in fiscal year 2013. Verification of correct coding: The TRICARE claims review contractor confirms that the codes used for reimbursement match the diagnosis claimed and pass coding edits, but does not verify that the medical documentation validates the codes that were billed or the diagnosis claimed. As a result, the TRICARE claims review methodology could fail to identify if a provider used, and the TPCC paid for, services based on an incorrect code. CMS’s CERT program identifies such errors and estimated that 1.5 percent of Medicare payments in fiscal year 2013 were improper because of incorrect coding. Such errors accounted for 13.7 percent of total estimated improper payments that year. 
Documentation of provider services: Since DHA’s claims review methodology does not request documentation from providers, it is unclear whether TRICARE providers maintain the required documentation to support the services they claim. In contrast, CMS estimated that 6.1 percent of Medicare payments were improper in fiscal year 2013 because of insufficient documentation, which accounted for 56.8 percent of total estimated improper payments. That is, the provider submitted some documentation, but the CERT reviewer could not conclude that some of the allowed services were actually provided at the level billed or were medically necessary. In addition, “no documentation” errors—where the provider submitted none of the requested medical records—accounted for 0.2 percent of Medicare payments or 1.4 percent of total improper payments in fiscal year 2013. DHA officials reported that TRICARE has other postpayment mechanisms in place to examine medical records and thus identify the types of improper payments that the TRICARE claims review program does not. However, the results of the other mechanisms are not reflected in the estimated improper payment rates that DHA reports. For example, DHA conducts quality monitoring reviews that analyze medical record documentation and identify problems such as paid services that were not medically necessary. DHA policy also requires the TPCCs to conduct quarterly internal reviews of a sample of medical records to determine the medical necessity of care provided, and determine if the diagnostic and procedural information of the patient—as reported on the claim—matches the physician’s description of care and services documented in the medical record. However, the potential problems identified by these reviews are not considered or publicly reported as improper payments in the DOD’s AFR. 
Due to the fundamental differences in DHA’s and CMS’s approaches to measuring improper payments, reported improper payment rates for TRICARE and Medicare are not comparable. By not examining underlying medical record documentation to discern if payments for claims are proper, DHA is likely not identifying all types of improper payments in TRICARE, and thus understating the rate of improper payments. OMB’s IPIA implementation guidance does not specifically dictate how agencies should test for improper payments. However, Medicare and certain other federal claims-based programs conduct more comprehensive reviews that include examination of the underlying documentation for each sampled claim to determine the validity of payment as part of their efforts to estimate improper payments under IPIA. The HHS-OIG and most of the organizations with knowledge of health care claims review practices that we spoke with also acknowledge that reviewing the underlying medical records is needed to verify appropriate payment. The root causes and related corrective actions that DHA reported in DOD’s fiscal year 2013 AFR are limited to addressing issues of contractor noncompliance with claims processing requirements. For example, DHA reported the following root causes for the 0.3 percent of payments it found to be improper: incorrect pricing for medical procedures and equipment (47 percent), missing authorization or pre-authorization (14 percent), and cost sharing or deductible miscalculations (11 percent). These categories are largely processing errors that reflect DHA’s approach to identifying errors, and do not address underlying causes of improper payments not related to contractor compliance, such as errors made by providers who may not fully understand or comply with DHA policies. DHA cannot fully identify provider-level improper payment errors without reviews of the paperwork submitted by providers, including reviews of underlying medical records. 
DHA’s one corrective action for TRICARE for the past three fiscal years—to incentivize payment accuracy through contract bonuses and penalties based on audit results—may be a good method to promote contractor compliance, but it will not address providers’ noncompliance with billing rules. DHA officials said that they have not changed or added to the corrective action plan in at least three fiscal years because contract requirements are still in place to financially incentivize contractors to process health care claims correctly. Although DHA could include other corrective actions, the current approach only addresses improper payments caused by contractors’ claims processing errors. Under the IPIA, as amended, and implementing guidance, agencies are to identify program weaknesses, make improvements, and reduce future improper payments. Our prior work has found that DHA missed opportunities to prevent future improper payments; for example, in a May 2013 report examining IPIA compliance throughout DOD, we found that DOD did not adequately implement key IPIA provisions and OMB requirements for fiscal year 2011. We recommended that DOD’s corrective action plans be developed using best practices to ensure that root causes are addressed, improper payments reduced, and federal dollars protected. A senior DOD official told us that the agency planned to implement this recommendation by November 15, 2014; however, DHA cannot address these recommendations with respect to TRICARE until it has identified improper payments using a measurement methodology that goes beyond contractor compliance issues. CMS, by comparison, reported more detailed and constructive information about the 10.1 percent of Medicare payments it reported as improper in HHS’s fiscal year 2013 AFR. In addition to describing the types of errors that most frequently led to improper payments, CMS also provided contextual information about specific factors that contributed to the errors. 
For example, CMS reported that some improper payments were made for services that, while clinically appropriate, could be provided in less intensive settings and therefore did not meet Medicare’s medical necessity requirements. CMS also identified the provider types that contributed most substantially to each type of improper payment. For example, hospitals contributed substantially to medical necessity errors. CMS’s multiple corrective actions are more detailed and clearly tied to reported root causes of Medicare improper payments than TRICARE’s. For example, CMS is expanding the Medicare Recovery Audit Contractor program to allow prepayment reviews of certain types of claims with historically high amounts of improper payments, therefore preventing improper payments from being made in the first place; and implementing two policies pertaining to inpatient hospital claims that will specifically address the identified root cause of care being provided in inappropriately intensive settings. CMS provides MACs with contract-specific root causes of improper payment data on a quarterly basis. These data are used by MACs to update their corrective actions quarterly. Quarterly updates to corrective actions allow CMS and its contractors to tailor efforts to address specific root causes of errors, and review its plans for reducing errors using measurable targets, which help the agency know when it has made progress in addressing program weaknesses. In addition to reporting root causes and corrective actions in the AFR, CMS uses its Medicare improper payment results to address the agency’s stated goal of reducing Medicare improper payments due to programmatic weaknesses. 
For example, CMS annually develops and reports a more detailed analysis of improper payment findings than is provided in the AFR by providing specific examples of areas identified as particularly vulnerable to improper payments, analysis of root causes of those errors, and detailed error information by service and provider; and undertakes program-wide action to address improper payment findings. For example, after finding that durable medical equipment suppliers contributed substantially to insufficient documentation errors, CMS began a prior authorization demonstration in seven states to reduce improper payments for power mobility devices. In comparison with DHA, CMS has a more comprehensive approach to identifying Medicare improper payments and root causes, and addresses those weaknesses through its corrective actions. Without a more comprehensive approach, DHA will be limited in its ability to address the causes of improper payments in the TRICARE program. The extent of improper payments identified by agencies depends, in part, on how they test their program components for errors. TRICARE and Medicare are at similar risk for improper payments because both health care programs pay providers on a fee-for-service basis, the programs’ providers overlap, both programs depend on contractors to process and pay claims, and TRICARE uses some of Medicare’s coverage and payment policies. However, DHA does not have as robust an approach to measuring improper payments in the TRICARE program as CMS has for the Medicare program. Specifically, DHA does not routinely examine medical record documentation in its approach to measuring TRICARE improper payments. While DHA has other reviews in place that analyze medical record documentation and could be leveraged to more comprehensively identify improper payments, the results of those reviews are not considered or reported as improper payments. 
This may account for why the reported improper payment rate for TRICARE is less than 1 percent while the reported rate for Medicare is 10 percent. Although TRICARE is a smaller program compared to Medicare, it still costs the government a significant amount of money—about $21 billion in fiscal year 2013 for the purchased care portion of TRICARE—and DOD has determined TRICARE to be susceptible to significant improper payments under IPIA, as amended. Without a robust measure of improper payment rates in the TRICARE program, DHA cannot effectively identify root causes and take steps to address practices that contribute to improper payments and excess spending. To better assess and address the full extent of improper payments in the TRICARE program, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to take the following two actions: 1. implement a more comprehensive TRICARE improper payment measurement methodology that includes medical record reviews, as done in other parts of its existing postpayment claims review programs; and 2. once a more comprehensive improper payment methodology is implemented, develop more robust corrective action plans that address underlying causes of improper payments, as determined by the medical record reviews. We provided a draft of this report to DOD and HHS for comment. In its written comments, reproduced in appendix I, DOD concurred with our recommendations. DOD also outlined the steps the department will take prior to implementation, including conducting discussions within the department; developing implementation plans; and hiring or contracting for the needed workforce to begin implementing the recommendations. DOD noted that taking these steps would take time. Given the potentially high cost of improper payments, we believe DOD should move expeditiously. HHS had no comments on the report. 
We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Assistant Secretary of Defense (Health Affairs), the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Lori Achman, Assistant Director; Rebecca Abela; Drew Long; Dawn Nelson; and Jennifer Whitworth made key contributions to this work. Improper Payments: Government-Wide Estimates and Reduction Strategies. GAO-14-737T. Washington, D.C.: July 9, 2014. Medicare: Further Action Could Improve Improper Payment Prevention and Recoupment Efforts. GAO-14-619T. Washington, D.C.: May 20, 2014. Medicare Program Integrity: Contractors Reported Generating Savings, but CMS Could Improve Its Oversight. GAO-14-111. Washington, D.C.: October 25, 2013. Medicare Program Integrity: Increasing Consistency of Contractor Requirements May Improve Administrative Efficiency. GAO-13-522. Washington, D.C.: July 23, 2013. DOD Financial Management: Significant Improvements Needed in Efforts to Address Improper Payment Requirements. GAO-13-227. Washington, D.C.: May 13, 2013. Medicare Program Integrity: Few Payments in 2011 Exceeded Limits under One Kind of Prepayment Control, but Reassessing Limits Could Be Helpful. GAO-13-430. Washington, D.C.: May 9, 2013. Medicaid: Enhancements Needed for Improper Payments Reporting and Related Corrective Action Monitoring. GAO-13-229. Washington, D.C.: March 29, 2013. 
Medicare Program Integrity: Greater Prepayment Control Efforts Could Increase Savings and Better Ensure Proper Payment. GAO-13-102. Washington, D.C.: November 13, 2012. Program Integrity: Further Action Needed to Address Vulnerabilities in Medicaid and Medicare Programs. GAO-12-803T. Washington, D.C.: June 7, 2012. Improper Payments: Remaining Challenges and Strategies for Governmentwide Reduction Efforts. GAO-12-573T. Washington, D.C.: March 28, 2012. Improper Payments: Moving Forward with Governmentwide Reduction Strategies. GAO-12-405T. Washington, D.C.: February 7, 2012. Medicare Integrity Program: CMS Used Increased Funding for New Activities but Could Improve Measurement of Program Effectiveness. GAO-11-592. Washington, D.C.: July 29, 2011. Improper Payments: Reported Medicare Estimates and Key Remediation Strategies. GAO-11-842T. Washington, D.C.: July 28, 2011. Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. Improper Payments: Status of Agencies’ Efforts to Address Improper Payment and Recovery Auditing Requirements. GAO-08-438T. Washington, D.C.: January 31, 2008. Improper Payments: Federal Executive Branch Agencies’ Fiscal Year 2007 Improper Payment Estimate Reporting. GAO-08-377R. Washington, D.C.: January 23, 2008. Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007. Strategies to Manage Improper Payments: Learning From Public and Private Sector Organizations. GAO-02-69G. Washington, D.C.: October 2001. | Improper payments—payments that were made in an incorrect amount or should not have been made at all—are a contributor to excess health care costs. For programs identified as susceptible to significant improper payments, federal agencies are required to annually report estimates of improper payments, their root causes, and corrective actions to address them. 
In fiscal year 2013, DOD spent about $21 billion for TRICARE and estimated improper payments of $68 million, or an error rate of 0.3 percent. That year, HHS estimated that $36 billion, or 10.1 percent, of the total $357 billion in Medicare payments were improper. GAO was mandated to examine improper payments in TRICARE and Medicare. This report addresses (1) TRICARE and Medicare improper payment measurement comparability; and (2) the extent to which each program identifies root causes of, and develops corrective actions to address, improper payments. GAO examined DHA and CMS documentation related to improper payment measurement and corrective actions, reviewed relevant laws and guidance, and interviewed agency officials and contractors. The Defense Health Agency (DHA), the agency within the Department of Defense (DOD) responsible for administering the military health program known as TRICARE, uses a methodology for measuring TRICARE improper payments that is less comprehensive than the methodology used to measure improper payments in Medicare, the federal health care program for the elderly and certain disabled individuals. Both methodologies evaluate a sample of health care claims paid or denied by the contractors that process the programs' claims. However, DHA's methodology only examines the claims processing performance of the contractors that process TRICARE's purchased care claims. Unlike Medicare, DHA does not examine the underlying medical record documentation to discern whether each sampled payment was supported. Without examining the medical record, DHA does not verify the medical necessity of services provided. The agency also does not validate that the diagnostic and procedural information reported on the claim matches the care and services documented in the medical record. 
Comparatively, the Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services' (CMS) approach to measuring Medicare improper payments examines medical records associated with a sample of claims to verify support for the payment. This methodology more completely identifies improper payments beyond those resulting from claim processing errors, such as those related to provider noncompliance with coding, billing, and payment rules. By not examining medical record documentation to discern if payments are proper, TRICARE's reported improper payment estimates are not comparable to Medicare's estimates, and likely understate the amount of improper payments relative to the estimates produced by Medicare's more comprehensive methodology. The root causes of TRICARE improper payments and related corrective actions that DHA has identified are limited to addressing issues of contractor noncompliance with claims processing requirements, and are less comprehensive than the corrective actions identified by CMS. For example, DHA has identified the same single corrective action for each of the last three fiscal years to promote contractor compliance, but it only addresses improper payments caused by contractors' claims processing errors. CMS, by comparison, reports more comprehensive information about root causes of improper Medicare payments, develops corrective actions that more directly address root causes, and uses the information to address the agency's goal of reducing future improper payments. For example, for fiscal year 2013, CMS determined that some payments were improper because the services could have been provided in less intensive settings and CMS subsequently implemented two policies to address the problem. In contrast, DHA's less comprehensive approach limits its ability to address the causes of improper payments in the TRICARE program. 
DOD should implement more comprehensive TRICARE improper payment measurement methods that include medical record reviews, and develop more robust corrective action plans. DOD concurred with GAO's recommendations and identified steps the department will need to take for implementation. HHS had no comments on the report. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Information security is a critical consideration for any organization reliant on information technology (IT) and especially important for government agencies, such as NARA, where maintaining the public’s trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have changed the way our government, the nation, and much of the world communicate and conduct business. Although this expansion has created many benefits for agencies in achieving their missions and providing information to the public, it also exposes federal networks and systems to various threats. Without proper safeguards, systems are unprotected from attempts by individuals and groups with malicious intent to intrude and use the access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. This concern is well-founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, the steady advance in the sophistication and effectiveness of attack technology, and the dire warnings of new and more destructive attacks to come. Over the past few years, federal agencies have reported an increasing number of security incidents, many of which involved sensitive information that has been lost or stolen, including personally identifiable information, which has exposed millions of Americans to the loss of privacy, identity theft, and other financial crimes. NARA is the nation’s record keeper. It was created by statute as an independent agency in 1934. On July 1, 1949, the Federal Property and Administrative Services Act transferred the National Archives to the General Services Administration, and its name was changed to National Archives and Records Services. 
It attained independence again as an agency in October 1984 (effective April 1, 1985) and became known as the National Archives and Records Administration. NARA’s mission is to ensure continuing access to essential documentation of the rights of American citizens and the actions of their government. NARA also publishes the Federal Register, stores classified materials, and plays a role in the declassification of these classified records. The Archivist of the United States is NARA’s chief administrator and has responsibilities that include providing federal agencies with guidance and assistance for records management and establishing standards for records retention. The Archivist also has overall responsibility for ensuring the confidentiality, integrity, and availability of the information and information systems that support the agency and its operations. The Assistant Archivist for Information Services has the responsibilities of NARA’s Chief Information Officer. In fiscal year 2009, NARA’s appropriation was about $459 million, while its fiscal year 2010 appropriation is about $470 million. NARA is composed of six major divisions (see table 1) that include 44 facilities such as the headquarters locations in Washington, D.C., and College Park, Maryland; presidential libraries; and regional archives nationwide. NARA depends on a number of key information systems to conduct its daily business functions and support its mission. These systems include networks, telecommunications, and specific applications. As of fiscal year 2009, NARA reported having 39 IT systems and 4 externally hosted systems. According to NARA, as part of its key transformation initiative, in 2001 the agency responded to the challenge of preserving, managing, and assessing electronic records by beginning the development of the modern Electronic Records Archives (ERA) system. 
This major information system is intended to preserve and provide access to massive volumes of all types and formats of electronic records, independent of their original hardware or software. NARA plans for the system to manage the entire life cycle of electronic records, from their ingestion through preservation and dissemination to customers. We have previously made numerous recommendations to NARA to improve its acquisition and monitoring of the system. Table 2 lists examples of key NARA systems. The Office of Information Services at the Archives II facility provides centralized management and control of NARA’s IT resources and services, including NARANET, the primary general support system of NARA. As shown in figure 1, NARANET is centrally located at Archives II and connects to other government and academic entities. NARANET is extended to field sites via a private network, operated by a service provider. In addition, at locations where the public has research access, NARA provides access to the Internet through the use of public access computers. The Federal Information Security Management Act of 2002 (FISMA) requires each federal agency to develop, document, and implement an requires each federal agency to develop, document, and implement an agencywide information security program to provide security for the agencywide information security program to provide security for the information and information systems that support the operations and information and information systems that support the operations and assets of the agency, including those provided or managed by other assets of the agency, including those provided or managed by other agencies, contractors, or other sources. FISMA requires the Chief agencies, contractors, or other sources. 
FISMA requires the Chief Information Officer or comparable official at federal agencies to be responsible for developing and maintaining an information security program. The Office of Information Services centrally administers NARA’s IT security program at the Archives II facility. The Assistant Archivist for Information Services, who also serves as the Chief Information Officer (CIO), is the head of the Office of Information Services. As described in table 3, NARA has designated certain senior managers or divisions at headquarters to fill the key roles in IT security designated by FISMA and agency policy. FISMA also requires the National Institute of Standards and Technology (NIST) to provide standards and guidance to agencies on information security. NARA has a directive in place to establish its policy and guidance for information security, delineate its security program structure, and assign security responsibilities. NARA has taken steps to safeguard the information and systems that support its mission. For example, it has developed a policy for granting or denying access rights to its resources, employs mechanisms to prevent and respond to security breaches, and makes use of encryption technologies to protect sensitive data.
However, security control weaknesses pervaded NARA’s systems and networks, thereby jeopardizing the agency’s ability to sufficiently protect the confidentiality, integrity, and availability of its information and systems. These deficiencies include those related to access controls, as well as other controls such as configuration management and segregation of duties. A key reason for these weaknesses is that NARA has not yet fully implemented its agencywide information security program to ensure that controls are appropriately designed and operating effectively. These weaknesses could affect NARA’s ability to collect, process, and store critical information and records, and protect that information from risk of unauthorized use, modification, and disclosure. In addition to access controls, other important controls should be in place to ensure the confidentiality, integrity, and availability of an organization’s information. These controls include policies, procedures, and techniques for securely configuring information systems, sufficiently disposing of media, implementing personnel security, and segregating incompatible duties. Weaknesses in these areas increase the risk of unauthorized use, disclosure, modification, or loss of sensitive information and information systems supporting NARA’s mission. One of the purposes of configuration management is to establish and maintain the integrity of an organization’s work products. It involves identifying and managing security features for all hardware, software, and firmware components of an information system at a given point and systematically controlling changes to that configuration during the system’s life cycle. By implementing configuration management and establishing and maintaining baseline configurations and monitoring changes to these configurations, organizations can better ensure that only authorized applications and programs are placed into operation. 
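The baseline monitoring described above can be sketched as a diff of a system's current settings against its approved baseline. The setting names and values below are invented for illustration; they are not drawn from the audit.

```python
def baseline_drift(baseline, current):
    """Report settings that differ from the approved baseline or were added outside it."""
    drift = {}
    for setting, approved in baseline.items():
        actual = current.get(setting, "<missing>")
        if actual != approved:
            drift[setting] = (approved, actual)
    for setting in current:
        if setting not in baseline:
            drift[setting] = ("<not in baseline>", current[setting])
    return drift

# Hypothetical baseline and observed configuration for one system.
baseline = {"telnet_enabled": "no", "min_password_length": "12", "audit_logging": "on"}
current = {"telnet_enabled": "yes", "min_password_length": "12",
           "audit_logging": "on", "guest_account": "enabled"}

print(baseline_drift(baseline, current))
```

A run of this kind flags both drifted settings and unauthorized additions, which is the property configuration management is meant to preserve.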
NARA policy requires that the security settings of information technology products be configured in the most restrictive mode possible. Both NIST standards and NARA policy require that system changes be controlled. Patch management is an additional component of configuration management, and is an important factor in mitigating software vulnerability risks. Up-to-date patch installation can help diminish vulnerabilities associated with flaws in software code. NIST states that organizations should promptly install newly released security-relevant patches, service packs, and hot fixes and test them for effectiveness and potential side effects on the organization’s information systems. NARA had not securely configured several of its systems. For example, network configurations were not always restricted in accordance with best practices; additionally, Web applications and operating systems were not always restricted in accordance with NIST guidance. While NARA has maintained and tracked configuration changes for its ERA system, it has not consistently documented the status of those changes. NARA documented, maintained, and tracked approvals for ERA’s system change requests in its meeting minutes as well as in a system for managing those change requests, but the information in the meeting minutes and the change repository was inconsistent. For example, change requests agreed to in meeting minutes from October 2009 to March 2010 did not always match those entered in the repository storing those changes. Specifically, some change requests were approved for implementation in the meeting, but were listed in the repository as closed. Others were reflected as being on hold, but were actually listed as canceled in the repository. According to ERA configuration management staff, these inconsistencies exist because the configuration control board status represents a single point in time of each change request.
Subsequent changes to the system related to each change request are handled by release management staff. Therefore, the status in the repository will continue to change. Configuration management staff have the responsibility to document updates to changes in status at various points in the process. In addition, NARA had not implemented an effective patch management program for the systems we reviewed. For example, patches had not been consistently applied to critical systems or applications in a timely manner. Specifically, several critical systems had not been patched or were out of date, some of which had known vulnerabilities. Additionally, NARA used out-of-date or unsupported software and products in some instances. As a result of these control deficiencies, increased risk exists that the integrity of NARA systems could be compromised. Media destruction and disposal are key to ensuring confidentiality of information. Media can include magnetic tapes, optical disks (such as compact disks), and hard drives. Organizations safeguard used media to ensure that the information they contain is appropriately controlled. Media that is improperly disposed of can lead to the inappropriate or inadvertent disclosure of an agency’s sensitive information or the personally identifiable information of its employees and customers. NARA uses degaussers to remove sensitive information from hard drives and tapes before reuse or destruction. This equipment should then be certified, verifying that it was tested and performed correctly. NIST recommends that organizations test sanitization equipment and procedures to verify correct performance. NARA’s policy for protection of media requires that sanitization equipment be tested annually. However, NARA has not always ensured that equipment used for removing sensitive information was tested annually. For example, while the degausser located at one location was certified annually, one at another location was not.
Specifically, one degausser was certified in January 2010, while the other had not been certified since July 2008, about 20 months prior to our on-site visit. By not testing and certifying its degausser, NARA has reduced assurance that the equipment is performing according to certified requirements. The greatest harm or disruption to a system comes from the actions, both intentional and unintentional, of individuals. These intentional and unintentional actions can be reduced through the implementation of personnel security controls. According to NIST, personnel security controls help organizations ensure that individuals occupying positions of responsibility (including third-party service providers) are trustworthy and meet established security criteria for those positions. For employees and contractors assigned to work with confidential information, confidentiality, nondisclosure, or security access agreements specify required precautions, acts of unauthorized disclosure, contractual rights, and obligations during employment and after termination. NARA’s security policy for personnel screening states that the type of investigation is based on the sensitivity of the position to be held. NARA conducted the appropriate background investigations for the employees and contractors we reviewed. These individuals also had appropriate nondisclosure agreements signed when applicable to their position. However, at one location contractors had not signed nondisclosure agreements for the ERA system. NARA staff acknowledged the issue and subsequently had the contractors sign the nondisclosure agreements. Segregation of duties refers to the policies, procedures, and organizational structures that help ensure that no single individual can independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records.
Often, organizations achieve segregation of duties by dividing responsibilities among two or more individuals or organizational groups. This diminishes the likelihood that errors and wrongful acts will go undetected, because the activities of one individual or group will serve as a check on the activities of the other. Effective segregation of duties includes segregating incompatible duties and maintaining formal operating procedures, supervision, and review. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. For systems categorized as high or moderate impact, NIST states that incompatible duties should be segregated, such as by not allowing security personnel who administer system access control functions to administer audit functions. NARA also has a policy requiring segregation of duties. NARA did not always implement effective segregation of duties controls. For example, two staff members were each assigned security and system administration roles and responsibilities, as either a primary or backup for the ERA system (a high impact system). In addition, those individuals had privileges that allowed them to delete logs generated by the system used for auditing and logging security events. According to NARA staff, periodic reviews of the administrators’ access were performed using checklists that require administrators to review each other’s access activities. However, at the time of our review, NARA had not documented its oversight process to ensure controls for separation of duties were implemented appropriately. As a result, NARA may face an increased risk that improper program changes or activities could go unnoticed.
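The incompatible-duty rule described above, that personnel administering access control should not also administer audit functions, can be expressed as an automated check over role assignments. A minimal sketch; the user IDs and role labels are illustrative, not taken from NARA's systems.

```python
# Pairs of roles that a segregation-of-duties policy treats as incompatible
# when held by the same individual (illustrative role names).
INCOMPATIBLE = [
    {"security_admin", "audit_admin"},
    {"system_admin", "audit_admin"},
]

def sod_violations(assignments):
    """Return (user, conflicting-role-pair) tuples for users holding incompatible roles."""
    violations = []
    for user, roles in sorted(assignments.items()):
        for pair in INCOMPATIBLE:
            if pair <= set(roles):  # user holds every role in an incompatible pair
                violations.append((user, tuple(sorted(pair))))
    return violations

assignments = {
    "alice": ["security_admin", "audit_admin"],  # conflict: can alter and audit access
    "bob": ["system_admin"],
    "carol": ["audit_admin"],
}
print(sod_violations(assignments))
```

A periodic run of such a check, with its output retained, would also give the documented oversight trail the report found missing.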
A key reason for the weaknesses in information security controls intended to protect NARA’s systems is that the agency has not yet fully implemented its agencywide information security program to ensure that controls are effectively established and maintained. FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes:

- periodic assessments of the risk and the magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems;
- policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce risks, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements;
- plans for providing adequate information security for networks, facilities, and systems;
- security awareness training to inform personnel of information security risks and of their responsibilities for complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security;
- periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, which is to be performed with a frequency depending on risk, but no less than annually, and which includes testing the management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems;
- a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in its information security policies, procedures, or practices;
- procedures for detecting, reporting, and responding to security incidents; and
- plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.
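Several of these requirements are calendar-driven: control testing no less than annually, and risk assessments and reauthorizations at fixed intervals. A sketch of a simple currency check against such intervals; the system names, dates, and interval values are hypothetical, chosen to mirror the annual-testing and 3-year assessment cycles the report discusses.

```python
from datetime import date

# Maximum allowed age, in days, for each recurring activity (illustrative values:
# annual control testing, 3-year risk assessments).
MAX_AGE = {"control_test": 365, "risk_assessment": 3 * 365}

def overdue_activities(records, as_of):
    """Return (system, activity) pairs whose last completion exceeds the allowed interval."""
    overdue = []
    for system, activities in sorted(records.items()):
        for activity, last_done in sorted(activities.items()):
            if (as_of - last_done).days > MAX_AGE[activity]:
                overdue.append((system, activity))
    return overdue

records = {
    "sys-a": {"control_test": date(2009, 9, 1), "risk_assessment": date(2008, 5, 1)},
    "sys-b": {"control_test": date(2008, 1, 1), "risk_assessment": date(2006, 1, 1)},
}
print(overdue_activities(records, as_of=date(2010, 3, 1)))
```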
Although NARA has developed and documented a framework for its information security program, key components of the program have not been fully or consistently implemented. In order for agencies to determine what security controls are needed to protect their information resources, they must first identify and assess their information security risks. FIPS publication 199 provides risk-based criteria to identify and categorize information and information systems based on their impact to the organization’s mission. In addition, the Office of Management and Budget (OMB) states that a risk-based approach is required to determine adequate security, and it encourages agencies to consider major risk factors, such as the value of the system or application, threats, vulnerabilities, and the effectiveness of current or proposed safeguards. By increasing awareness of risks, these assessments can generate support for policies and controls. NIST states that organizations should also assess physical security risks to their facilities when they perform required risk assessments of their information systems. Federal standards require that NARA conduct vulnerability risk assessments at least every 3 years for the buildings and facilities we visited. NARA has developed and conducted risk assessments, but has not consistently documented risk or assessed risk in a timely manner at its facilities. For example, NARA had developed risk assessments for all 10 of the systems in our review, but other system documentation for 4 of the 10 systems cited FIPS 199 impact levels that did not match those listed in NARA’s systems inventory. Documents for 3 systems reflected impact ratings higher than those listed in the systems inventory and the fourth one reflected a lower rating. Similarly, while NARA had conducted physical security risk assessments for the sites we reviewed, several had not been conducted within the required 3-year time frame. 
As a result, NARA may not have assurance that adequate controls are in place to protect its information and information systems. Another key element of an effective information security program is to develop, document, and implement risk-based policies, procedures, and technical standards that govern security over an agency’s computing environment. FISMA requires agencies to develop and implement policies and procedures to support an effective information security program. If properly implemented, policies and procedures should help reduce the risk that could come from unauthorized access or disruption of services. Developing, documenting, and implementing security policies are the primary mechanisms by which management communicates its views and requirements; these policies also serve as the basis for adopting specific procedures and technical controls. NARA has developed information security policies and procedures that are based on NIST guidelines. For example, NARA has developed individual policy documents that address all of the families of controls listed in NIST Special Publication 800-53. To illustrate, NARA has developed information security methodologies that correspond to the controls required by NIST in the areas of access controls, configuration management, contingency planning, and security awareness training. However, NARA’s policies and procedures were not always consistent with NIST guidance. For example, NARA has not always prescribed controls based on the system’s impact. NIST requires organizations to determine their information systems’ impact using the security objectives of confidentiality, integrity, and availability and states that this information system impact level must be determined prior to the consideration of minimum security requirements and the selection of security controls for those information systems. 
Instead, NARA prescribed controls based on individual security objectives without taking into consideration the predetermined impact level (based on the three security objectives) of an individual system. To illustrate, NARA’s access control policy only specifies controls for systems with moderate or high confidentiality, rather than suggesting controls according to the impact of the system, as determined by all three security objectives. Similarly, NARA’s certification and accreditation and contingency planning methodologies prescribed controls for systems with moderate or high integrity and availability, respectively, and not based on the impact level of the system. As a result, NARA’s policy may not provide the information needed to ensure that appropriate systems controls are selected that protect its information systems. An objective of system security planning is to improve the protection of information technology resources. A system security plan provides an overview of the system’s security requirements and describes the controls that are in place—or planned—to meet those requirements. OMB Circular No. A-130 requires that agencies develop system security plans for major applications and general support systems, and that these plans address policies and procedures for providing management, operational, and technical controls. NIST Special Publication 800-53 states that the security plan should be updated to address changes to the system, its environment of operation, or problems identified during plan implementation or security control assessments. One of the controls recommended by NIST Special Publication 800-53 is the development of an inventory of an information system’s components. This inventory should, among other things, accurately reflect the current information system, be consistent with the authorized boundary of the system, and be available for review. 
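NIST's rule that the system impact level be derived from all three security objectives before controls are selected is the "high water mark": the overall level is the highest of the confidentiality, integrity, and availability ratings. A minimal sketch of that rule, combined with a check against a recorded inventory level of the kind whose mismatches the report describes; the system names and ratings are invented.

```python
LEVELS = ["low", "moderate", "high"]

def system_impact(confidentiality, integrity, availability):
    """High-water mark: the overall impact level is the highest of the three objectives."""
    return max((confidentiality, integrity, availability), key=LEVELS.index)

def inventory_mismatches(systems, inventory):
    """Flag systems whose derived impact level disagrees with the inventory of record."""
    return {
        name: (derived, inventory[name])
        for name, objectives in systems.items()
        for derived in [system_impact(*objectives)]
        if inventory.get(name) != derived
    }

# A system rated low for confidentiality but high for availability is a
# high-impact system, and control selection should reflect that.
systems = {"sys-a": ("low", "moderate", "high"), "sys-b": ("moderate", "moderate", "low")}
inventory = {"sys-a": "moderate", "sys-b": "moderate"}  # sys-a under-recorded

print(inventory_mismatches(systems, inventory))
```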
NARA’s Security Architecture Planning Methodology also outlines security responsibilities, including responsibilities for information system owners and information owners to carry out related to system security plans. This methodology in turn mandates the use of baseline controls identified by NIST in Special Publication 800-53. NARA prepared and documented security plans for the 10 systems and networks we reviewed. All system security plans that we reviewed, with the exception of NARANET’s wireless plan, identified management, technical, and operational controls, in accordance with NIST guidance and NARA policy. However, NARA did not always include required controls in its system security plans. For example, 7 of the 13 system security plans reviewed did not include a system component inventory or address where that inventory could be found. In addition, NARA has not updated its badge and access system security plan since 2003, despite replacing the system in 2007. NARA had scheduled to correct this weakness by the end of 2009, but as of September 2010 it had not been corrected. Further, NARA system security plans varied in documenting security roles and responsibilities for key individuals. Some plans were missing one or more assignments for these roles. Specifically, 6 of the 13 plans did not have the required information system owner role identified, and none of the plans reviewed had the information owner role identified or assigned. By not addressing inventory control and assigning key security responsibilities in the system security plan, NARA increases the risk that critical information may not be available to those responsible for implementing system security plans, potentially causing a misapplication of controls to the system. According to FISMA, an agencywide information security program must include security awareness training for agency personnel, contractors, and other users of information systems that support the agency’s operations and assets. 
This training must cover (1) information security risks associated with users’ activities and (2) users’ responsibilities in complying with agency policies and procedures designed to reduce these risks. FISMA also includes requirements for training personnel with significant responsibilities for information security. In addition, OMB requires that personnel be trained before they are granted access to systems or applications. The training is intended to ensure that personnel are aware of the system or application’s rules, their responsibilities, and their expected behavior. Further, NARA policy requires that managers and users of NARA information systems be made aware of the security risks associated with their activities and of the applicable laws, executive orders, directives, policies, standards, instructions, regulations, or procedures related to the security of NARA information systems. The policy also states that NARA must ensure that personnel are adequately trained to carry out their assigned information security-related duties and responsibilities. NARA has a security awareness training program in place and maintains records of this training in its Learning Management System. Users are required to complete a Web-based course and, after completion, acknowledge they have reviewed and understand their security responsibilities. According to NARA’s fiscal year 2009 FISMA report, the CIO reported that 100 percent of NARA’s employees had received security awareness training. NARA’s Inspector General concurred with this assessment. The CIO also reported that 50 employees had significant security responsibilities, and that all 50 had received specialized training. NARA’s Inspector General reported a higher number, stating that 114 employees had significant security responsibilities, and that 83 (73 percent) received specialized training.
However, records from NARA’s training system indicated that not all users had both completed the training and acknowledged that they reviewed and understood their security responsibilities in fiscal year 2009. According to NARA’s records, as of August 20, 2009, 563 of 4,536 individuals had completed only the class portion (12 percent) and 369 individuals (8 percent) had completed only the acknowledgment portion (although in many cases had at least started the class portion). Seven hundred and forty-nine individuals (17 percent) had not completed either portion (see fig. 2). According to NARA’s Chief Information Security Officer, limitations in the training tracking system led NARA to give credit for a user interacting with the system in some way, meaning that a user who had at least started the training course received credit for the security awareness training. In addition, records of specialized security training provided by NARA indicated that 115 individuals were required to take specialized security training; of these 115, 48 (42 percent) had no record of taking specialized training. NARA officials stated that these individuals were provided with an alternate form of training to ensure their compliance with FISMA, such as a one-on-one review or an opportunity to review briefing slides. Without an effective method for tracking that employees and contractors fully complete security awareness training, NARA has less assurance that staff are aware of the information security risks and responsibilities associated with their activities. In addition, without ensuring that all employees with specialized security responsibilities receive adequate specialized training, NARA’s ability to implement security measures effectively could be limited. A key element of an information security program is to test and evaluate policies, procedures, and controls to determine whether they are effective and operating as intended. 
This type of oversight is fundamental because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. FISMA requires that the frequency of tests and evaluations of management, operational, and technical controls be based on risks and occur no less than annually. OMB requires that systems be authorized for processing at least every 3 years. NARA’s policy for testing is consistent with FISMA and requires that certification testing be conducted in support of system authorizations or accreditations. NARA had conducted tests for each of the 10 systems we reviewed; however, it had not sufficiently tested controls for 2 systems. For example, the management and operational controls for 1 system were not tested at least annually. Although NARA tested technical controls and documented test results for that system, it did not test and document the results for the system’s management and operational controls. Another system had not been tested to support its accreditation since 2003. While an annual assessment was conducted in 2009 for that system, NARA’s 2007 security accreditation memorandum stated that certification testing had not been performed. As a result, NARA may have reduced assurance that controls over its information and information systems are adequately implemented and operating as intended. Remedial action plans, also known as plans of action and milestones (POA&M), help agencies identify and assess security weaknesses in information systems, set priorities, and monitor progress in correcting the weaknesses. NIST guidance states that each federal civilian agency must report all incidents and internally document remedial actions and their impact. POA&Ms should be updated to show progress made on current outstanding items and to incorporate the results of the continuous monitoring process. 
In addition, FISMA and NARA policy require the agency CIO to report annually to the agency head on the effectiveness of the agency information security program, including progress on remedial actions. NARA has implemented a remedial action process to assess and correct security weaknesses. The format for its system-level POA&Ms includes the types of information specified in NIST and OMB guidance, such as a description of the weakness, resources required to mitigate it, scheduled completion date, the review that identified the weakness, and the status of corrective actions (ongoing or completed). Although NARA has developed POA&Ms to address known weaknesses, the agency does not always update these plans or complete remedial actions in a timely manner. For example, a POA&M for a system designed to receive, preserve, and provide access to electronic records is dated December 2008. None of the remedial actions described in this plan were marked as completed as of April 2010. Additionally, 8 of 10 POA&Ms that we assessed contained blank entries or “to be determined” notations for some required information. These 8 did not provide all of the information for resources needed, scheduled completion dates, milestones, or the security review that identified the weakness. In addition, a POA&M maintained by the Office of Information Services did not include information about resources required to correct these weaknesses. This lack of information about resource requirements may inhibit the agency’s efforts to correct the security weaknesses. Outdated and incomplete POA&Ms compromise the ability of the CIO and other NARA officials to track, assess, and report accurately the status of the agency’s information security. Although strong controls may not block all intrusions and misuse, agencies can reduce the risks associated with such events if they take steps to detect and respond to them before significant damage occurs. 
Accounting for and analyzing security problems and incidents are also effective ways for an agency to improve its understanding of threats and the potential costs of security incidents, and doing so can pinpoint vulnerabilities that need to be addressed so that they are not exploited again. FISMA requires that each federal agency implement an information security program that includes procedures for detecting, reporting, and responding to security incidents. When incidents occur, agencies are to notify the federal information security incident center—the United States Computer Emergency Readiness Team (US-CERT). NARA has an incident response methodology and maintains an incident database with information about the categorization and analysis of incidents. However, NARA was not able to locate all of its weekly reports for incidents and did not consistently apply its criteria for incident categorization. According to the NARA incident response methodology, incidents involving the disclosures of personally identifiable information, even if the disclosure did not involve an IT system, should be categorized under “Investigation” (Category 6). While the records indicate that NARA reported these disclosures to US-CERT, NARA did not list them as Category 6. NARA also categorized many of its computer security incidents inconsistently. Of 640 total incidents, 139 were classified as “Explained Anomaly” (Category 7). According to the NARA incident response methodology, this category is usually reserved for false positives and other explained anomalies. However, NARA classified a number of incidents in this category, even when the incident was not a false positive or could have been placed into another category. For example, NARA experienced site-redirection events—where a user was unwittingly directed to a malicious Web site while trying to access a legitimate site. 
This is a form of social engineering, which is categorized in the NARA incident response methodology under a separate category (Category 5). In addition, incidents where encrypted laptops were stolen were included in the “Explained Anomaly” category, though the NARA incident response methodology indicates that they should have been placed in Category 1, which indicates that unauthorized access may have occurred. NARA policy requires that staff be assigned and trained for the incident response team. While NARA tracks information security incidents and their resolution, it has not formally tracked training held for incident response. NARA officials have stated that they are in the process of formalizing this training program. Without ensuring that incident response personnel have received appropriate training, NARA’s ability to implement security measures effectively could be limited. Further, without categorizing incidents appropriately, NARA’s ability to analyze incidents for follow-on actions could be diminished, and corrective actions for protecting agency resources may not be taken. Contingency planning is a critical component of information protection. If normal operations are interrupted, network managers must be able to detect, mitigate, and recover from service disruptions while preserving access to vital information. Therefore, a contingency plan details emergency response, backup operations, and disaster recovery for information systems. It is important that these plans be clearly documented, communicated to potentially affected staff, updated to reflect current operations, and regularly tested. Moreover, if contingency planning controls are inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can lead to financial losses, expensive recovery efforts, and inaccurate or incomplete information. 
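The incident categorization rules described earlier (personally identifiable information disclosures under Category 6, social engineering such as site redirection under Category 5, stolen equipment under Category 1, and genuine false positives under Category 7) can be applied consistently at intake from a rule table rather than left to case-by-case judgment. A sketch; the incident-type labels are invented, while the category numbers follow the report's description of NARA's incident response methodology.

```python
# Category mapping per the incident-response rules described in the report.
CATEGORY_RULES = {
    "pii_disclosure": 6,      # "Investigation", even for non-IT disclosures
    "social_engineering": 5,  # includes site-redirection events
    "stolen_equipment": 1,    # unauthorized access may have occurred
    "false_positive": 7,      # "Explained Anomaly"
}

def categorize(incident_type):
    """Assign a category from the rule table; unknown types are left for analyst review."""
    return CATEGORY_RULES.get(incident_type, "needs_review")

print(categorize("social_engineering"))
print(categorize("stolen_equipment"))
print(categorize("unrecognized_event"))
```

Routing unmatched types to review, rather than defaulting them to "Explained Anomaly," avoids the overuse of Category 7 the report describes.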
FISMA requires each agency to develop, document, and implement plans and procedures to ensure continuity of operations for information systems that support the agency’s operations and assets. Both NIST and NARA require that contingency plans be developed and tested for information systems. NARA developed contingency plans for 9 of the 10 systems we reviewed. Further, NARA had tested each of the contingency plans. However, a contingency plan was not developed for a system key to tracking physical records. NARA identified this weakness, but had not corrected it during the time of our review. Although all the systems in our review were tested for contingencies, NARA has less assurance that it can appropriately recover a key system in a timely manner from certain service disruptions. NARA has taken important steps in implementing controls to protect the information and systems that support its mission. However, significant weaknesses in access controls and other information security controls exist that impair its ability to ensure the confidentiality, integrity, and availability of the information and systems supporting its mission. The key reason for many of the weaknesses is that NARA has not yet fully implemented elements of its information security program to ensure that effective controls are established and maintained. Effective implementation of such a program includes establishing appropriate policies and procedures, providing security awareness training, responding to incidents, and ensuring continuity of operations. Ensuring that NARA implements key information security practices and controls also requires effective management oversight and monitoring. However, until NARA implements these controls, it will have limited assurance that its information and information systems are adequately protected against unauthorized access, disclosure, modification, or loss. 
To help establish an effective information security program for NARA's information and information systems, we recommend that the Archivist of the United States take the following 11 actions:
- Update NARA's system documentation and inventory to reflect accurate FIPS 199 categorizations.
- Conduct physical security risk assessments of NARA's buildings and facilities based on facility-level and federal requirements.
- Revise NARA's IT security methodologies, including those for access controls, certification and accreditation, and contingency planning, to include NIST's minimum system control requirements.
- Include inventory information and roles and responsibilities assignments in system security plans.
- Improve NARA's training process to ensure that all required personnel meet security awareness training requirements.
- Implement a process that ensures all required NARA personnel with significant security responsibilities meet specialized training requirements.
- Test management, operational, and technical controls for all systems at least annually.
- Conduct certification testing when authorizing systems to operate.
- Update remedial action plans in a timely manner and include the resources required for mitigating weaknesses, scheduled completion dates, milestones, and how weaknesses were identified.
- Improve the incident tracking process to ensure that incidents are appropriately categorized and that personnel responsible for tracking and reporting incidents are trained.
- Develop a contingency plan for the system that tracks physical records.
In a separate upcoming report with limited distribution, we plan to make 213 recommendations to enhance NARA's access controls to address the 142 weaknesses identified during this audit. In providing written comments on a draft of this report (reprinted in app. III), the Archivist of the United States stated that he was pleased with the positive recognition of NARA's efforts and that he generally concurred with our recommendations.
NARA also provided technical comments, which we have incorporated as appropriate. In addition, the Archivist in his comments disagreed with three of the report’s findings. First, he disagreed that NARA’s risk assessments in its systems inventory were incorrectly applied. However, our finding does not state that the risk assessments were incorrectly applied. Rather, as we discuss in the report, NARA system documentation and system inventories do not consistently reflect the FIPS 199 impact levels of its systems. These inconsistencies may reduce NARA’s assurance that adequate controls are in place to protect its information and information systems. Thus, we continue to believe our finding is appropriate. Secondly, the Archivist disagreed that NARA policies and procedures were not always consistent with NIST guidance. As we discuss in our report, NIST states that an agency must first determine the security category of its information systems and then apply the appropriately tailored set of baseline security controls. However, NARA’s policy prescribed controls based on the individual security objectives of confidentiality, integrity, and availability instead of applying controls based on a prior determination of the system’s impact. We believe that without first identifying the impact of the system, NARA’s policy may not provide the information needed to ensure that appropriate systems controls are selected that protect its information systems. Thus, we continue to believe our finding is valid. Lastly, the Archivist disagreed that the information owner role must be identified in each system security plan. However, NARA’s policy as discussed in the report outlines key individual roles and responsibilities, including the information owner, which should be assigned for each system. By not clearly and consistently assigning these roles, NARA increases the risk that critical information may not be available to those responsible for implementing system security plans. 
Thus, we continue to believe our finding is valid. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees and to the Archivist of the United States. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499. We can also be reached by e-mail at [email protected] or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The objective of our review was to determine whether the National Archives and Records Administration (NARA) has effectively implemented appropriate information security controls to protect the confidentiality, integrity, and availability of the information and systems that support its mission. To determine the effectiveness of security controls, we gained an understanding of the overall network control environment, identified interconnectivity and control points, and examined controls for NARA’s networks and facilities. 
Using our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; National Security Agency guidance; National Institute of Standards and Technology (NIST) standards and guidance; and NARA's policies, procedures, practices, and standards, we evaluated these controls by
- reviewing network access paths to determine if boundaries were adequately protected;
- reviewing the complexity and expiration of password settings to determine if password management was enforced;
- analyzing users' system authorizations to determine whether they had more permission than necessary to perform their assigned functions;
- observing methods for providing secure data transmissions across the network to determine whether sensitive data were being encrypted;
- reviewing software security settings to determine if modifications of sensitive or critical system resources were monitored and logged;
- observing physical access controls over unclassified and classified areas to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft;
- examining configuration settings and access controls for routers, network management servers, switches, and firewalls;
- inspecting key servers and workstations to determine if critical patches had been installed and were up to date;
- reviewing media handling policy, procedures, and equipment to determine if sensitive data were cleared from digital media before media were disposed of or reused;
- reviewing nondisclosure agreements at select locations to determine if they were required for personnel with access to sensitive information; and
- examining access roles and responsibilities to determine whether incompatible functions were segregated among different individuals.
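As an illustration of the kind of automated check a password-settings review might rely on, the sketch below tests complexity and expiration against an example policy. The thresholds used (minimum 12 characters, 90-day maximum age) are invented for illustration and are not NARA's actual settings.

```python
import re
from datetime import date

# Illustrative password-policy checks, similar in spirit to the audit
# steps above. Thresholds are assumptions, not NARA's actual settings.

def password_is_complex(pw, min_length=12):
    """Require minimum length plus upper, lower, digit, and special chars."""
    classes = [r"[A-Z]", r"[a-z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return len(pw) >= min_length and all(re.search(c, pw) for c in classes)

def password_expired(last_changed, max_age_days=90, today=None):
    """Flag passwords older than the maximum allowed age."""
    today = today or date.today()
    return (today - last_changed).days > max_age_days

print(password_is_complex("Tr0ub4dor&3x"))  # prints True
print(password_is_complex("password"))      # prints False
print(password_expired(date(2010, 1, 1), today=date(2010, 6, 1)))  # prints True
```

Auditors typically run such checks against exported account settings rather than live systems, so the inputs here are plain values rather than directory queries.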
Using the requirements identified by the Federal Information Security Management Act of 2002 (FISMA), which establishes key elements of an agencywide information security program, and associated NIST guidelines and NARA requirements, we evaluated the effectiveness of NARA's implementation of its security program by
- reviewing NARA's risk assessment process and risk assessments for 10 systems to determine whether risks and threats were documented consistent with federal guidance;
- analyzing NARA policies, procedures, practices, and standards to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems;
- analyzing security plans for 10 out of 43 systems to determine if management, operational, and technical controls were in place or planned and whether security plans reflected the current environment;
- examining the security awareness training process for employees and contractors to determine if they received training prior to system access;
- examining training records for personnel with significant responsibilities to determine if they received training commensurate with those responsibilities;
- analyzing NARA's procedures and results for testing and evaluating security controls to determine whether management, operational, and technical controls were sufficiently tested at least annually and based on risk;
- evaluating NARA's process to correct weaknesses and determining whether remedial action plans complied with federal guidance;
- reviewing incident detection and handling policies, procedures, and reports to determine the effectiveness of the incident handling program;
- examining contingency plans for 10 systems to determine whether those plans were developed and tested; and
- reviewing three IT contracts to determine if security requirements were included.
We also discussed with key security representatives and management officials whether information security controls were in place, adequately designed, and operating effectively. To establish the reliability of NARA's computer-processed data, we evaluated the materiality of the data to our audit objectives and then assessed the data by reviewing related documents, interviewing knowledgeable agency officials, and reviewing internal controls. Through this combination of methods, we concluded that the data were sufficiently reliable for the purposes of our work. We conducted this performance audit from December 2009 to October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the individuals named above, Edward Alexander, Lon Chin, West Coile, Anjalique Lawrence, and Chris Warweg (Assistant Directors); Gary Austin; Angela Bell; Larry Crosland; Saar Dagani; Kirk Daubenspeck; Denise Fitzpatrick; Fatima Jahan; Mary Marshall; Sean Mays; Lee McCracken; Jason Porter; Michael Redfern; Richard Solaski; and Jayne Wilson made key contributions to this report.

The National Archives and Records Administration (NARA) is responsible for preserving access to government documents and other records of historical significance and overseeing records management throughout the federal government. NARA relies on information systems to receive, process, store, and track government records. As such, NARA is tasked with preserving and maintaining access to increasing volumes of electronic records.
GAO was asked to determine whether NARA has effectively implemented appropriate information security controls to protect the confidentiality, integrity, and availability of the information and systems that support its mission. To do this, GAO tested security controls over NARA's key networks and systems; reviewed policies, plans, and reports; and interviewed officials at nine sites. NARA has not effectively implemented information security controls to sufficiently protect the confidentiality, integrity, and availability of the information and systems that support its mission. Although it has developed a policy for granting or denying access rights to its resources, employed mechanisms to prevent and respond to security breaches, and made use of encryption technologies to protect sensitive data, significant weaknesses pervade its systems. NARA did not fully implement access controls, which are designed to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Specifically, the agency did not always (1) protect the boundaries of its networks by, for example, ensuring that all incoming traffic was inspected by a firewall; (2) enforce strong policies for identifying and authenticating users by, for example, requiring the use of complex (i.e., not easily guessed) passwords; (3) limit users' access to systems to what was required for them to perform their official duties; (4) ensure that sensitive information, such as passwords for system administration, was encrypted so as not to be easily readable by potentially malicious individuals; (5) keep logs of network activity or monitor all parts of its networks for possible security incidents; and (6) implement physical controls on access to its systems and information, such as securing perimeter and exterior doors and controlling visitor access to computing facilities. 
In addition to weaknesses in access controls, NARA had mixed results in implementing other security controls. For example: (1) NARA did not always ensure equipment used for sanitization (i.e., wiping clean of data) and disposal of media (e.g., hard drives) was tested to verify correct performance. (2) NARA conducted appropriate background investigations for employees and contractors to ensure sufficient clearance requirements have been met before permitting access to information and information systems. (3) NARA did not consistently segregate duties among various personnel to ensure that no one person or group can independently control all key aspects of a process or operation. The identified weaknesses can be attributed to NARA not fully implementing key elements of its information security program. Specifically, the agency did not adequately assess risks facing its systems, consistently prepare and document security plans for its information systems, effectively ensure that all personnel were given relevant security training, effectively test systems' security controls, consistently track security incidents, and develop contingency plans for all its systems. Collectively, these weaknesses could place sensitive information, such as records containing personally identifiable information, at increased and unnecessary risk of unauthorized access, disclosure, modification, or loss. GAO is making 11 recommendations to the Archivist of the United States to implement elements of NARA's information security program. In commenting on a draft of this report, the Archivist generally concurred with GAO's recommendations but disagreed with some of the report's findings. GAO continues to believe that the findings are valid. |
We believe that the United States generally achieved its negotiating objectives in the Uruguay Round, and most studies we reviewed projected net economic gains to the United States and the world economy. The General Agreement on Tariffs and Trade (GATT) Uruguay Round agreements are the most comprehensive multilateral trade agreements in history. For example, signatories (1) agreed to open markets by reducing tariff and nontariff barriers; (2) strengthened multilateral disciplines on unfair trade practices, specifically rules concerning government subsidies and “dumping;” (3) established disciplines to cover new areas such as intellectual property rights and trade in services; (4) expanded coverage of GATT rules and procedures over areas such as agriculture and textiles and clothing; and (5) created WTO, which replaced the preexisting GATT organizational structure and strengthened dispute settlement procedures. Despite expectations for overall economic gains, we noted in recent reports that specific industry organizations and domestic interest groups had concerns that the agreement would adversely affect some U.S. interests. For example, some believe that they did not gain adequate access to overseas markets or that they would lose protection provided by U.S. trade laws. In addition, because some sectors of the U.S. economy—notably textiles and apparel—and their workers will likely bear the costs of economic adjustment, the existing patchwork of reemployment assistance programs aimed at dislocated workers needs to be improved. Our work has indicated that it was difficult to predict outcomes and not all the effects of such a wide-ranging agreement will become apparent in the near term; important issues will evolve over a period of years during GATT implementation. We have identified provisions to be monitored to assure that commitments are fulfilled and the expected benefits of the agreements are realized. 
Moreover, our work on the GATT Tokyo Round agreements, negotiated in the 1970s, and numerous bilateral agreements has demonstrated that trade agreements are not always fully implemented. Implementation of the Uruguay Round agreements, which generally began to go into force on January 1, 1995, is complex, and it will take years before the results can be assessed. Nevertheless, our work highlights the following issues: (1) the WTO’s organizational structure and the secretariat’s budget have grown from 1994 to 1996 to coincide with new duties and responsibilities approved by the member countries; (2) faced with over 200 requirements, many member nations have not yet provided some of the notifications of laws or other information as called for in the agreements; (3) this year provides the first opportunity to review whether anticipated U.S. gains in agriculture will materialize, as countries begin to report on meeting their initial commitments; (4) the new agreements require that food safety measures be based on sound science, but U.S. agricultural exporters seem to be encountering more problems with other countries’ measures and a number of formal disputes have already been filed with WTO; (5) while efforts are underway to improve transparency provisions regarding state trading, these provisions alone may not be effective when applied to state-dominated economies, like China and Russia, seeking to join WTO; (6) while textile and apparel quotas will be phased out over 10 years, the United States has continued to use its authority to impose quotas during the phase-out period and will not lift most apparel quotas until 2005; (7) despite the end of the Uruguay Round, some areas, like services, are still subject to ongoing negotiations; (8) there were 25 disputes brought before WTO in 1995 by various countries, including some involving the United States. The United States lost the first dispute settlement case regarding U.S.
gasoline regulations brought by Brazil and Venezuela and is now appealing that decision. The WTO was established to provide a common institutional framework for multilateral trade agreements. Some observers have been concerned about the creation of this new international organization and its scope and size. The “new” WTO was based on a similar “provisional” GATT organizational structure that had evolved over decades. The Uruguay Round agreements created some new bodies; however, these new bodies address new areas of coverage, for example, the Councils for Trade in Services and for Trade-Related Aspects of Intellectual Property Rights. Other bodies, such as the WTO Committee on Anti-Dumping Practices, were “reconstituted” from those that already existed under the old GATT framework but that were given new responsibilities by the Uruguay Round agreements and had broader membership. The WTO secretariat, headed by its Director General, facilitates the work of the members. The work of the bodies organized under the WTO structure is still undertaken by representatives of the approximately 119 member governments, rather than the secretariat. Early meetings of some WTO committees were focused on establishing new working procedures and work agendas necessary to implement the Uruguay Round agreements. In 1995, the WTO secretariat staff was composed of 445 permanent staff with a budget of about $83 million. This represented an 18-percent staff increase and about a 7-percent increase in the budget (correcting for inflation) from 1994, when the Uruguay Round agreements were signed. The members establish annual budgets and staff levels. The approved secretariat’s 1996 budget represents a 10-percent rise over the 1995 level to further support the organization’s wider scope and new responsibilities; it also includes an additional 15-percent increase in permanent staff.
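The staffing and budget percentages above imply some approximate absolute figures, worked out below. All derived numbers are back-of-the-envelope approximations, and applying the 1996 percentage increases directly to the 1995 figures is an assumption.

```python
# Back-of-the-envelope figures implied by the staffing and budget
# percentages in the text. All derived numbers are approximations.
staff_1995 = 445
budget_1995 = 83_000_000  # about $83 million

staff_1994 = staff_1995 / 1.18    # 1995 was an 18% increase over 1994
staff_1996 = staff_1995 * 1.15    # 1996 adds a further 15%
budget_1996 = budget_1995 * 1.10  # 1996 budget is 10% over 1995

print(round(staff_1994))          # prints 377 (approx. 1994 staff)
print(round(staff_1996))          # prints 512 (approx. 1996 staff)
print(round(budget_1996 / 1e6))   # prints 91 (approx. 1996 budget, $M)
```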
WTO officials in Geneva have told us that any additional increases in secretariat staffing are unlikely to be approved by the members in the foreseeable future. The secretariat’s duties include helping members organize meetings, gathering and disseminating information, and providing technical support to developing countries. Economists, statisticians, and legal staff provide analyses and advice to members. In the course of doing work over the last year, member government and secretariat officials told us it was important that the secretariat continue to not have a decision-making or enforcement role. These roles were reserved for the members (collectively). An important, but laborious, aspect of implementing the Uruguay Round agreements centers on the many notification requirements placed upon member governments. These notifications are aimed at increasing transparency about members’ actions and laws and therefore encourage accountability. Notifications take many forms. For example, one provision requires countries to file copies of their national legislation and regulations pertaining to antidumping measures. The information provided allows members to monitor each others’ activities and, therefore, to enforce the terms of the agreements. In 1995, some WTO committees began reviewing the notifications they received from member governments. The WTO Director General has noted some difficulties with members’ fulfilling their notification requirements. Some foreign government and WTO secretariat officials told us in 1995 that the notification requirements were placing a burden on them and that they had not foreseen the magnitude of information they would be obligated to provide. The Director General’s 1995 annual report estimated that the Uruguay Round agreements specified over 200 notification requirements. It also noted that many members were having problems understanding and fulfilling the requirements within the deadlines. 
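Tracking compliance with more than 200 notification requirements is essentially a deadline-monitoring problem, sketched below. The member names, notification types, and dates are invented examples, not actual WTO filings.

```python
from datetime import date

# Illustrative sketch of tracking the kind of notification deadlines the
# agreements impose on members. All entries are invented examples.
requirements = [
    {"member": "A", "notification": "antidumping laws",
     "due": date(1995, 3, 15), "filed": date(1995, 3, 1)},
    {"member": "B", "notification": "subsidies",
     "due": date(1995, 6, 30), "filed": None},
    {"member": "C", "notification": "customs valuation laws",
     "due": date(1995, 6, 30), "filed": date(1995, 9, 1)},
]

def overdue(reqs, as_of):
    """Return requirements unfiled past their deadline, or filed late."""
    return [r for r in reqs
            if (r["filed"] is None and r["due"] < as_of)
            or (r["filed"] is not None and r["filed"] > r["due"])]

for r in overdue(requirements, as_of=date(1995, 12, 31)):
    print(r["member"], r["notification"])  # prints B and C entries
```

A consolidated register of this kind is in effect what the 1995 working party sought: one place where obligations, deadlines, and filings could be compared.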
While the report said that the developing countries faced particular problems, even the United States has missed some deadlines on filing information on subsidies and customs valuation laws. To address concerns about notifications, WTO members formed a working party in February 1995 to simplify, standardize, and consolidate the many notification obligations and procedures. One area of great economic importance to the United States during the Uruguay Round negotiations was agriculture; therefore, monitoring other countries’ implementation of their commitments is essential to securing U.S. gains. Agricultural trade had traditionally received special treatment under GATT. For example, member countries were allowed to maintain certain measures in support of agricultural trade that were not permitted for trade in manufactured goods. As a result, government support and protection distorted international agricultural trade and became increasingly expensive for taxpayers and consumers. The United States sought a fair and more market-oriented agricultural trading system, to be achieved through better rules and disciplines on government policies regarding agriculture. The United States sought disciplines in four major areas—market access, export subsidies, internal support, and food safety measures—and was largely successful, as the Agreement on Agriculture and the Agreement on the Application of Sanitary and Phytosanitary (SPS) Measures together contain disciplines in all of these areas. Member countries are required to report to the new WTO Committee on Agriculture on their progress in implementing commitments on market access, export subsidies, and internal support. The agriculture agreement will be implemented over a 6-year period, and commitments are to be achieved gradually. After each year, countries are required to submit data to demonstrate how they are meeting their various commitments. 
The agreement allows countries to designate their own starting point for implementation during 1995, depending on domestic policies. In this regard, the U.S. period began on January 1, 1995, while the European Union (EU) period began on July 1, 1995. Therefore, in some cases, the first opportunity to closely review the extent to which other countries are meeting their agricultural commitments—and, thereby, whether anticipated U.S. gains are materializing—should occur later this year. At the outset of the Uruguay Round, the United States recognized that multilateral efforts to reduce traditional methods of protection and support for agriculture, such as quotas, tariffs, and subsidies, could be undermined if the use of food safety measures governing imports remained undisciplined. To prevent food safety measures from being used unjustifiably as nontariff trade barriers, the United States wanted countries to agree that these measures should be based on sound science. The SPS agreement recognizes that countries have a right to adopt measures to protect human, animal, and plant life or health. However, it requires, among other things, that such measures be based on scientific principles, incorporate assessment of risk, and not act as disguised trade restrictions. Carefully monitoring how countries implement the SPS agreement is essential to securing U.S. gains in agriculture. Since the end of the round, U.S. agricultural exporters seem to be encountering growing numbers of SPS-related problems. For example, South Korean practices for determining product shelf-life adversely affected U.S. meat exports and were the subject of recent consultations. As a result, Korea agreed to modify its practices. Meanwhile, the United States and Canada have both filed several other disputes that allege violations of the SPS agreement. Key implementation and monitoring issues regarding the SPS agreement include examining (1) other countries’ SPS measures that affect U.S. 
agricultural exports; (2) how the SPS agreement is being implemented; (3) whether its provisions will help U.S. exporters overcome unjustified SPS measures; and (4) how the administration is responding to problems U.S. exporters face. We have ongoing work addressing all of these issues. Another issue that is currently important for agricultural trade but may have great future importance beyond agriculture is the role of state trading enterprises within WTO member countries. State trading enterprises (STE) are generally considered to be governmental or nongovernmental enterprises that are authorized to engage in trade and are owned, sanctioned, or otherwise supported by the government. They may engage in a variety of activities, including importing and exporting, and they exist in several agricultural commodity sectors, including wheat, dairy, meat, oilseeds, sugar, tobacco, and fruits. GATT accepts STEs as legitimate participants in trade but recognizes they can be operated so as to create serious obstacles to trade, especially those with a monopoly on imports or exports. Therefore, STEs are generally subject to GATT disciplines, including provisions that specifically address STE activities and WTO member country obligations. For example, member countries must indicate whether they have STEs, and if so, they must report regularly about their STEs’ structure and activities. The goal of this reporting requirement is to provide transparency over STE activities in order to understand how they operate and what effect they may have on trade. However, as we reported in August 1995, compliance with this reporting requirement was poor from 1980 to 1994, and information about STE activities was limited. Although state trading was not a major issue during the Uruguay Round, the United States proposed clarifying the application of all GATT disciplines to STEs and increasing the transparency of state trading practices. Progress was made in meeting U.S. 
objectives, as the Uruguay Round (1) enhanced GATT rules governing STEs, (2) addressed procedural weaknesses for collecting information, and (3) established a working party to review the type of information members report. Within this working party, the United States is suggesting ways to make STE activities even more transparent. It is too early to assess whether the changes made will improve compliance with the STE reporting requirements. By mid-February, only 34 WTO members had met the requirement—or roughly 29 percent of all members. Still, this response rate is higher than during the earlier years we reviewed. We continue to examine this important issue and are presently reviewing the operations of select STEs. Looking toward the future, officials from the United States and other countries told us in 1995 they were concerned about the sufficiency of GATT rules regarding STEs because countries like China and Russia, where the state has a significant economic role, are interested in joining WTO. Some country officials observed that current rules focus on providing transparency, but such provisions alone may not provide effective disciplines. U.S. officials said that the subject of state trading has been prominent during China’s WTO accession talks as WTO members attempt to understand the government’s economic role and its ability to control trade. Textiles is one sector where the United States expected losses in jobs and in domestic market share after the Uruguay Round, even though consumers were expected to gain from lower prices and a greater selection of goods. We are currently reviewing how the United States is implementing the Uruguay Round Agreement on Textiles and Clothing, which took effect in January 1995. The Committee for the Implementation of Textile Agreements (CITA), an interagency committee, is charged with implementing the agreement, which calls for a 10-year phase-out of textile quotas. 
Because of the 10-year phase-out, the effects of the textiles agreement will not be fully realized until 2005, after which textile and apparel trade will be fully integrated into WTO and its disciplines. This integration is to be accomplished by (1) completely eliminating quotas on selected products in four stages and (2) increasing quota growth rates on the remaining products at each of the first three stages. By 2005, all bilateral quotas maintained under the agreement on all WTO member countries are to be removed. The agreement gives countries discretion in selecting which products to remove from quotas at each stage. During the first stage (1995 through 1997), almost no products under quota were integrated into normal WTO rules by the major importing countries. The United States is the only major importing country to have published an integration list for all three stages; other countries, such as the EU and Canada, have only published their integration plan for the first phase. Under the U.S. integration schedule, 89 percent of all U.S. apparel products under quota in 1990 will not be integrated into normal WTO rules until 2005. CITA officials pointed out that the Statement of Administrative Action accompanying the U.S. bill to implement the Uruguay Round agreements provided that “integration of the most sensitive products will be deferred until the end of the 10-year period.” During the phase-out period, the textiles agreement permits a country to impose a quota only when it determines that imports of a particular textile or apparel product are harming, or threatening to harm, its domestic industry. The agreement further provides that the imposition of quotas will be reviewed by a newly created Textiles Monitoring Body consisting of representatives from 10 countries, including the United States. The United States is the only WTO member country thus far to impose a new quota under the agreement’s safeguard procedures. 
In 1995, the United States requested consultations with other countries to impose quotas on 28 different imports that CITA found were harming domestic industry. The Textiles Monitoring Body has reviewed nine of the U.S. determinations to impose quotas (where no agreement was reached with the exporting country) and agreed with the U.S. determination in one case. In three cases, it did not agree with the U.S. decision, and the United States dropped the quotas. It could not reach consensus in the other five cases it reviewed. In 15 of the remaining 19 decisions, the United States either reached agreement with the exporting countries or dropped the quotas. Four cases are still outstanding. Another area that warrants tracking by policymakers is the General Agreement on Trade in Services (GATS), an important new framework agreement resulting from the Uruguay Round. Negotiations on financial, telecommunications, and maritime service sectors and movement of natural persons were unfinished at the end of the round and thus postponed. Each negotiation was scheduled to be independent from the other ongoing negotiations, but we found that they do in fact affect one another. In 1995, we completed a preliminary review of the WTO financial services agreement, which was an unfinished area in services that reached a conclusion. The agreement covers the banking, securities, and insurance sectors, which are often subject to significant domestic regulation and therefore create complex negotiations. In June 1995, the United States made WTO commitments to not discriminate against foreign firms already providing financial services domestically. However, the United States took a “most-favored-nation exemption,” that is, held back guaranteeing complete market access and national treatment to foreign financial service providers. (Doing so is allowed under the GATS agreement.) Specifically, the U.S. 
commitment did not include guarantees about the future for new foreign firms or already established firms wishing to expand services in the U.S. market. Despite consistent U.S. warnings, the decision to take the exemption surprised many other countries and made them concerned about the overall U.S. commitment to WTO. The U.S. exemption in financial services was taken because U.S. negotiators, in consultation with the private sector, concluded that other countries’ offers to open their markets to U.S. financial services firms, especially those of certain developing countries, were insufficient to justify broader U.S. commitments (with no most-favored-nation exemption). The effect of the U.S. exemption may go beyond the financial services negotiations. According to various officials in Geneva, foreign governments are wary of making their best offers in the telecommunications service negotiations, for fear that the United States would again take a significant exemption in these talks. Nevertheless, three-quarters of the participating countries have made offers, and the telecommunications talks are continuing toward the April 30 deadline. However, U.S. and foreign government officials have expressed concern regarding the quality of offers made and the fact that some key developing countries have not yet submitted offers. Despite the commitments that all parties made regarding market access and equal treatment in the financial services sector, several U.S. private sector officials told us that the agreement itself did little to create greater access to foreign markets. Still, the benefit from such an agreement results from governments making binding commitments (enforceable through the dispute settlement process) that reduce uncertainty for business. Monitoring foreign government implementation of commitments is important to ensure that the United States will receive the expected benefits. 
At the end of 1997, countries, including the United States, will have an opportunity to modify or withdraw their commitments. Thus, the final outcome and impact of the financial services agreement are still uncertain. According to the WTO Dispute Settlement Understanding, the dispute settlement regime is important because it is a central element in providing security and predictability to the multilateral trading system. Members can seek the redress of a violation of obligations or other nullification or impairment of benefits under the WTO agreements through the dispute settlement regime. The objective of this mechanism is to secure a “positive solution” to a dispute. This may be accomplished through bilateral consultations even before a panel is formed to examine the dispute. The vast majority of international trade transactions have not been the subject of a WTO dispute. According to recent WTO figures, in 1994 the total value of world merchandise exports was $4 trillion and that of commercial service exports was $1 trillion. WTO reports that its membership covers about 90 percent of world trade. However, 25 disputes have been brought before WTO between January 1, 1995, and January 16, 1996. As we previously reported, the former GATT dispute settlement regime was considered cumbersome and time-consuming. Under the old regime, GATT member countries delayed dispute settlement procedures for months and, sometimes, years. In 1985, we testified that the continued existence of unresolved disputes challenged not only the principles of GATT but the value of the system itself. We further stated that the member countries’ lack of faith in the effectiveness of the old GATT dispute settlement mechanism resulted in unilateral actions and bilateral understandings that weakened the multilateral trading system. The United States negotiated for a strengthened dispute settlement regime during the Uruguay Round.
In particular, the United States sought time limits for each step in the dispute settlement process and elimination of the ability to block the adoption of dispute settlement panel reports. The new Dispute Settlement Understanding establishes time limits for each of the four stages of a dispute: consultation, panel, appeal, and implementation. Also, unless there is unanimous opposition in the WTO Dispute Settlement Body, the panel or appellate report is adopted. Further, the recommendations and rulings of the Dispute Settlement Body cannot add to or diminish the rights and obligations provided in the WTO agreements. Nor can they directly force countries to change their laws or regulations. However, if countries choose not to implement the recommendations and rulings, the Dispute Settlement Body may authorize trade retaliation. As previously mentioned, there have been a total of 25 WTO disputes. Of these, the United States was the complainant in six and the respondent in four. In comparison, Japan was a respondent in four disputes and the EU in eight. All the disputes have involved merchandise trade. The Agreements on Technical Barriers to Trade and the Application of Sanitary and Phytosanitary Measures have been the subject of approximately half the disputes. In January 1996, the first panel report under the new WTO dispute settlement regime was issued on the “Regulation on Fuels and Fuels Additives - Standards for Reformulated and Conventional Gasoline.” Venezuela and Brazil brought this dispute against the United States. The panel report concluded that the Environmental Protection Agency’s regulation was inconsistent with GATT. The United States has appealed this decision. Based on our previous work on dispute settlement under the U.S.-Canadian Free Trade Agreement (CFTA), it may be difficult to evaluate objectively the results of a dispute settlement process. 
It may take years before a sufficiently large body of cases exists to make any statistical observations about the process. After nearly 5 years of trade remedy dispute settlement cases under CFTA, there were not enough completed cases for us to make statistical observations with great confidence. Specifically, we were not able to come to conclusions about the effect of panelists’ backgrounds, types of U.S. agency decisions appealed, and patterns of panel decisionmaking. WTO members must wrestle with three competing but interrelated endeavors in the coming years. Implementation, accession of new member countries, and bringing new issues to the table will all compete for attention and resources. The first effort, which we have already discussed, involves implementing the Uruguay Round agreements. It will take time and resources to (1) completely build the WTO organization so that members can address all its new roles and responsibilities; (2) make members’ national laws, regulations, and policies consistent with new commitments; (3) fulfill notification requirements and then analyze the new information; and (4) resolve differences about the meaning of the agreements and judge whether countries have fulfilled their commitments. The importance of implementation was underscored by U.S. Trade Representative and Department of Commerce announcements earlier this year that they were both creating specific units to look at foreign government compliance with trade agreements, including WTO. The second effort is the accession of new countries to join WTO and to undertake GATT obligations for the first time. The accession of new members will present significant economic and political challenges over the next few years. Even though, as mentioned earlier, WTO members account for about 90 percent of world trade, there are many important countries still outside the GATT structure.
The 28 countries that applied for WTO membership as of December 1995 included China, the Russian Federation, Viet Nam, and countries in Eastern Europe. These countries will be challenged in undertaking WTO obligations and fulfilling WTO commitments as current WTO members are themselves challenged by the additional responsibilities created by the Uruguay Round agreements. Many of these countries are undergoing a transition from centrally planned to market economies. The negotiations between current WTO members and those hoping to join are very complex and sensitive since they involve such fundamental issues as political philosophy. The third effort is negotiating new areas. In December 1996, a WTO ministerial meeting is to take place in Singapore. This is to be a forum for reviewing implementation of the Uruguay Round agreements and for negotiating new issues. Some foreign government and WTO officials told us that they hope these regularly scheduled, more focused WTO ministerial meetings will replace the series of multiyear, exhaustive negotiating “rounds” of the past. However, other officials expressed doubt that much progress could be made toward future trade liberalization without the pressure created by having a number of important issues being negotiated at one time. Nevertheless, any negotiations will require time and resources. Members are debating whether to (1) push further liberalization in areas already agreed to, but not yet fully implemented; and/or (2) negotiate new issues related to international trade. For example, future WTO work could include examination of national investment and competition policy, labor standards, immigration, and corruption and bribery. Some of these negotiations in new areas could be quite controversial, based on the experience of including areas like agriculture and services in the Uruguay Round negotiating agenda. Issues relating to the Singapore ministerial are currently under debate. 
This could be an opportunity for Congress to weigh the benefit of having U.S. negotiators give priority to full implementation of Uruguay Round commitments, as opposed to giving priority to advocating new talks on new topics. The first priority seeks to consolidate accomplishments and ensure that U.S. interests are secured; the latter priority seeks to use the momentum of the Uruguay Round for further liberalizations. Thank you, Mr. Chairman, this concludes my prepared remarks. I will be happy to answer any questions you or the Subcommittee may have. | GAO discussed the implementation of the General Agreement on Tariffs and Trade's Uruguay Round agreements and the operation of the World Trade Organization (WTO). GAO noted that: (1) the U.S. has generally achieved its negotiating objectives in the Uruguay Round; (2) the agreements are expected to open markets by reducing trade barriers and unfair trade practices; (3) some U.S.
industries and domestic interests are concerned that the agreements will have adverse effects; (4) implementation of the agreements is complex and its effects will not be known for many years; (5) the United States needs to monitor the agreements' implementation to ensure that member countries honor their commitments and the expected benefits are realized; (6) the WTO organizational structure and the secretariat's budget have grown in relation to its expanded responsibilities; (7) several import and export issues involving the service, textile, and agriculture industries continue to be disputed and are awaiting settlement; (8) many member countries have not met their notification requirements so that other member countries can monitor and enforce agreement terms; and (9) WTO members need to address how to allocate its resources, how to assimilate new countries into WTO, and whether to pursue liberalization in areas already agreed upon or initiate negotiations on new topics. |
While there is substantial variation among grant types, competitively awarded federal grants generally follow a life cycle comprising various stages—pre-award (announcement and application), award, implementation, and closeout—as seen in figure 1. Once a grant program is established through legislation, which may specify particular objectives, eligibility, and other requirements, a grant-making agency may impose additional requirements on recipients. For competitive grant programs, the public is notified of the grant opportunity through an announcement, and potential recipients must submit applications for agency review. In the award stage, the agency identifies successful applicants or legislatively defined grant recipients and awards funding. The implementation stage includes payment processing, agency monitoring, and recipient reporting, which may include financial and performance information. The closeout phase includes preparation of final reports, financial reconciliation, and any required accounting for property. Audits may occur multiple times during the life cycle of the grant and after closeout. Federal agencies do not have inherent authority to enter into grant agreements without affirmative legislative authorization. In authorizing grant programs, federal laws identify the types of activities that can be funded and the purposes to be accomplished through the funding. Legislation establishing a grant program frequently will define the program objectives and leave the administering agency to fill in the details by regulation. Adding to the complexity of grants management, grant programs are typically subject to a wide range of accountability requirements (under their authorizing legislation or appropriation) and implementing regulations, which are intended to ensure that funding is spent for its intended purpose. Congress may also impose increased reporting and oversight requirements on grant-making agencies and recipients. 
In addition, grant programs are subject to crosscutting requirements applicable to most assistance programs. OMB is responsible for developing government-wide policies to ensure that grants are managed properly and that grant funds are spent in accordance with applicable laws and regulations. For decades, OMB has published guidance in various circulars to aid grant-making agencies with such subjects as audit and record keeping and the allowability of costs. In the past 14 years, since the passage of P.L. 106-107, there has been a series of legislative- and executive-sponsored initiatives aimed at simplifying aspects of the grants management life cycle; minimizing the administrative burden for grantees, particularly those that obtain grants from multiple federal agencies; and ensuring accountability by improving the transparency of the federal grants life cycle. See figure 2 for more information.
Governance initiatives: intended to make changes that affect policy and oversight.
Process initiatives: intended to simplify aspects of the grants life cycle.
Transparency initiatives: intended to increase the transparency of information detailing federal awards and expenditures.
For a text version of this graphic, see appendix II. Since the passage of P.L. 106-107, OMB and other entities involved with federal grants management have overseen several ongoing initiatives intended to address the challenges grantees encounter throughout the grants life cycle. These initiatives include consolidating and revising grants management circulars, simplifying the pre-award phase, promoting shared IT solutions for grants management, and improving the timeliness of grant closeout and reducing undisbursed balances. However, management and coordination challenges could hinder the progress of some of these initiatives. As part of the effort to implement P.L.
106-107, OMB began an effort in 2003 to (1) consolidate its government-wide grants guidance, which was located in seven separate OMB circulars and policy documents, into a single title in the Code of Federal Regulations, and (2) establish a centralized location for grant-making agencies to publish their government-wide grant regulations. The purpose of this effort was to make it easier for grantees to find and use the information in the OMB circulars and agencies’ grant regulations by creating a central point for all grantees to locate all government-wide grants requirements. As of March 2013, OMB has completed revisions on guidance related to two areas—suspension and debarment and drug-free workplace. All grant-making agencies have relocated their suspension and debarment regulations to one title of the Code of Federal Regulations and some have relocated the drug-free workplace regulations. OMB has also been consulting with stakeholders to evaluate potential reforms in federal grant policies contained in the multiple grant circulars. As a first step, in February 2012, OMB published an advanced notice of proposed guidance detailing a series of reform ideas that would standardize information collection across agencies, adopt a risk-based model for single audits (annual audits required of nonfederal entities that expend more than $500,000 in federal awards annually), and provide new administrative approaches for determining and monitoring the allocation of federal funds. After receiving more than 350 public comments on the advanced notice of proposed guidance, OMB published its circular reform proposal in February 2013 and plans to implement the reforms by December 2013.
OMB officials believe that once implemented, these reforms have the potential to make grant programs more efficient and effective by eliminating unnecessary and duplicative requirements and strengthening the oversight of grant dollars by focusing on areas such as eligibility, monitoring of subrecipients, and adequate reporting. Launched in 2003, Grants.gov is a website the public can use to search and apply for federal grant opportunities. Officials we spoke to from associations representing state and local governments, universities, and nonprofits praised Grants.gov. Many noted that it simplified the pre-award stage by making it easier for applicants to search for and identify federal grant funding opportunities. Specifically, one organization said the site does an excellent job categorizing grants by topic, making it easier for resource-constrained applicants that may not have a professional grant writer to search for relevant grants. However, grantee association officials also raised concerns about aspects of the site. For example, although there is an OMB policy directive establishing a standard format for federal funding opportunity announcement requirements, grantee officials said that in practice the lack of a standardized grants announcement can increase their burden because extra time is required to determine eligibility and other requirements. We have also reported that persistent management challenges, such as a lack of performance measures and communication with stakeholders and unclear roles and responsibilities among the governance entities, have adversely affected Grants.gov operations. Since we first reported on these issues in July 2009, HHS has made some progress to address these challenges and increase the effectiveness and long-term viability of Grants.gov. Specifically, HHS is taking steps to implement several of our prior recommendations. 
For example, in 2012, the Program Management Office (PMO) adopted a performance monitoring tool that currently monitors 22 technical measures covering availability, usage, and performance. The PMO also hired a communications director whose responsibilities include outreach to stakeholders. The PMO reported that starting in fiscal year 2013, HHS plans to more actively solicit input from grants applicants on ways to enhance the site. While it is too soon to determine the effectiveness of these reforms, tracking site performance and developing an effective two-way communication strategy to engage with stakeholders are practices which, if thoughtfully and deliberately implemented, may address the challenges we identified. Promoting shared information technology (IT) solutions for managing grants—an original goal of P.L. 106-107 and the governance bodies charged with implementing the legislation—could provide an additional way to simplify post-award grants management activities by consolidating the administration and management of grants across agencies and potentially reducing the costs of multiple agencies developing and maintaining grants management systems. However, it is unclear whether promoting shared IT systems for grants management is still a priority, and if so, which agency is in charge of this effort. In 2004, OMB established the GMLOB to develop government-wide solutions intended to support end-to-end grants management activities, including shared grants management systems (which could include modules for intake of applications, peer review, award, payment, and performance monitoring and final closeout of the grant award). In 2005, OMB chose three agencies—the National Science Foundation (NSF), the Administration for Children and Families within the Department of Health and Human Services (HHS), and the Department of Education—to develop grants management systems that they could provide for other agencies.
Currently, NSF operates Research.gov, which has one other external agency customer that uses individual modules of the Research.gov system; the Administration for Children and Families operates GrantSolutions.gov, which services 17 government customers, 8 of which are HHS components; and Education operates G5, which has 13 customers all of which are Education components (see appendix III for a list of NSF, HHS, and Education customers). Since 2012, there has been uncertainty regarding the status of and future plans for the operational elements of what was the GMLOB. OMB folded GMLOB into the Financial Management Line of Business (FMLOB)—an initiative focused on financial systems improvements— in 2012, and initially announced the Treasury Department would be the managing partner. Later, OMB informed us the General Services Administration (GSA) would be the managing partner, but GSA officials informed us they were only the managing partner of the FMLOB from June to September 30, 2012. GSA officials also told us that according to OMB officials, GSA would not be responsible for working with NSF, HHS, or Education, or promoting shared service agreements for grants management systems. As of March 2013, OMB had not publicly announced who the managing partner of FMLOB would be for fiscal year 2013. After receiving a draft copy of this report for its review and comment, OMB issued a “Controller Alert” on April 29, 2013, announcing that, for fiscal year 2013, the Department of the Treasury’s Office of Financial Innovation and Transformation (FIT) will serve as Managing Partner and the Program Management Office for the FMLOB. OMB also highlighted the Controller Alert in its comment letter to us, also dated April 29, 2013 (see appendix IV for OMB’s letter). In May 2012, OMB issued guidance directing agencies to find ways to spend federal dollars on IT more efficiently to compensate for a 10 percent reduction in overall IT spending. 
The guidance also directed agencies to propose how they would reinvest the savings from proposed cuts to produce a favorable return on investments. One of the strategies OMB had previously highlighted to reduce duplication, improve collaboration, and eliminate waste across agency boundaries was the Federal IT Shared Services Strategy, also referred to as “Shared First,” an effort to share common IT services across agencies. The guidance did not specifically mention grants management systems, and it is unclear whether OMB intends to encourage other agencies to partner with NSF, HHS, and Education to continue sharing services. In its April 29, 2013, Controller Alert, OMB stated that in accordance with OMB’s guidance on shared services, the Treasury’s FIT will “lead efforts to transform federal financial management, reduce costs, increase transparency, and improve delivery of agencies’ missions by operating at scale, relying on common standards, shared services, and using state-of-the-art technology.” However, OMB’s Controller Alert did not address whether the roles of NSF, HHS, and Education would change as a result of FIT’s leadership in this area. As part of its efforts to improve grants management government-wide, OMB has instructed agencies to improve the timeliness of their grant closeout procedures. Once the grant’s period of availability to the grantee has expired, the grant can be closed out and the funds deobligated by the awarding agency. Timely closeout helps to ensure that grantees have met all financial and reporting requirements. It also allows federal agencies to identify and redirect unused funds to other projects and priorities as authorized or to return unspent balances to the Department of the Treasury. In August 2008, we reported that during calendar year 2006 about $1 billion in undisbursed funding remained in expired grant accounts in the largest civilian payment system for grants, the Payment Management System. 
In a follow-up report issued in April 2012, we found that at the end of fiscal year 2011 there was more than $794 million in funding remaining in expired grant accounts. To improve the timeliness of grant closeout, we recommended that OMB instruct all executive departments and independent agencies to annually track the amount of undisbursed grant funding remaining in expired grant accounts and report on the status and resolution of the undisbursed funding in their annual performance plan and annual performance and accountability report. In response to our recommendations, on July 24, 2012, the Controller of OMB issued a “Controller Alert” to all federal chief financial officers instructing agencies to take appropriate action to close out grants in a timely manner. The alert provided strategies agencies should consider to achieve this goal, including establishing annual or semiannual performance targets for timely grant closeout, monitoring closeout activity, and tracking progress in reducing closeout backlog. In a September 2012 report, we identified certain key features for effective interagency collaborative efforts, including the importance of identifying goals for short- and long-term outcomes. Identifying goals can help decision makers reach a shared understanding of what problems genuinely need to be fixed, how to balance differing objectives, and what steps need to be taken to create not just short-term advantages but long-term gains. In February 2013, COFAR posted five priority goals for fiscal years 2013 to 2015 to the U.S. Chief Financial Officers Council website:
1. Implement revised guidance to target risk and reduce administrative burden.
2. Standardize federal agencies’ business processes to streamline data collections.
3. Provide public validated financial data that aligns spending information with core financial accounting data in coordination with the work of the GATB.
4. Ensure that federal agencies’ grants professionals are highly qualified.
5. Reduce the number of unclean audit opinions for grant recipients.
For each priority, COFAR identified proposed deliverables and milestone dates for those deliverables. As of May 2013, COFAR had not released to the public an implementation plan that includes other key elements such as performance targets, mechanisms to monitor, evaluate, and report on progress made towards stated goals, and goal leaders who can be held accountable for those goals. Establishing implementation goals and tracking progress toward those goals helps to pinpoint performance shortfalls and suggest midcourse corrections, including any needed adjustments to future goals and milestones. Reporting on these activities can help key decision makers within the agencies, as well as stakeholders, obtain feedback for improving both policy and operational effectiveness. In response to the draft report we provided for them to review, OMB officials stated in their comment letter dated April 29, 2013, that they used a more detailed internal project plan to monitor timelines and roles and responsibilities. They acknowledged that more needs to be done by pointing out that as the work of COFAR matured, the council would be better able to articulate metrics that allowed for a more thorough evaluation of whether the policy changes were having their intended impacts. They added that the publicly stated deliverables were intended to leave room for further evolution of the right approach for implementation. While we have not been able to assess or validate OMB’s newly provided information on COFAR’s approach, we believe a more detailed, publicly available implementation plan that will allow Congress and the public to better monitor the progress of the reforms is needed.
We previously reported that when interagency councils clarify who will do what, identify how to organize their joint and individual efforts, and articulate steps for decision making, they enhance their ability to work together and achieve results. In interviews with federal grant management officials, we were told that OMB and the council do not always clearly articulate the roles and responsibilities for various streamlining initiatives, plans for future efforts, and means for engaging small grant-making agency stakeholders and utilizing agency resources. Agency officials involved with current grants management reforms told us that the roles and responsibilities for various streamlining initiatives are not always clear. For example, OMB designated Treasury as the managing partner of the FMLOB initiative, then designated GSA as the managing partner, but only for four months. As of March 2013, OMB had not issued a subsequent announcement as to which agency would take over the grants management related functions of FMLOB after GSA. In the meantime, the former GMLOB consortia leads are unsure whether promoting shared grants management systems is still a priority. As previously mentioned, OMB's Controller Alert of April 29, 2013, announced that Treasury's FIT office will serve as Managing Partner and the Program Management Office for the FMLOB for fiscal year 2013. However, the Controller Alert did not address whether the roles of NSF, HHS, and Education would change as a result of FIT's leadership in this area. In addition to OMB, eight agencies are permanent members of COFAR. COFAR also has a rotating member, currently NSF, which serves a two-year term. Agency officials involved with COFAR told us that the council is still determining the role of the rotating agency and how COFAR will reach out to smaller grant-making agencies not on the council.
According to OMB officials, they are still working out how to provide other agencies with a communication channel and the opportunity to review and comment on proposed changes. In its April 29, 2013, comment letter, OMB acknowledged that the expectation was that the rotating member would be able to represent the views of smaller agencies, and that there may be federal officials or agencies that wish to be more involved or that are not fully aware of all of COFAR's work. OMB officials also stated that COFAR staff will help the rotating agency gather input and feedback from the broader collection of smaller agencies. OMB officials said incorporating the views of all federal grant-making agencies was essential to the work of the COFAR and that their strategy would continue to evolve over time, as it will for engaging with nonfederal stakeholders. Agency officials also told us that they are still trying to determine how to bring together financial, policy, and IT staff, and incorporate their areas of expertise into discussions on proposed policy and program changes. One agency official noted this had been a challenge with the previous grants management structure. She said that the GPC focused on policy and the GEB focused on systems and technology solutions and, even though there was some level of overlap among the people staffing the two boards, a stronger connection was needed to ensure that streamlining efforts included technology and policy expertise. In their comment letter, OMB officials stated they made repeated efforts to solicit the views of all federal agencies through town hall meetings, formal circulation of draft policies for comment prior to publication, and conference calls to share information on key issues. We have noted that communication is not just "pushing the message out," but should facilitate a two-way, honest exchange and allow for feedback from relevant stakeholders.
We previously reported that grantees felt that the lack of opportunities to provide timely feedback resulted in poor implementation and prioritization of streamlining initiatives and limited grantees’ use and understanding of new functionality of electronic systems. For example, grantees experienced problems stemming from policies and technologies that were inconsistent with their business practices and caused inefficiencies in their administration of grants. Members of the grantee community told us they continue to have concerns because they do not see a role for themselves as OMB and COFAR develop priorities for reforming federal grants management. For example, officials from the eight associations representing state and local governments, universities, and nonprofit recipients told us that outreach to grantees on proposed reforms continues to be inconsistent or could be improved. Ten organizations representing state and local officials, including some of the same organizations we interviewed, submitted a letter to OMB after the creation of COFAR was announced, expressing their disappointment that there would be no state or local representation on the council. In the letter, the state and local officials stated that formal engagement of all stakeholder parties is necessary for success and that their exclusion from the council undermined the important work of the council before it even commenced. OMB officials stated they are seeking different forums to engage with members of the grantee community. Several association officials said they appreciated that OMB reached out to them for comment before proposing changes to OMB circulars. OMB and COFAR also hosted a webinar in February 2013 to coincide with the circular reform proposal, and invited representatives from grantee associations to discuss their concerns and ask questions. 
In addition, following their review of the draft report, OMB officials provided us with a list of invitations for speaking engagements they have accepted since February 2013 as a snapshot of the types of engagements they participate in to communicate with interested stakeholder groups. While improved outreach to the broader grantee community is an ongoing challenge, certain groups of grantees have established communication channels with the federal government. These approaches could be a useful model for COFAR to build upon with different grantee communities. For example, we have previously reported that the research community established avenues of communication with relevant federal agencies through the Federal Demonstration Partnership (FDP), a cooperative initiative of 10 agencies and over 90 research institutions. Agency officials and members of the research community continue to describe this partnership as an effective model for promoting two-way communication. Officials from the HHS Grants.gov PMO told us they solicit information and feedback related to the functionality of Grants.gov through quarterly meetings and open forum-type sessions with FDP members. According to these officials, consistent communication with the FDP has enabled them to survey the community and determine appropriate improvements to the system to avoid undertaking inefficient or counterproductive revisions to the Grants.gov system. Likewise, an FDP official told us face-to-face meetings with grantor agency officials allow them to provide input on proposed changes to grants management policies and practices. In a second example, several state and local grantee association officials referred to the communication channels that were set up while implementing the Recovery Act as an example of effective two-way communication they would like to see replicated.
In the same letter submitted to OMB after the creation of COFAR was announced, 10 organizations representing state and local officials referenced the constant and consistent communication OMB and the Recovery Board engaged in with members of the grantee community as a requirement for success. We have also previously reported that OMB and Recovery Board officials held weekly conference calls with state and local representatives to hear comments, concerns, and suggestions from them and share decisions. As a result of these calls, federal officials changed their plans and related guidance. This type of interaction was essential in clarifying federal intent, addressing questions, and establishing working relationships for the implementation efforts. However, several officials said these outreach efforts have dwindled, and they again feel OMB is not involving them in COFAR priority-setting discussions. Although the circumstances surrounding the Recovery Act were unusual in that there was a high level of funding available that had to be spent quickly, there are opportunities for COFAR to learn what communication strategies worked for agency officials and grantees, and apply those strategies. Another possible mechanism for improving communication with states and localities might be to use the Partnership Fund for Program Integrity Innovation (Partnership Fund) as a venue for federal policymakers to communicate and engage with the grantee community on proposed grants management reforms. Established by the 2010 Consolidated Appropriations Act, and administered by OMB, the Partnership Fund allows federal, state, local, and tribal agencies to pilot innovative ideas for improving assistance programs in a controlled environment. 
We previously reported that as part of implementing the Partnership Fund, OMB established a Federal Steering Committee, consisting of senior policy officials from federal agencies that administer benefits programs, and formed the "Collaborative Forum." The Collaborative Forum is made up of state representatives and stakeholder experts, including federal agencies, nongovernmental organizations, and others, who collaborate to generate, develop, and consult on potential pilot projects. The forum's website, http://collaborativeforumonline.com, is used to hold discussions about potential projects and to share lessons and best practices among members. In a time of fiscal constraint, continuing to support the current scope and breadth of federal grants to state and local governments will be a challenge. Given this fiscal reality, it becomes more important to design and implement grants management policies that ensure accountability for the proper use of federal funds without increasing the complexity and cost of grants administration for agencies and grantees. Duplicative, unnecessarily burdensome, and conflicting grants management requirements result in resources being directed to nonprogrammatic activities, which could prevent the cost-effective delivery of services at the local level. Streamlining and simplifying grants management processes is critical to ensuring that federal funds are reaching the programs and services Congress intended. In October 2011, OMB created COFAR and tasked it with overseeing the development of federal grants management policy. Although COFAR recently identified some priorities, it has not yet released to the public an implementation plan that includes performance targets, mechanisms to monitor, evaluate, and report on progress made towards stated goals, and goal leaders who can be held accountable for those goals.
Although OMB officials provided us with some additional and updated information in their comment letter that we were unable to assess or validate, they agreed with our recommendations that OMB and COFAR need to develop an implementation schedule and mechanisms to monitor, evaluate, and report on results, clarify roles and responsibilities for the various streamlining initiatives and engagement with federal stakeholders, and develop an effective two-way communication strategy that includes the grant recipient community. OMB officials acknowledged that more needs to be done to clarify roles and responsibilities and plans for moving forward with various streamlining initiatives. Moreover, stakeholders continue to express frustration about limited opportunities to provide feedback on proposed reforms. If grantees remain isolated from COFAR's development of new grants management systems and policies, those systems and policies could be ineffective or require more resources to use. We recommend the Director of OMB, in collaboration with the members of COFAR, take the following three actions:
1. Develop and make publicly available an implementation schedule that includes performance targets, goal leaders who can be held accountable for each goal, and mechanisms to monitor, evaluate, and report on results.
2. Clarify the roles and responsibilities for various streamlining initiatives and steps for decision making, in particular how COFAR will engage with relevant grant-making agency stakeholders and utilize agency resources.
3. Improve efforts to develop an effective two-way communication strategy that includes the grant recipient community, smaller grant-making agencies that are not members of COFAR, and other entities involved with grants management policy.
We provided a draft of this report to OMB, Education, GSA, HHS, and NSF for comment. NSF and HHS provided technical comments, which we incorporated as appropriate.
In its written comments, OMB generally concurred with our findings and recommendations but also said there had been significant progress on the grants management streamlining process in recent months, including using a more detailed project plan internally to monitor progress made towards the priorities established for COFAR; making efforts to solicit the views of all federal agencies including town hall meetings, formal circulation of draft policies for comment prior to publication, and conference calls to share information on key issues; and using meetings, webinars, and teleconferences to inform a diverse cross section of stakeholder groups about the work that the COFAR is doing, and to get their feedback on upcoming policy changes. Because OMB only provided us with additional and updated information at the end of its comment period, we could neither verify nor validate it. However, we have incorporated OMB’s comments into the body of the report, as appropriate, in order to make our review as up-to-date as possible. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Education, and Health and Human Services; Administrator of GSA; Director of the National Science Foundation; the Director of the Office of Management and Budget and to appropriate congressional committees. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions or wish to discuss the material in this report further, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in Appendix V. We were asked to examine federal grants management reform efforts. 
To accomplish this, we reviewed (1) what the Office of Management and Budget (OMB) and other federal grants governance bodies have done since the passage of P.L. 106-107 in 1999 to reform grants management processes and reduce unnecessary burdens on applicants, grantees, and federal agencies; and (2) what actions, if any, have been taken to address what we have found to be persistent management challenges, such as the lack of a comprehensive plan for implementing reforms, confusion over roles and responsibilities among grants governance bodies, and inconsistent two-way communication with stakeholders. To address both objectives, we reviewed P.L. 106-107; and OMB circulars and guidance such as OMB-12-01, “Creation of the Council on Financial Assistance Reform,” OMB A-102, “Grants and Cooperative Agreements With State and Local Governments,” and A-110, “Uniform Administrative Requirements for Grants and Other Agreements with Institutions of Higher Education, Hospitals and Other Non-Profit Organizations,” which describe administrative requirements for different types of grantees, and OMB’s February 2012 advanced notice of proposed guidance, which proposes several ideas for circular reforms. We also reviewed action plans created by former and current interagency councils with responsibility for overseeing grants management reforms, as well as our previous work and other literature on grants management initiatives and the related challenges that have undermined the government’s ability to simplify grants management processes, reduce unnecessary burden on applicants, grantees, and federal agencies, and improve delivery of services to the public. We also reviewed our previous work on collaborative mechanisms and management consolidation efforts. 
We interviewed officials from OMB who are involved with developing and implementing government-wide grants management policy; officials at the three agencies that served as consortia leads for the 2004 to 2012 Grants Management Line of Business (GMLOB) e-government initiative: the National Science Foundation (NSF), Health and Human Services (HHS), and the Department of Education; and officials at the agency that managed the Financial Management Line of Business (FMLOB) e-government initiative in 2012: the General Services Administration (GSA). To capture the perspective of grantor agencies, we spoke to officials from HHS, NSF, and the Department of Education in their grant-making and administration capacities. To understand grantee perspectives, we interviewed officials from grantee associations that represent a variety of grantee types including state and local governments, nonprofit organizations, and universities. To select the grantee associations that we interviewed, we relied on three data sources:
1. Our previous work on grant streamlining, which included 31 grantee associations separated into four categories: state government, local and regional government, nonprofits, and tribal;
2. A list of grant associations included on the Grants.gov website; and
3. Additional grantee associations that have been active in grants-related topics in the past.
We selected 16 grantee associations to contact. These associations represented a variety of grantee types, including state and local governments and nonprofit organizations, as well as associations representing grantees on crosscutting grants-related issues. In addition, the associations could offer a historical perspective on federal efforts to streamline grants management. Of the 16 associations we contacted, 8 associations said they were knowledgeable about grants management reforms and could answer our questions.
We interviewed officials at these 8 associations:
National Association of State Auditors, Comptrollers, and Treasurers
National Association of State Budget Officers
National Association of Regional Councils
National Association of Counties
National Grants Management Association
National Grants Partnership
Federal Demonstration Partnership
National Association of Chief Information Officers
Two additional associations, Federal Funds Information for States and National Council of Nonprofits, sent us comments on grants management reforms in writing. We conducted this performance audit from July 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
To address grants management issues, the act (P.L. 106-107) required the Office of Management and Budget (OMB) to direct, coordinate, and assist federal agencies in establishing common grants management systems, and simplifying their application, administrative, and reporting procedures with the goal of improved efficiency and delivery of services to the public. The law sunsetted in 2007. The Chief Financial Officers (CFO) Council established the GPC to implement P.L. 106-107. Composed of grants policy experts from across the federal government, the GPC oversaw the efforts of cross-agency work groups focusing on different aspects of grants management, recommended policies and practices to OMB, and coordinated related interagency activities. OMB replaced the GPC in 2011 with the Council on Financial Assistance Reform (COFAR).
The Grants Executive Board (GEB) consisted of senior officials from federal grant-making agencies and provided strategic direction and oversight of Grants.gov, a grant identification and application portal. OMB coordinated grants management policy through the board and the GPC until October 2011, when OMB announced that COFAR would replace both of these federal grant bodies. In response to P.L. 106-107, OMB created Grants.gov, a central grant identification and application website for federal grant programs. The Grants.gov oversight and management structure includes HHS, the managing partner agency; the Grants.gov Program Management Office, which is housed within HHS and responsible for day-to-day management; and formerly the GEB, which provided leadership and resources. The GPC was also involved because of its role in streamlining pre-award policies and implementing P.L. 106-107. The GMLOB was established to support the development of a government-wide solution supporting end-to-end grants management activities that promote citizen access, customer service, and agency financial and technical stewardship. In 2005, OMB selected the Department of Health and Human Services (HHS) and the National Science Foundation (NSF) to jointly lead the effort. Later, NSF took over the leadership role. In fiscal year 2012, it became part of the Financial Management Line of Business. The Federal Funding Accountability and Transparency Act required OMB to establish a free, publicly accessible website containing data on federal awards and subawards. OMB began providing data on federal awards on USAspending.gov in December 2007 and phased in reporting on subawards in 2010. Congress and the administration built provisions (such as quarterly use and outcome reporting) into the Recovery Act to increase transparency and accountability over spending. The Recovery Act called for a website (Recovery.gov) for the public to access reported data. A second website (FederalReporting.gov) was established so grant recipients could report their data.
The Recovery Act also established the Recovery Accountability and Transparency Board to coordinate and conduct oversight of funds distributed under the act in order to prevent fraud, waste, and abuse. The Government Accountability and Transparency Board (GATB), established by an executive order, provides strategic direction for enhancing the transparency of federal spending and advances efforts to detect and remediate fraud, waste, and abuse in federal programs. It is charged to work closely with the existing Recovery Board to extend its successes and lessons learned to all federal spending. COFAR replaced the GPC and GEB in October 2011. OMB charged COFAR with identifying emerging issues, challenges, and opportunities in grants management and policy and providing recommendations to OMB on policies and actions to improve grants administration. COFAR is also expected to serve as a clearinghouse of information on innovations and best practices in grants management. COFAR is made up of the OMB Controller and the Chief Financial Officers from the largest eight grant-making agencies and one of the smaller federal grant-making agencies. The latter serves a rotating 2-year term. In February 2012, OMB published an advanced notice of proposed guidance detailing a series of reform ideas that would standardize information collection across agencies, adopt a risk-based model for single audits, and provide new administrative approaches for determining and monitoring the allocation of federal funds. After receiving more than 350 public comments on its advanced notice of proposed guidance, OMB published its circular reform proposal in February 2013, and plans to implement the reforms by December 2013. To improve the timeliness of grant closeout and reduce undisbursed balances, the Controller of OMB issued a "Controller Alert" to all federal chief financial officers instructing agencies to take appropriate action to close out grants in a timely manner.
It provided a number of strategies such as establishing annual performance targets for timely grant closeout. In addition to the contact named above, Thomas M. James, Assistant Director, and Elizabeth Hosler and Jessica Nierenberg, Analysts-in-Charge, supervised the development of this report. Travis P. Hill, Melanie Papasian, and Carol Patey made significant contributions to all aspects of this report. Elizabeth Wood assisted with the design and methodology, Amy Bowser provided legal counsel, Donna Miller developed the report's graphics, and Susan E. Murphy and Sandra L. Beattie verified the information in this report. Other important contributors included Beryl Davis, Kim McGatlin, Joy Booth, and James R. Sweetman, Jr. | GAO has previously identified several management challenges that have hindered grants management reform efforts. GAO was asked to review recent federal grants management reform efforts. GAO reviewed (1) what OMB and other federal grants governance bodies have done since the passage of P.L. 106-107 to reform grants management processes, and (2) what actions, if any, have been taken to address what GAO has found to be persistent management challenges. GAO reviewed relevant legislation, OMB circulars and guidance, action plans of interagency councils responsible for overseeing grants management reforms, and previous GAO work and other literature on grants management reforms. GAO also reviewed its previous work on collaborative mechanisms and management consolidation efforts. GAO also interviewed officials from OMB, grant-making agencies, and associations representing a variety of grantee types. In the past 14 years, since the passage of the Federal Financial Assistance Management Improvement Act of 1999 (P.L. 106-107), there has been a series of legislative- and executive-sponsored initiatives aimed at reforming aspects of the grants management life cycle.
Recently, a new grants reform governance body, the Council on Financial Assistance Reform (COFAR), replaced two former federal boards--the Grants Policy Committee (GPC) and Grants Executive Board (GEB). The Office of Management and Budget (OMB) created COFAR and charged it with identifying emerging issues, challenges, and opportunities in grants management and policy and providing recommendations to OMB on policies and actions to improve grants administration. In addition to this new governance structure, OMB and other entities involved with federal grants management are overseeing several ongoing reform initiatives intended to address the challenges grantees encounter throughout the grants life cycle. These initiatives include consolidating and revising grants management circulars, simplifying the pre-award phase, promoting shared information technology (IT) solutions such as the development of shared end-to-end grants management systems, and improving the timeliness of grant close out and reducing undisbursed balances. Management and coordination challenges could hinder the progress of some of these initiatives. For example, although promoting shared IT solutions for grants management--an original goal of P.L. 106-107--remains a priority, there has been uncertainty regarding the status of this initiative and future plans for it. The lead agency for this initiative changed several times since 2012, and it has been unclear at times whether promoting shared IT systems for grants management would continue to be a priority, and if so, which agency was in charge. After receiving GAO's draft report for review, OMB issued a "Controller Alert" on April 29, 2013, announcing that the Department of the Treasury would lead efforts to transform federal financial management by, among other things, relying on common standards, shared services, and using state-of-the-art technology. 
Although COFAR has recently identified several high-level priority goals for 2013 through 2015, it faces some of the same management challenges identified in previous GAO reports on grants management, such as the lack of a comprehensive plan for implementing reforms, confusion over roles and responsibilities among grants governance bodies, and inconsistent communication and outreach to the grantee community. COFAR has not yet released to the public an implementation plan that includes key elements such as performance targets and goal leaders for each goal, and mechanisms to monitor, evaluate, and report on progress made toward stated goals. Furthermore, agencies involved with current grants management reforms are not always clear on their roles and responsibilities for various streamlining initiatives, which may cause such initiatives to languish. Finally, GAO found that members of the grant recipient community continue to voice concern because they do not see a role for themselves as OMB and COFAR develop priorities for reforming federal grants management. In the comments it provided on April 29, 2013, OMB described actions it is taking to address these challenges, such as using a more detailed project plan internally and scheduling outreach events with federal partners and members of the grantee community. GAO recommends that the Director of OMB: (1) develop and make publicly available an implementation schedule that includes performance targets, goal leaders who can be held accountable for each goal, and mechanisms to monitor, evaluate, and report on results; (2) clarify the roles and responsibilities for various streamlining initiatives; and (3) develop an effective two-way communication strategy with relevant stakeholders. OMB generally concurred with our recommendations and provided additional and updated information, which was incorporated into the report as appropriate.
You are an expert at summarizing long articles. Proceed to summarize the following text:
The permanent provisions of the Brady Handgun Violence Prevention Act took effect on November 30, 1998. Under the Brady Act, before a federally licensed firearms dealer can transfer a firearm to an unlicensed individual, the dealer must request a background check through NICS to determine whether the prospective firearm transfer would violate federal or state law. The Brady Act's implementing regulations also provide for conducting NICS checks on individuals seeking to obtain permits to possess, acquire, or carry firearms. According to the Department of Justice, under current law, inclusion on a terrorist watch list is not a stand-alone factor that would prohibit a person from receiving or possessing a firearm. Thus, if no other federal or state prohibitors exist, a known or suspected terrorist can legally purchase firearms. Approximately 8.5 million background checks are run through NICS each year, of which about one-half are processed by the FBI's NICS Section and one-half by designated state and local criminal justice agencies. Under federal and state requirements, prospective firearms purchasers must provide information that is needed to initiate a NICS background check. For example, in order to receive a firearm from a licensed dealer, federal regulations require an individual to complete a Firearms Transaction Record (ATF Form 4473). Among other things, this form requires prospective purchasers to provide the following descriptive data: name, residence address, place of birth, height and weight, sex, date of birth, race, state of residence, country of citizenship, and alien registration number (for non-U.S. citizens). A Social Security number is optional.
Firearms dealers use the Form 4473 to record information about the firearms transaction, including the type of firearm(s) to be transferred (e.g., handgun or long gun); the response provided by the FBI’s NICS Section or state agency (e.g., proceed or denied); and information specifically identifying each firearm to be transferred (e.g., manufacturer, model, and serial number), which shows whether the transaction involves the purchase of multiple firearms. Individuals applying for state permits to possess, acquire, or carry firearms also are required to provide personal descriptive data on a state permit application. State laws vary in regard to the types of information required from permit applicants. The purpose of the NICS background check is to search for the existence of a prohibitor that would disqualify a potential buyer from purchasing a firearm pursuant to federal or state law. During the NICS check, descriptive data provided by an individual—such as name and date of birth—are used to search databases containing criminal history and other records supplied by federal, state, and local agencies. One of the databases searched by NICS is the FBI’s National Crime Information Center database, which contains criminal justice information (e.g., names of persons who have outstanding warrants) and also includes records on persons identified as known or suspected members of terrorist organizations. The terrorist-related records are maintained in the National Crime Information Center’s Violent Gang and Terrorist Organization File (VGTOF), which was designed to provide law enforcement personnel with the means to exchange information on members of violent gangs and terrorist organizations. Although NICS checks have included searches of terrorist records in VGTOF, NICS personnel at the FBI and state agencies historically did not receive notice when there were hits on these records. 
The FBI blocked the VGTOF responses (i.e., the responses were not provided to NICS personnel) under the reasoning that VGTOF records contain no information that would legally prohibit the transfer of a firearm under federal or state law. However, in November 2002, the FBI began an audit of NICS transactions where information indicated the individual was an alien, including transactions involving VGTOF records. In one instance involving a VGTOF record, the audit revealed that an FBI field agent had knowledge of prohibiting information not yet entered into the automated databases checked by NICS. As a result, in November 2003, the Department of Justice—citing Brady Act authorities—directed the FBI to revise NICS procedures to better ensure that subjects of VGTOF records who have disqualifying factors do not receive firearms in violation of applicable federal or state law. Specifically, the Brady Act authority cited allows the FBI up to 3 business days to check for information demonstrating that a prospective buyer is prohibited by law from possessing or receiving a firearm. Under revised procedures effective February 3, 2004, FBI and state personnel who handle NICS transactions began receiving notice of transactions that hit on VGTOF records. Also, under the revised procedures, all NICS transactions with potential or valid matches to VGTOF records are automatically delayed to give NICS personnel the chance to further research the transaction before a response (e.g., proceed or denied) is given to the initiator of the background check. For all potential or valid matches with terrorist records in VGTOF, NICS personnel are to begin their research by contacting the Terrorist Screening Center (TSC) to verify that the subject of the NICS transaction matches the subject of the VGTOF record, based on the name and other descriptors. 
For confirmed matches, NICS personnel are to determine whether federal counterterrorism officials (e.g., FBI field agents) are aware of any information that would prohibit the individual by law from receiving or possessing a firearm. For example, FBI field agents could have information not yet posted to databases checked by NICS showing the person is an alien illegally or unlawfully in the United States. If counterterrorism officials do not provide any prohibiting information, and there are no other records in the databases checked by NICS showing the individual to be prohibited, NICS personnel are to advise the initiator of the background check that the transaction may proceed. If the NICS background check is not completed within 3 business days, the gun dealer may transfer the firearm (unless state law provides otherwise). Designated state and local criminal justice agencies are responsible for conducting background checks in accordance with NICS policies and procedures. However, the Attorney General and the FBI ultimately are responsible for managing the overall NICS program. Thus, the FBI’s Criminal Justice Information Services Division conducts audits of the states’ compliance with federally established NICS regulations and guidelines. Also, the FBI is a lead U.S. law enforcement agency responsible for investigating terrorism-related matters. During presale screening of prospective firearms purchasers, NICS searches terrorist watch list records generated by numerous federal agencies, including components of the Departments of Justice, State, and Homeland Security. Applicable records are consolidated by TSC, which then makes them available for certain uses or purposes, such as inclusion in VGTOF—a database routinely searched during NICS background checks. Terrorist watch lists are maintained by numerous federal agencies. 
These lists contain varying types of data, from biographical data—such as a person’s name and date of birth—to biometric data—such as fingerprints. Our April 2003 report identified 12 terrorist or criminal watch lists that were maintained by nine federal agencies. Table 1 shows the 12 watch lists and the current agencies that maintain them. At the time we issued our April 2003 report, federal agencies did not have a consistent and uniform approach to sharing terrorist watch list information. TSC was established in September 2003 to consolidate the government’s approach to terrorism screening and provide for the appropriate and lawful use of terrorism information. In addition to consolidating terrorist watch list records, TSC serves as a single point of contact for law enforcement authorities requesting assistance in the identification of subjects with possible ties to terrorism. TSC has access to supporting information behind terrorist records and can help resolve issues regarding identification. TSC also coordinates with the FBI’s Counterterrorism Division to help ensure appropriate follow-up actions are taken. TSC receives the vast majority of its information about known or suspected terrorists from the Terrorist Threat Integration Center, which assembles and analyzes information from a wide range of sources. In addition, the FBI provides TSC with information about purely domestic terrorism (i.e., activities having no connection to international terrorism). According to TSC officials, from December 1, 2003—the day TSC achieved an initial operating capability—to March 12, 2004, TSC consolidated information from 10 of the 12 watch lists shown in table 1 into a terrorist-screening database. The officials noted that the database has routinely been updated to add new information. Further, TSC officials told us that information from the remaining 2 watch lists—the U.S.
Immigration and Customs Enforcement’s Automated Biometric Identification System and the FBI’s Integrated Automated Fingerprint Identification System—will be added to the consolidated database at a future date not yet determined. A provision in the Intelligence Authorization Act for Fiscal Year 2004 required the President to submit a report to Congress by September 16, 2004, on the operations of TSC. Among other things, this report was to include a determination of whether the data from all the watch lists enumerated in our April 2003 report have been incorporated into the consolidated terrorist-screening database; a determination of whether there remain any relevant databases not yet part of the consolidated database; and a schedule setting out the dates by which identified databases—not yet part of the consolidated database—would be integrated. As of November 2004, the report on TSC operations had not been submitted to Congress. TSC, through the participation of the Departments of Homeland Security, Justice, and State and intelligence community representatives, determines what information in the terrorist-screening database will be made available for which types of screening purposes. In November 2003, the Department of Justice directed the FBI’s NICS Section to develop appropriate procedures for NICS searches of TSC records when the center and its consolidated watch list database were established and operational. In accordance with this directive, the FBI and TSC have implemented procedures that allow all eligible records in the center’s consolidated terrorist-screening database to be added to VGTOF and searched during NICS background checks. According to FBI and TSC officials, since December 2003, eligible records from the terrorist-screening database have been added to VGTOF and searched during NICS background checks.
For the period February 3 through June 30, 2004, FBI data and our interviews with state agency officials indicated that 44 NICS transactions resulted in valid matches with terrorist records in VGTOF. Of this total, 35 transactions were allowed to proceed because the background checks found no prohibiting information, such as felony convictions or illegal immigrant status, as shown in table 2. According to FBI data and our interviews with state agency officials, the 44 total valid matches shown in table 2 involved 36 different individuals (31 individuals had one match and 5 individuals had more than one match). We could not determine whether the 5 individuals with more than one match had actually attempted to purchase firearms or acquire firearms permits on separate occasions, in part because information related to applicable NICS records was not available due to legal requirements for destroying information on transactions that are allowed to proceed. Our work indicated that the multiple transactions could have, for example, been run for administrative purposes (e.g., rechecks). The FBI’s revised procedures for handling NICS transactions with valid matches to terrorist watch list records—i.e., to delay the transactions to give NICS personnel the chance to further research for prohibitors—have successfully resulted in the denial of firearms transactions involving known or suspected terrorists who have disqualifying factors. Specifically, two of the six denied transactions shown in table 2 were based on prohibiting information provided by FBI field agents that had not yet been entered in automated databases checked by NICS. According to agency officials in the two states that handled the transactions, FBI field agents provided information showing that one of the individuals was judged to be mentally defective and the other individual was an alien illegally or unlawfully in the United States. Based on this information, both firearm transfers were denied. 
The vast majority of NICS transactions that generated initial hits on terrorist records in VGTOF did not result in valid matches. Specifically, during the period in which the 44 valid matches were identified—February 3 through June 30, 2004—officials from the FBI’s NICS Section estimated that approximately 650 NICS transactions generated initial hits on terrorist records in VGTOF. The high rate of potential matches returned—i.e., VGTOF records returned as potential matches based upon the data provided by the prospective purchaser—is due to the expanded search parameters used to compare the subject of a background check with a VGTOF record. An FBI NICS Section official told us that by comparing data from the NICS transaction (e.g., name, date of birth, and Social Security number) with data from the VGTOF record, it generally is easy to determine if there is a potential or valid match. The official told us that NICS personnel drop the false hits from further consideration and follow up only on transactions considered to be potential or valid matches. A false hit, for example, could occur when the subject of a NICS transaction and the subject of a VGTOF record have the same or a similar name but a different date of birth and Social Security number. As table 2 shows, the 44 NICS transactions with valid matches to terrorist records in VGTOF were processed by the FBI’s NICS Section and 11 states during the period February 3 through June 30, 2004. In December 2004, FBI officials told us that during the 4 months following June 2004—that is, during July through October 2004—the FBI’s NICS Section handled an additional 14 transactions with valid matches to terrorist records in VGTOF. Of the 14 transactions with valid matches, FBI officials told us that 12 were allowed to proceed because the background checks found no prohibiting information, and 2 were denied based on prohibiting information. 
It was beyond the scope of our work to assess the reliability or accuracy of the additional data. Federal and state procedures—developed and disseminated under the Department of Justice’s direction—contain general guidelines that allow FBI and state personnel to share information from NICS transactions with federal counterterrorism officials, in the pursuit of potentially prohibiting information about a prospective gun buyer. However, the procedures do not address the specific types of information that can or should be provided or the sources from which such information can be obtained. Justice’s position is that the types of information that can be routinely provided generally are limited to the information contained within the NICS database. Justice noted, however, that NICS personnel can request additional information from a gun dealer or from a law enforcement agency processing a firearms permit application, if that information is requested by a counterterrorism official in the legitimate pursuit of establishing a match between the prospective gun buyer and a VGTOF record. Most state personnel told us that—at the request of counterterrorism officials—the state would contact the gun dealer or refer to the state permit application to obtain and provide all available information related to a NICS transaction. FBI counterterrorism officials told us that receiving all available personal identifying information and other details from terrorism-related NICS transactions could be useful in conducting investigations. As mentioned previously, for all potential or valid matches with terrorist records in VGTOF, NICS personnel are to begin their research by contacting TSC to verify the match. According to the procedures used by the FBI’s NICS Section, during the screening process, TSC will ask NICS staff to provide “all information available in the transaction,” including the location of the firearms dealer, in the pursuit of identifying a valid match. 
If a coordinated effort by TSC and FBI NICS Section staff determines that the subject of the NICS transaction appears to match a terrorist record in VGTOF—based on the name and other descriptors—TSC is to refer the NICS Section staff to the FBI’s Counterterrorism Division for follow-up. Further, the procedures note that there will be instances when NICS Section staff are contacted directly by a case agent, who will ask the NICS Section staff to share “additional information from the transaction or provide necessary information to complete the transaction.” The Department of Justice’s position is that information from the NICS database is not to be used for general law enforcement purposes. Justice noted, however, that information about a NICS transaction can be shared with law enforcement agents or other government agencies in the legitimate pursuit of establishing a match between the prospective gun buyer and a VGTOF record and in the search for information that could prohibit the firearm transfer. Justice explained that the purpose of NICS is to determine the lawfulness of proposed gun transactions, not to provide law enforcement agents with intelligence about lawful gun purchases by persons of investigative interest. Thus, Justice told us that as set forth in NICS procedures, all information about a transaction hitting on a VGTOF record can be shared with field personnel in the pursuit of establishing whether the person seeking to buy the gun is the same person with the terrorist record in VGTOF. Justice added that this is done during the search for prohibiting information about the person whose name hit on the VGTOF record. Further, Justice noted that information about NICS transactions also can be and routinely is shared by NICS with law enforcement agencies when the information indicates a violation, or suspected violation, of law or regulation. 
According to Justice, the types of information that can be routinely shared under NICS procedures generally are limited to the information collected by or contained within the NICS database. Specifically, Justice noted that—in verifying a match and determining whether prohibiting information exists—the following information can be routinely shared with TSC and counterterrorism officials: certain biographical data from the ATF Form 4473 collected from a gun dealer for purposes of running a NICS check (e.g., name, date of birth, race, sex, and state of residence); the specific date and time of the transaction; the name, street address, and phone number of the gun dealer; and the type of firearm (e.g., handgun or long gun), if relevant to helping confirm identity. Justice told us that additional information contained in the ATF Form 4473, such as residence address or the number and make and model of guns being sold, is not required or necessary to run a NICS check. Justice noted, however, that there are times when NICS personnel will contact a gun dealer and request a residence address on a person who is determined to be prohibited from purchasing firearms—such as when there is a hit on a prohibiting arrest warrant record—so that the information can be supplied to a law enforcement agency to enforce the warrant. Similarly, Justice told us that NICS procedures do not prohibit NICS personnel from requesting a residence address from a gun dealer—or from a law enforcement agency issuing a firearms permit in the case of a permit check—if that information is requested by a counterterrorism official in the pursuit of establishing a match between the gun buyer and the VGTOF record. Justice noted that gun dealers are not legally obligated under either NICS or ATF regulations to provide this information to NICS personnel but frequently do cooperate and provide the residence information when specifically requested by NICS personnel. 
Further, Justice told us that in cases in which a match is established and the field does not have the residence address or wants the address or other additional information on the Form 4473 regarding a “proceeded” transaction, FBI personnel can then coordinate with ATF to request the information from the gun dealer’s records without a warrant. Specifically, Justice cited provisions in the Gun Control Act of 1968, as amended, that give the Attorney General the authority to inspect or examine the records of a gun dealer without a warrant “in the course of a reasonable inquiry during the course of a criminal investigation of a person or persons other than the licensee.” Justice explained that unless the person is prohibited or there is an indication of a violation or potential violation of law, FBI NICS personnel do not perform this investigative function for the field. FBI field personnel can, however, get the investigative information from gun dealers through coordination with ATF. We recognize that current procedures allow NICS personnel to share “all information available in the transaction” with TSC or counterterrorism officials, in the pursuit of identifying a true match and the discovery of information that is prohibiting. However, given Justice’s interpretation, we believe that clarifying the procedures would help ensure that the maximum amount of allowable information from terrorism-related NICS transactions is consistently shared with counterterrorism officials. For example, under current procedures, it is not clear if the types of information that can or should be routinely shared are limited to the information contained within the NICS database or if additional information can be requested from the gun dealer or from the law enforcement agency processing a permit application. 
The FBI’s NICS Section did not maintain data on the types of information it shared with TSC or counterterrorism officials to (1) verify matches between NICS transactions and VGTOF records or (2) pursue the existence of firearm possession prohibitors. According to the NICS Section, such data are not maintained because NICS procedures provide for the sharing of all information available from the transaction, including the location of the gun dealer, in the pursuit of identifying a true match. The NICS Section told us that data required to initiate a NICS check—such as name, date of birth, sex, race, state of residence, citizenship, and purpose code (e.g., firearm check or permit check)—are captured in the NICS database and shared on every NICS transaction. A NICS Section official told us that the specific or approximate date and time of each transaction also is consistently shared with TSC. TSC did maintain data on the types of information shared by the NICS Section. Specifically, in verifying matches, TSC data showed that NICS Section staff shared basic identifying information about the prospective purchasers (e.g., name, date of birth, and Social Security number). However, TSC data showed that NICS Section staff did not consistently share the specific location or phone number of the gun dealer. According to the procedures used by the FBI’s NICS Section, in the pursuit of identifying a valid match, TSC will ask NICS staff to provide the location of the gun dealer. The NICS Section told us that this includes the specific location and phone number of the gun dealer. According to TSC officials, once the FBI’s NICS Section has shared information on an identity match and TSC verifies the match, the information provided by the NICS Section is forwarded to the FBI’s Counterterrorism Division. The Counterterrorism Division is to then contact the NICS Section to follow up on the match. 
If the NICS Section does not receive a response from the Counterterrorism Division, the NICS Section is to aggressively pursue contacting the division to resolve the transaction. Counterterrorism Division officials told us the information provided by the NICS Section is routinely shared with field agents familiar with the terrorist records in VGTOF. NICS Section officials also told us that for each transaction with a valid match to a VGTOF record, NICS Section staff talked directly to a field agent to pursue prohibiting information. The NICS Section did not maintain data on what, if any, additional information from the NICS transactions was shared during these discussions. However, NICS Section officials told us that in no cases did NICS staff contact the gun dealer to obtain—and provide to counterterrorism officials—additional information about the firearm transaction (e.g., information such as the prospective purchaser’s residence address) that was not submitted as part of the initial NICS check or already contained within NICS. The NICS Section was aware of one instance in which NICS staff was asked by a counterterrorism official to obtain address information to assist in determining whether a VGTOF hit was a valid match. In that case— involving a firearm permit check—the NICS staff was able to get residence address information from the law enforcement agency processing the permit application and provide it to the counterterrorism official. According to the FBI-disseminated procedures used by state agencies, in the process of contacting TSC, state staff are to share “all information available in the transaction,” including the location of the firearms dealer, in the pursuit of identifying a true match and determining the existence of prohibiting information. If TSC and state staff make an identity match, TSC is to refer the state staff to the FBI’s Counterterrorism Division for follow-up. 
Unlike the procedures used by the FBI’s NICS Section, the state agency procedures do not address whether there will be instances when state staff are to be contacted directly by a case agent, or what additional information from the NICS transaction could be shared during such contacts. Most state agency officials we contacted told us they interpreted the procedures as allowing them to share all available information related to a NICS transaction requested by counterterrorism officials, including any information contained on the forms used to purchase firearms or apply for firearms permits. Also, most state agency officials told us they were not aware of any restrictions or specific FBI guidance on the types of information that could or could not be shared with counterterrorism officials. According to the FBI’s NICS Section, the procedures used by state agencies note that in the process of contacting TSC, state staff will share all information available in the transaction in the pursuit of identifying a true match and the discovery of information that is prohibiting. As mentioned previously, we believe that clarifying the procedures would help ensure that the maximum amount of allowable information from terrorism-related NICS transactions is consistently shared with counterterrorism officials. The state agencies we contacted did not maintain data on the types of information they shared with TSC or counterterrorism officials to verify matches between NICS transactions and VGTOF records or pursue prohibiting information. However, in verifying matches, TSC data showed that state agency staff shared basic identifying information about the prospective purchasers (e.g., name, date of birth, and Social Security number). TSC data also showed that state agency staff did not consistently share the specific location or phone number of the gun dealer. 
TSC officials told us they basically can identify the date and time of a firearm transaction because TSC records the date and time NICS staff call TSC, which occurs very shortly after the gun dealer initiates the NICS check. TSC and FBI Counterterrorism Division officials told us they handle state agency referrals the same way as they handle referrals from the FBI’s NICS Section. Most of the state agency officials we contacted told us that if requested by counterterrorism officials (e.g., FBI field agents), state agency staff would either call the gun dealer or refer to the state permit application to obtain and provide all available information related to a NICS transaction. This information could include the prospective purchaser’s residence address and the type and number of firearms involved in the transaction. Officials in three states told us that state staff had shared the prospective purchaser’s residence address with FBI field agents. In one of the three cases, the field agent was interested in the residence address because the individual was in the country illegally and was wanted for deportation. In its written comments on a draft of this report, Justice noted that in the case of the individual who was in the country illegally, because the individual was a prohibited person, there was no restriction on obtaining and providing the additional information about the denied transaction to a law enforcement agency after the identity was already established. Justice also noted that regarding the sharing of information from state firearm permit applications, there is no Brady Act limitation on the state supplying transaction information to field agents for investigative purposes after identity is established, as the use and dissemination of state firearm permit information is governed by state law. 
According to officials from the FBI’s Counterterrorism Division, personal identifying information and other details about NICS transactions with valid matches to terrorist records in VGTOF could be useful to FBI field agents in conducting terrorism investigations. Specifically, the officials noted the potential usefulness of locator information, such as the prospective purchaser’s residence address, the date and time of the transaction, and the specific location of the gun dealer at which the transaction took place. The officials also told us that information on the type of firearm(s) involved in the transaction and whether the transaction involved the purchase of multiple firearms could also be useful to field agents. According to one official, in general, agents would want as much information as possible that could assist investigations. The FBI’s NICS Section noted, however, that NICS procedures provide for sharing information only when it is relevant to determining a true match between a NICS transaction and a terrorist record in VGTOF. Although the Attorney General and the FBI ultimately are responsible for managing NICS, the FBI has not routinely monitored the states’ handling of terrorism-related background checks. For example, the FBI does not know the number and results of terrorism-related NICS transactions handled by state agencies since June 30, 2004. Also, the FBI has not routinely assessed the extent to which applicable state agencies have implemented and followed procedures for handling NICS transactions involving terrorist records in VGTOF. The FBI’s plans call for conducting audits of the states’ compliance with the procedures every 3 years. Our work revealed several issues state agencies have encountered in handling NICS transactions involving terrorist records in VGTOF, including delays in implementing procedures and a mishandled transaction. 
The FBI has not routinely monitored the states’ handling of NICS transactions involving terrorist records in VGTOF. For example, in response to our request for information—covering February 3 through June 30, 2004—the FBI’s NICS Section reviewed all state NICS transactions that hit on VGTOF records during this period to identify potential matches. We used this information to follow up with state agencies and create table 2 in this report. However, since June 30, 2004, the FBI’s NICS Section has not tracked or otherwise attempted to collect information on the number of NICS transactions handled by state agencies that have resulted in valid matches with terrorist records in VGTOF or whether such transactions were approved or denied. NICS Section officials told us that while the NICS Section does not have aggregate data, FBI officials at TSC and the FBI’s Counterterrorism Division are aware of valid-match transactions that state agencies handle. Given the significance of valid matches, we believe it would be useful for the FBI’s NICS Section to have aggregate data on the number and results of terrorism-related NICS transactions handled by state agencies, particularly if the data indicate that known or suspected terrorists may be receiving firearms. In response to our inquiries, in October 2004, Justice and FBI NICS Section officials told us they plan to study the need for information on state NICS transactions with valid matches to terrorist records in VGTOF and the means by which such information could be obtained. Also, while the FBI has taken steps to notify state agencies about the revised procedures for handling NICS transactions involving VGTOF records—including periodic teleconferences and presentations at a May 2004 NICS User Conference—the FBI has not routinely assessed the extent to which states have implemented and followed the procedures. 
According to the FBI, the NICS Section performed an assessment of all NICS transactions involving VGTOF records from February 3, 2004 (the day the block on VGTOF records was removed) to March 22, 2004, in order to assess the extent to which the states implemented and followed procedures. For example, a NICS Section official told us that NICS personnel called state agencies to make sure they contacted TSC to verify matches and also contacted counterterrorism officials to pursue prohibiting information. However, according to the NICS Section, the assessment concluded on March 23, 2004, because NICS Section personnel could not fully assess the reliability or accuracy of the information provided by the states. Officials from two states told us that additional FBI oversight could help ensure that applicable procedures are followed. One of the state officials told us that such FBI oversight could be particularly important since NICS transactions with valid matches to VGTOF records are rare and there could be turnover of state personnel who process the transactions. As part of routine state audits the FBI conducts every 3 years, the FBI plans to assess the states’ handling of terrorism-related NICS transactions. Specifically, every 3 years, the FBI plans to audit whether designated state and local criminal justice agencies are utilizing the written procedures for processing NICS transactions involving VGTOF records. Moreover, for states with a decentralized structure for processing NICS transactions— i.e., states with multiple local law enforcement entities that conduct background checks (rather than one central agency)—the goal of the audit is to determine if local law enforcement agencies conducting the checks have in fact received the written procedures, and if so, whether the procedures are being followed. 
However, given that the relevant NICS transactions involve known or suspected terrorists who could pose homeland security risks, we believe that a 3-year audit cycle is not sufficient. Also, under a 3-year audit cycle, information from NICS transactions with valid matches to terrorist records in VGTOF may have been destroyed pursuant to federal or state requirements and therefore may not be available for review. Further, a 3-year audit cycle may not be sufficient to help ensure the timely identification and resolution of issues state agencies may encounter in handling terrorism-related NICS transactions. State agencies have encountered several issues in handling NICS transactions involving terrorist records in VGTOF. Specifically, of the 11 states we contacted, 9 states experienced one or more of the following issues: 4 states had delays in implementing procedures, 3 states questioned whether state task forces were notified, 2 states had problems receiving responses from FBI field agents, 1 state mishandled a transaction, and 3 states raised concerns about notifications. Four of the 11 states we contacted had delays of 3 months or more in implementing NICS procedures for processing transactions that hit on VGTOF records—procedures that were to have been effective on February 3, 2004. Each of the 4 states processed one NICS transaction with a valid match to terrorist records in VGTOF before becoming aware of and implementing the new procedures. In processing the transactions, our work indicated that at least 3 of the 4 states did not contact TSC, as required by the procedures. The fourth state did not have information on how the transaction was processed. Although our work indicated that the FBI provided the new procedures to state agencies in January 2004, 1 of the 4 states did not implement the procedures until after a state official attended the May 2004 NICS User Conference.
Officials in the other 3 states were not aware of the new procedures at the time we made our initial contacts with them in June 2004 (2 states) and August 2004 (1 state). Subsequent discussions with officials in 2 of the 3 states indicated the new procedures have been implemented. In November 2004, an official in the third state told us the procedures had not yet been implemented. Officials in 3 of the 11 states told us they believed their respective state’s homeland security or terrorism task forces should be notified when a suspected terrorist attempts to purchase a firearm in their state, but the officials said they did not know if TSC or the FBI provided such notices. Officials from the FBI’s Counterterrorism Division did not know the extent to which FBI field agents notified state and local task forces about terrorism-related NICS transactions, but the officials told us that such notifications likely are made on a need-to-know basis. Justice and FBI officials acknowledged that this issue warrants further consideration. Officials in 2 of the 11 states told us that in the pursuit of prohibiting information, their respective states had problems receiving responses from FBI field agents. These problems led to delays in each state’s ability to resolve one NICS transaction with a valid match to a terrorist record in VGTOF. According to state officials, under the respective state’s laws, the two transactions were not allowed to proceed during the delays, even though prohibiting information had not been identified. The two transactions were resolved as follows: In response to our inquiries, in November 2004, an analyst in one of the states contacted an FBI field agent, who told the analyst that the subject of the background check had been removed from VGTOF. A state official told us the NICS transaction was in a delay status for nearly 10 months. 
Regarding the other state, the NICS transaction was in an unresolved status for a period of time specified by state law, after which it was automatically denied. According to state officials, a state analyst made initial contact with an FBI field agent, who said he would call the analyst back. The state officials told us that the analyst made several follow-up calls to the agent without receiving a response. As of November 2004, the FBI had not responded to our request for information regarding the issues or circumstances as to why the FBI field agents had not contacted the two states’ analysts. One of the 11 states mishandled a NICS transaction with a valid match to a terrorist record in VGTOF. Specifically, although the state received notification of the VGTOF hit, the information was not relayed to state staff responsible for processing NICS transactions. Consequently, the transaction was approved without contacting TSC or FBI counterterrorism officials. We informed the state that the FBI’s NICS Section had identified the transaction as matching a VGTOF record. Subsequently, state personnel contacted TSC and an FBI field agent, who determined that prohibiting information did not exist. State officials told us that to help prevent future oversights, the state has revised its internal procedures for handling NICS transactions that hit on VGTOF records. Officials in 3 of the 11 states told us that the automatic (computer-generated) notification of NICS transactions that hit on a certain (sensitive) category of terrorist records in VGTOF is not adequately visible to system users and could be missed by state personnel processing NICS transactions. The FBI has taken steps to address this issue and plans to implement computer system enhancements in June 2005.
Under revised procedures effective February 3, 2004, all NICS transactions with potential or valid matches to terrorist watch list records in VGTOF are automatically delayed to give NICS personnel at the FBI and applicable state agencies an opportunity to further research the transactions for prohibiting information. The primary purpose of the revised procedures is to better ensure that known or suspected members of terrorist organizations who have disqualifying factors do not receive firearms in violation of federal or state law. An additional benefit has been to support the nation’s war against terrorism. Thus, it is important that the maximum amount of allowable information from these background checks be consistently shared with counterterrorism officials. However, our work revealed that federal and state procedures for handling terrorism-related NICS transactions do not clearly address the specific types of information that can or should be routinely provided to counterterrorism officials or the sources from which such information can be obtained. For example, under current procedures, it is not clear if certain types of potentially useful information, such as the residence address of the prospective purchaser, can or should be routinely shared. Also, under current procedures, it is not clear if FBI and state personnel can routinely call a gun dealer or a law enforcement agency processing a permit application to obtain and provide counterterrorism officials with information not submitted as part of the initial NICS check. Further, some types of information—such as the specific location of the dealer from which the prospective purchaser attempted to obtain the firearm—have not consistently been shared with counterterrorism officials. Consistently sharing the maximum amount of allowable information could provide counterterrorism officials with valuable new information about individuals on terrorist watch lists. 
The FBI has plans that call for conducting audits every 3 years of the states’ handling of terrorism-related NICS transactions. However, given that these NICS background checks involve known or suspected terrorists who could pose homeland security risks, more frequent FBI oversight or centralized management is needed. The Attorney General and the FBI ultimately are responsible for managing NICS, and the FBI is a lead law enforcement agency responsible for combating terrorism. However, the FBI does not have aggregate data on the number of NICS transactions involving known or suspected members of terrorist organizations that have been approved or denied by state agencies to date. Also, the FBI has not assessed the extent to which the states have implemented and followed applicable procedures for handling terrorism-related NICS transactions. Moreover, under a 3-year audit cycle, relevant information from the background checks may have been destroyed pursuant to federal or state laws and therefore may not be available for review. Further, more frequent FBI oversight or centralized management would help address other types of issues we identified—such as several states’ delays in implementing procedures and one state’s mishandling of a terrorism-related NICS transaction. Proper management of NICS transactions with valid matches to terrorist watch list records is important. Thus, we recommend that the Attorney General (1) clarify procedures to ensure that the maximum amount of allowable information from these background checks is consistently shared with counterterrorism officials and (2) either implement more frequent monitoring by the FBI of applicable state agencies or have the FBI centrally manage all terrorism-related NICS background checks. We requested comments on a draft of this report from the Department of Justice. Also, we provided a draft of sections of this report for comment to applicable agencies in the 11 states we contacted.
On January 7, 2005, Justice provided us written comments, which were signed by the Acting Assistant Director of the FBI’s Criminal Justice Information Services Division. According to Justice and FBI officials, the draft report was provided for review to Justice’s Office of Legal Policy, the FBI’s NICS Section (within the Criminal Justice Information Services Division), the FBI’s Counterterrorism Division, and the Terrorist Screening Center. Justice agreed with our two recommendations. Specifically, regarding our recommendation to clarify NICS procedures for sharing information from NICS transactions with counterterrorism officials, Justice stated that (1) the written procedures used by the FBI’s NICS Section will be revised and (2) additional written guidance should be provided to applicable state agencies. Regarding our recommendation for more frequent FBI oversight or centralized management of terrorism-related NICS background checks, Justice has requested that the FBI report to the department by the end of January 2005 on the feasibility of having the FBI’s NICS Section process all NICS transactions involving VGTOF records. In its written comments, Justice also provided (1) a detailed discussion of the Brady Act’s provisions relating to the retention and use of NICS information and (2) clarifications on the states’ handling of terrorism-related NICS transactions. These comments have been incorporated in this report where appropriate. The full text of Justice’s written comments is reprinted in appendix III. Officials from 7 of the 11 states we contacted told us they did not have any comments. Officials from the remaining 4 states did not respond to our request for comments. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to interested congressional committees and subcommittees.
We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or [email protected], or my Assistant Director, Danny R. Burton, at (214) 777-5600 or [email protected]. Other key contributors to this report were Eric Erdman, Lindy Coe-Juell, David Alexander, Katherine Davis, and Geoffrey Hamilton. Our overall objective was to review how the Federal Bureau of Investigation’s (FBI) National Instant Criminal Background Check System (NICS) handles checks of prospective firearms purchasers that hit on and are confirmed to match terrorist watch list records. The FBI and designated state and local criminal justice agencies use NICS to determine whether or not individuals seeking to purchase firearms or apply for firearms permits are prohibited by law from receiving or possessing firearms. Specifically, we addressed the following questions: What terrorist watch lists are searched during NICS background checks? How many NICS transactions have resulted in valid matches with terrorist watch list records? For valid matches, what are federal and state procedures for sharing NICS-related information with federal counterterrorism officials? To what extent does the FBI monitor the states’ handling of NICS transactions with valid matches to terrorist watch list records? What issues, if any, have state agencies encountered in handling such transactions? Also, we obtained summary information on federal and state requirements for retaining information related to NICS transactions with valid matches to terrorist watch list records (see app. II). In performing our work, we reviewed applicable federal laws and regulations, FBI policies and procedures, and relevant statistics. 
We interviewed federal officials at and reviewed documentation obtained from the Department of Justice’s Office of Legal Policy; the FBI’s Counterterrorism Division; the FBI’s NICS Section and Criminal Justice Information Services Division at Clarksburg, West Virginia; and the Terrorist Screening Center (TSC), which is the multiagency center responsible for consolidating federal terrorist watch lists. Generally, our analyses focused on background checks processed by the FBI’s NICS Section and 11 states during the period February 3, 2004 (when the FBI’s procedures for handling terrorism-related NICS transactions became effective), through June 30, 2004. The 11 states we contacted (California, Colorado, Florida, Hawaii, Illinois, Massachusetts, North Carolina, Pennsylvania, Tennessee, Texas, and Virginia) were those that FBI data indicated—and the states subsequently confirmed—had processed NICS checks (during the period February 3 through June 30, 2004) that resulted in one or more valid matches with terrorist watch list records. To determine what terrorist watch list records are searched during NICS background checks, we interviewed officials from the FBI’s NICS Section and the Criminal Justice Information Services Division—the FBI division responsible for maintaining the Violent Gang and Terrorist Organization File (VGTOF)—and obtained relevant documentation. Also, we interviewed TSC officials and obtained documentation and other relevant information on TSC’s efforts to consolidate federal terrorist watch list records into a single database. Eligible records from TSC’s consolidated database are shared with VGTOF and searched during NICS background checks. To determine the number of NICS transactions that resulted in valid matches with terrorist records in VGTOF—during the period February 3 through June 30, 2004—we interviewed officials from the FBI’s NICS Section and reviewed FBI data. 
The FBI did not have comprehensive or conclusive information on transactions handled by state agencies, but FBI data indicated that 12 states (California, Colorado, Florida, Georgia, Hawaii, Illinois, Massachusetts, North Carolina, Pennsylvania, Tennessee, Texas, and Virginia) likely had processed one or more NICS transactions with a valid match to terrorist records in VGTOF during this period. We interviewed agency officials in the 12 states to corroborate the FBI data and to obtain additional information about the related background checks (e.g., whether the transactions were allowed to proceed or were denied). We also worked with officials from the FBI’s NICS Section and state agencies to resolve any inconsistencies. For example, our work revealed that 1 of the 12 states (Georgia) had not processed a terrorism-related NICS transaction during the period we reviewed. As such, our subsequent interviews and analysis focused on background checks processed by the FBI’s NICS Section and the remaining 11 states. To determine federal and state procedures for sharing NICS-related information with federal counterterrorism officials, we reviewed applicable federal laws and regulations, including the Brady Handgun Violence Prevention Act and NICS regulations. We also reviewed FBI and state procedures for handling NICS transactions involving terrorist records in VGTOF—procedures that were developed and disseminated under the Department of Justice’s direction. We interviewed officials from the Department of Justice’s Office of Legal Policy, the FBI’s NICS Section, and the 11 states to determine the scope and types of NICS-related information that could be shared with federal counterterrorism officials under applicable procedures. 
Further, for NICS transactions with valid matches to terrorist records in VGTOF—during the period February 3 through June 30, 2004—we interviewed officials from the FBI’s NICS Section and Counterterrorism Division, TSC, and the 11 states to determine the types of NICS-related information that were shared with counterterrorism officials. To determine the extent to which the FBI has monitored the states’ handling of NICS transactions involving VGTOF records, we interviewed officials from the Department of Justice’s Office of Legal Policy, the FBI’s NICS Section, and state agencies. We reviewed documents the FBI used to notify state agencies about the procedures for handling terrorism-related NICS transactions. We also reviewed data and other information the FBI maintained on transactions handled by the states. Further, we obtained information on the FBI’s plans to periodically audit whether designated state and local criminal justice agencies are utilizing the written procedures for processing NICS transactions involving VGTOF records. To identify issues state agencies have encountered in handling terrorism-related NICS transactions, we interviewed officials from the 11 states. For identified issues, we interviewed officials from the Department of Justice and the FBI’s NICS Section and Counterterrorism Division to discuss the states’ issues and obtain related information. To determine federal and state requirements for retaining information from terrorism-related NICS transactions, we interviewed officials from the FBI’s NICS Section and state agencies and reviewed applicable federal laws and regulations. We also reviewed a Department of Justice report that addressed the length of time the FBI and applicable state agencies retain information related to firearm background checks.
Further, we interviewed officials from the FBI and reviewed relevant FBI documents to determine how the federal 24-hour destruction requirement for NICS records of allowed firearms transfers would affect the FBI’s NICS Section and state policies and procedures. We performed our work from April through December 2004 in accordance with generally accepted government auditing standards. We were unable to fully assess the reliability or accuracy of the data regarding valid matches with terrorist records in VGTOF because the data related to ongoing terrorism investigations. However, we discussed the sources of data with FBI, TSC, and state agency officials and worked with them to resolve any inconsistencies. We determined that the data were sufficiently reliable for the purposes of this review. The results of our interviews with officials in the 11 states may not be representative of the views and opinions of others nationwide. On July 21, 2004, the FBI’s NICS Section implemented a provision in federal law that requires any personal identifying information in the NICS database related to allowed firearms transfers to be destroyed within 24 hours after the FBI advises the gun dealer that the transfer may proceed. The law does not provide an exception for retaining information from NICS transactions with valid matches to terrorist records in VGTOF. Thus, information in the NICS database from such transactions also is subject to the federal 24-hour destruction provision. Before the 24-hour destruction provision took effect, federal regulations permitted the retention of all information related to allowed firearms transfers for up to 90 days. The federal 24-hour retention statute does not specifically address whether identifying information in the NICS database related to permit checks—which do not involve gun dealers—is subject to 24-hour destruction. According to the FBI’s NICS Section, the 24-hour destruction requirement does not apply to permit checks.
Rather, information related to permit checks is maintained in the NICS database for up to 90 days after the background check is initiated. In implementing the 24-hour destruction provision, the FBI’s NICS Section revised its policies and procedures to allow for the retention of nonidentifying information related to each proceeded background check for up to 90 days (e.g., information about the gun dealer). According to the FBI, by retaining the nonidentifying information, the FBI’s NICS Section can initiate firearm retrieval actions when new information reveals that an individual who was approved to purchase a firearm should not have been. The nonidentifying information is retained for all NICS transactions that are allowed to proceed, including transactions involving subjects of terrorist watch lists. Also, in implementing the 24-hour destruction provision, the FBI’s NICS Section created a new internal classification system for transactions that are “open.” Specifically, if NICS staff cannot make a final determination (i.e., proceed or denied) on a transaction within 3 business days, the NICS Section is to automatically change the status to open. The NICS Section maintains personal identifying information and other details related to open transactions until either (1) a final determination on the transaction is reached or (2) the expiration of the retention period for open transactions, which is a period of no more than 90 days. Regarding terrorism-related NICS transactions, the open designation would be used, for example, if NICS Section staff did not receive responses from FBI field agents within 3 business days. The 24-hour destruction provision did not affect federal policies for retaining NICS records related to denied firearms transactions. Under provisions in NICS regulations, personal identifying information and other details related to denied firearms transactions are retained indefinitely. 
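The federal retention rules described above amount to a small decision table. The sketch below is purely illustrative—the function name, parameter names, and status labels are assumptions, not part of any actual FBI system—but it captures the distinctions drawn in the preceding paragraphs: identifying information from allowed dealer transfers is destroyed within 24 hours, permit checks and open transactions are retained up to 90 days, and denied transactions are retained indefinitely.

```python
# Illustrative sketch of the federal NICS record-retention rules described
# above. All names here are hypothetical; this is not an actual FBI system.

def retention_policy(check_type: str, outcome: str) -> str:
    """Return the retention rule for personal identifying information
    in the NICS database, per the rules summarized in this report."""
    if check_type == "dealer" and outcome == "proceed":
        # Federal law: destroy identifying information within 24 hours of
        # advising the dealer the transfer may proceed. (Nonidentifying
        # details, e.g., about the dealer, may be kept up to 90 days.)
        return "destroy within 24 hours"
    if check_type == "permit":
        # Permit checks do not involve gun dealers; per the FBI's NICS
        # Section, the 24-hour destruction requirement does not apply.
        return "retain up to 90 days"
    if outcome == "open":
        # No final determination within 3 business days: retained until
        # resolved or the 90-day open-transaction period expires.
        return "retain up to 90 days"
    if outcome == "denied":
        # Under NICS regulations, records of denials are kept indefinitely.
        return "retain indefinitely"
    raise ValueError("unrecognized check type or outcome")
```

Note that these are the federal rules only; as the report goes on to explain, state retention requirements vary and are not subject to the 24-hour provision when records are kept under independent state law.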
Also, according to Justice and FBI officials, there are no limitations on the retention of NICS information by TSC or counterterrorism officials, who received the information to verify identities and determine whether firearm-possession prohibitors exist. Among the states, requirements vary for retaining records of allowed transfers of firearms. Some states purge a firearm transaction record almost immediately after the firearm sale is approved, while other states retain such records for longer periods of time. Under NICS regulations, state records are not subject to the federal 24-hour destruction requirement if the records are part of a system created and maintained pursuant to independent state law. Thus, states with their own state law provisions may retain records of allowed firearms transfers for longer than 24 hours. The retention of state records related to denied firearms transactions varies. | Membership in a terrorist organization does not prohibit a person from owning a gun under current law. Thus, during presale screening of prospective firearms purchasers, the National Instant Criminal Background Check System historically did not utilize terrorist watch list records. However, for homeland security and other purposes, the Federal Bureau of Investigation (FBI) and applicable state agencies began receiving notices (effective February 3, 2004) when such screening involved watch lists records. GAO determined (1) how many checks have resulted in valid matches with terrorist watch list records, (2) procedures for providing federal counterterrorism officials relevant information from valid-match background checks, and (3) the extent to which the FBI monitors or audits the states' handling of such checks. During the period GAO reviewed--February 3 through June 30, 2004--a total of 44 firearm-related background checks handled by the FBI and applicable state agencies resulted in valid matches with terrorist watch list records. 
Of this total, 35 transactions were allowed to proceed because the background checks found no prohibiting information, such as felony convictions, illegal immigrant status, or other disqualifying factors. Federal and state procedures--developed and disseminated under the Department of Justice's direction--do not address the specific types of information from valid-match background checks that can or should be provided to federal counterterrorism officials or the sources from which such information can be obtained. Justice officials told GAO that information from the background check system is not to be used for general law enforcement purposes but can be shared with law enforcement agents or other government agencies in the legitimate pursuit of establishing a match between the prospective gun buyer and a terrorist watch list record and in the search for information that could prohibit the firearm transfer. Most state agency personnel GAO contacted were not aware of any restrictions or limitations on providing valid-match information to counterterrorism officials. FBI counterterrorism officials told GAO that routinely receiving all available personal identifying information and other details from valid-match background checks could be useful in conducting investigations. As part of routine audits the FBI conducts every 3 years, the Bureau plans to assess the states' handling of firearm-related background checks involving terrorist watch list records. However, given that these background checks involve known or suspected terrorists who could pose homeland security risks, more frequent FBI oversight or centralized management would help ensure that suspected terrorists who have disqualifying factors do not obtain firearms in violation of the law. The Attorney General and the FBI ultimately are responsible for managing the background check system, although they have yet to assess the states' compliance with applicable procedures for handling terrorism-related checks. 
Also, more frequent FBI oversight or centralized management would help address other types of issues GAO identified--such as several states' delays in implementing procedures and one state's mishandling of a terrorism-related background check. |
Among other impacts, climate change could threaten coastal areas with rising sea levels, alter agricultural productivity, and increase the intensity and frequency of severe weather events such as floods, drought, and hurricanes that have cost the nation tens of billions of dollars in damages over the past decade. For example, Congress provided around $60 billion in budget authority for disaster assistance after Superstorm Sandy. These impacts pose significant financial risks, but the federal government is not well positioned to address this fiscal exposure, partly because of the complex nature of the issue. Given these challenges and the nation’s fiscal condition, in February 2013, we added Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks to our list of high-risk areas. Climate-related impacts will result in increased fiscal exposures for the federal government from many areas, including, but not limited to its role as (1) the insurer of property and crops vulnerable to climate impacts, (2) the provider of aid in response to disasters, (3) the owner or operator of extensive infrastructure such as defense facilities and federal property vulnerable to climate impacts, and (4) the provider of data and technical assistance to state and local governments responsible for managing the impacts of climate change on their activities. The financial risks from two important federal insurance programs—the National Flood Insurance Program (NFIP) administered by the Federal Emergency Management Agency (FEMA) and the Federal Crop Insurance Corporation (FCIC) administered by the United States Department of Agriculture (USDA)—create a significant fiscal exposure. In 2012, the NFIP had property coverage of over $1.2 trillion and the FCIC had crop coverage of almost $120 billion. NFIP has been on our High Risk List since March 2006 because of concerns about its long-term financial solvency and related operational issues.
While Congress and FEMA intended to finance NFIP with premiums collected from policyholders and not with tax dollars, the program was, by design, not intended to pay for itself. As of December 2013, FEMA’s debt from flood insurance payments totaled about $24 billion—up from $17.8 billion before Superstorm Sandy—and FEMA had not repaid any principal on the loan since 2010. Further, the federal government’s crop insurance costs have increased in recent years for a variety of reasons, more than doubling from $3.4 billion in fiscal year 2001 to $7.6 billion in fiscal year 2012. In March 2007, we reported that both of these programs’ exposure to weather-related losses had grown substantially, and that FEMA and USDA had done little to develop the information necessary to understand their long-term exposure resulting from climate change (GAO, Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades Are Potentially Significant, GAO-07-285 (Washington, D.C.: Mar. 16, 2007)). We recommended that the Secretaries of Agriculture and Homeland Security analyze the potential long-term fiscal implications of climate change on federal insurance programs and report their findings to Congress. The agencies agreed with the recommendation and contracted with experts to study their programs’ long-term exposure from climate change. Both agencies have incorporated the findings of the reports into their climate change adaptation plans—as directed by instructions and guidance implementing Executive Order 13514 on Federal Leadership in Environmental, Energy, and Economic Performance. We are currently examining how these programs account for climate change in their activities. The Biggert-Waters Flood Insurance Reform Act of 2012 requires FEMA to consider, among other things, topography, coastal erosion areas, changing lake levels, future changes in sea levels, and intensity of hurricanes in updating its flood maps.
The Biggert-Waters Act also reauthorized NFIP through 2017 and made other significant changes to the program, including removing subsidized premium rates for certain properties, eliminating the grandfathering of prior premium rates when a property is remapped, and requiring FEMA to create a reserve fund. While these changes may help put NFIP on a path to financial solvency, their ultimate effect is not yet known. In addition, the program faces challenges in making the changes. For example, implementation of certain changes was delayed by provisions in the Consolidated Appropriations Act of 2014, and S. 1926, which passed the Senate on January 30, 2014, would delay the implementation of certain rate increases contained in the Biggert-Waters Act. As we have previously reported, such delays to rate increases may help address affordability concerns, but they would likely continue to increase NFIP’s long-term burden on taxpayers. In the event of a major disaster, federal funding for response and recovery comes from the Disaster Relief Fund managed by FEMA, and disaster aid programs of other participating federal agencies. The federal government does not fully budget for these costs, thus creating a large fiscal exposure. We reported, in September 2012, that disaster declarations have increased over recent decades to a record of 98 in fiscal year 2011 compared with 65 in 2004. Over that period, FEMA obligated over $80 billion in federal assistance for disasters. We also found that FEMA has had difficulty implementing long-standing plans to assess national preparedness capabilities and that FEMA’s indicator for determining whether to recommend that a jurisdiction receive disaster assistance does not accurately reflect the ability of state and local governments to respond to disasters. 
Had FEMA adjusted its indicator to reflect changes in personal income and inflation, 44 percent and 25 percent fewer disaster declarations, respectively, would have met the threshold for public assistance during fiscal years 2004 through 2011. In September 2012, we recommended, among other things, that FEMA develop a methodology to more accurately assess a jurisdiction’s capability to respond to and recover from a disaster without federal assistance. FEMA concurred with this recommendation. The federal government owns and operates hundreds of thousands of buildings and facilities that a changing climate could affect. For example, in its 2010 Quadrennial Defense Review, the Department of Defense (DOD) recognized the risk to its facilities posed by climate change, noting that the department must assess potential impacts and adapt as required. We plan to report later this year on DOD’s management of climate change risks at over 500,000 defense facilities. In addition, the federal government manages about 650 million acres––nearly 30 percent of the land in the United States––for a variety of purposes, such as recreation, grazing, timber, and fish and wildlife. In 2007, we recommended that the Secretaries of Agriculture, Commerce, and the Interior develop guidance for their resource managers that explains how they expect to address the effects of climate change, and the three departments generally agreed with this recommendation. However, as we showed in our May 2013 report, resource managers still struggled to incorporate climate-related information into their day-to-day activities, despite the creation of strategic policy documents and high-level agency guidance. The federal government invests billions of dollars annually in infrastructure projects that state and local governments prioritize and supervise. In total, the United States has about 4 million miles of roads and 30,000 wastewater treatment and collection facilities.
According to a 2010 Congressional Budget Office report, total public spending on transportation and water infrastructure exceeds $300 billion annually, with roughly 25 percent of this amount coming from the federal government and the rest coming from state and local governments. These projects have large up-front capital investments and long lead times that require decisions about addressing climate change before its potential effects are discernable. The federal government plays a limited role in project-level planning for transportation and wastewater infrastructure, and state and local efforts to consider climate change in infrastructure planning have occurred primarily on a limited, ad hoc basis. Infrastructure is typically designed to withstand and operate within historical climate patterns. However, according to NRC, as the climate changes and historical patterns—in particular, those related to extreme weather events—no longer provide reliable predictions of the future, infrastructure designs may underestimate the climate-related impacts to infrastructure over its design life, which can range as long as 50 to 100 years. These impacts can increase the operating and maintenance costs of infrastructure or decrease its life span, or both, leading to social, economic, and environmental impacts. For example, the National Oceanic and Atmospheric Administration estimates that, within 15 years, segments of Louisiana State Highway 1— the only road access to Port Fourchon, which services virtually all deep- sea oil operations in the Gulf of Mexico, or about 18 percent of the nation’s oil supply—will be inundated by tides an average of 30 times annually due to relative sea level rise. Flooding of this road effectively closes this port. Because of Port Fourchon’s significance to the oil industry at the national, state, and local levels, the U.S. 
Department of Homeland Security, in July 2011, estimated that a closure of 90 days could reduce the national gross domestic product by $7.8 billion. Figure 1 shows Louisiana State Highway 1 leading to Port Fourchon. Despite the risks posed by climate change, we found, in April 2013, that infrastructure decision makers have not systematically incorporated potential climate change impacts in planning for roads, bridges, and wastewater management systems because, among other factors, they face challenges identifying and obtaining available climate change information best suited for their projects. Even where good scientific information is available, it may not be in the actionable, practical form needed for decision makers to use in planning and designing infrastructure. Such decision makers work with traditional engineering processes, which often require very specific and discrete information. Moreover, local decision makers—who, in this case, specialize in infrastructure planning, not climate science—need assistance from experts who can help them translate available climate change information into something that is locally relevant. In our site visits to a limited number of locations where decision makers overcame these challenges—including Louisiana State Highway 1—state and local officials emphasized the role that the federal government could play in helping to increase their resilience. Any effective adaptation strategy must recognize that state and local governments are on the front lines in both responding to immediate weather-related disasters and in preparing for the potential longer-term impacts associated with climate change. We reported, in October 2009, that insufficient site-specific data—such as local temperature and precipitation projections—complicate state and local decisions to justify the current costs of adaptation efforts for potentially less certain future benefits.
We recommended that the appropriate entities within the Executive Office of the President develop a strategic plan for adaptation that, among other things, identifies mechanisms to increase the capacity of federal, state, and local agencies to incorporate information about current and potential climate change impacts into government decision making. USGCRP’s April 2012 strategic plan for climate change science recognizes this need by identifying enhanced information management and sharing as a key objective. GAO, Climate Change Adaptation: Strategic Federal Planning Could Help Government Officials Make More Informed Decisions, GAO-10-113 (Washington, D.C.: Oct. 7, 2009). In April 2013, we also recommended that entities designated by the Executive Office of the President work with relevant agencies to identify for decision makers the “best available” climate-related information for infrastructure planning and update this information over time, and to clarify sources of local assistance for incorporating climate-related information and analysis into infrastructure planning, and communicate how such assistance will be provided over time. The responsible entities have not directly responded to these recommendations, but the President’s June 2013 Climate Action Plan and November 2013 Executive Order 13653 on Preparing the United States for the Impacts of Climate Change drew attention to these issues. For example, the Executive Order directs numerous federal agencies, supported by USGCRP, to work together to develop and provide authoritative, easily accessible, usable, and timely data, information, and decision-support tools on climate preparedness and resilience. We also have work under way exploring, among other things, the risk extreme weather events and climate change pose to defense facilities, public health, agriculture, public transit systems, and federal insurance programs.
This work—within the framework of the February 2013 high- risk designation—may identify other steps the federal government could take to limit its fiscal exposure and make our communities more resilient to extreme weather events. Chairman Carper, Ranking Member Coburn, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions you have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Alfredo Gomez, Director; Michael Hix, Assistant Director; and Heather Chartier, Diantha Garms, Cindy Gilbert, Richard Johnson, Joseph Dean “Pep” Thompson, and Lisa Van Arsdale made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | According to the United States Global Change Research Program, the costs and impacts of weather disasters resulting from floods, drought, and other events are expected to increase in significance as previously “rare” events become more common and intense. These impacts pose financial risks to the federal government. While it is not possible to link any individual weather event to climate change, these events provide insight into the potential climate-related vulnerabilities the United States faces. 
GAO focuses particular attention on government operations it identifies as posing a “high risk” to the American taxpayer and, in February 2013, added to its High Risk List the area Limiting the Federal Government's Fiscal Exposure by Better Managing Climate Change Risks. GAO's past work identified a variety of fiscal exposures—responsibilities, programs, and activities that may either legally commit the federal government to future spending or create the expectation for future spending in response to extreme weather events. This testimony is based on reports GAO issued from March 2007 to November 2013 that address these issues. GAO is not making new recommendations but made numerous recommendations in prior reports on these topics, which are in varying states of implementation by the Executive Office of the President and relevant federal agencies. The federal government has opportunities to limit its exposure and increase the nation's resilience to extreme weather events. Since 1980, the U.S. has experienced 151 weather disasters with damages exceeding $1 billion each. This testimony focuses on four areas where the government could limit its fiscal exposure. Property and crop insurance. The financial risks from two federal insurance programs—the National Flood Insurance Program administered by the Federal Emergency Management Agency (FEMA) and the Federal Crop Insurance Corporation (FCIC)—create a significant fiscal exposure. In 2012, the NFIP had property coverage of over $1.2 trillion and the FCIC had crop coverage of almost $120 billion. As of December 2013, FEMA's debt from flood insurance payments totaled about $24 billion. For various reasons, FCIC's costs more than doubled from $3.4 billion in fiscal year 2001 to $7.6 billion in fiscal year 2012. In 2007, GAO found that the agencies responsible for these programs needed to develop information on their long-term exposure to climate change.
The Biggert-Waters Flood Insurance Reform Act of 2012 requires FEMA to use information on future changes in sea levels and other factors in updating flood maps used to set insurance rates. Private insurers are also studying how to include climate change in rate setting. GAO is currently examining the extent to which private and federal insurance programs address risks from climate change. Disaster aid. The federal government does not fully budget for recovery activities after major disasters, thus creating a large fiscal exposure. GAO reported in 2012 that disaster declarations have increased to a record 98 in fiscal year 2011 compared with 65 in 2004. Over that period, FEMA obligated over $80 billion for disaster aid. GAO's past work recommended that FEMA address the federal fiscal exposure from disaster assistance. Owner and operator of infrastructure. The federal government owns and operates hundreds of thousands of facilities that a changing climate could affect. For example, in its 2010 Quadrennial Defense Review, the Department of Defense (DOD) recognized the risk to its facilities posed by climate change, noting that the department must assess the potential impacts and adapt. GAO plans to report later this year on DOD's management of climate change risks at over 500,000 defense facilities. Provider of technical assistance to state and local governments. The federal government invests billions of dollars annually in infrastructure projects that state and local governments prioritize, such as roads and bridges. Total public spending on transportation and water infrastructure exceeds $300 billion annually, with about 25 percent coming from the federal government and the rest from state and local governments.
GAO's April 2013 report on infrastructure adaptation concluded that the federal government could help state and local efforts to increase their resilience by (1) improving access to and use of available climate-related information, (2) providing officials with improved access to local assistance, and (3) helping officials consider climate change in their planning processes. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The federal government’s information resources and technology management structure has its foundation in six laws: the Federal Records Act, the Privacy Act of 1974, the Computer Security Act of 1987, the Paperwork Reduction Act of 1995, the Clinger-Cohen Act of 1996, and the Government Paperwork Elimination Act of 1998. Taken together, these laws largely lay out the information resources and technology management responsibilities of the Office of Management and Budget (OMB), federal agencies, and other entities, such as the National Institute of Standards and Technology. In general, under the government’s current legislative framework, OMB is responsible for providing direction on governmentwide information resources and technology management and overseeing agency activities in these areas, including analyzing major agency information technology investments. Among OMB’s responsibilities are ensuring agency integration of information resources management plans, program plans, and budgets for acquisition and use of information technology and the efficiency and effectiveness of interagency information technology initiatives; developing, as part of the budget process, a mechanism for analyzing, tracking, and evaluating the risks and results of all major capital investments made by an executive agency for information systems; directing and overseeing implementation of policy, principles, standards, and guidelines for the dissemination of and access to public information; encouraging agency heads to develop and use best practices in reviewing proposed agency information collections to minimize information collection burdens and maximize information utility and benefit; and developing and overseeing implementation of privacy and security policies, principles, standards, and guidelines. Agencies, in turn, are accountable for the effective and efficient development, acquisition, and use of information technology in their organizations.
For example, the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996 require agency heads, acting through agency CIOs, to better link their information technology planning and investment decisions to program missions and goals; develop and implement a sound information technology architecture; implement and enforce information technology management policies, procedures, standards, and guidelines; establish policies and procedures for ensuring that information technology systems provide reliable, consistent, and timely financial or program performance data; and implement and enforce applicable policies, procedures, standards, and guidelines on privacy, security, disclosure, and information sharing. Another important organization in federal information resources and technology management—the CIO Council—was established by the President in July 1996. Specifically, Executive Order 13011 established the CIO Council as the principal interagency forum for improving agency practices on such matters as the design, modernization, use, sharing, and performance of agency information resources. The Council, chaired by OMB’s Deputy Director for Management with a Vice Chair selected from among its members, is tasked with (1) developing recommendations for overall federal information technology management policy, procedures, and standards, (2) sharing experiences, ideas, and promising practices, (3) identifying opportunities, making recommendations for, and sponsoring cooperation in using information resources, (4) assessing and addressing workforce issues, (5) making recommendations and providing advice to appropriate executive agencies and organizations, and (6) seeking the views of various organizations. Because it is essentially an advisory body, the CIO Council must rely on OMB’s support to see that its recommendations are implemented through federal information management policies, procedures, and standards. 
With respect to Council resources, according to its charter, OMB and the General Services Administration are to provide support and assistance, which can be augmented by other Council members as necessary. CIOs or equivalent positions exist at the state level and in other countries, although no single preferred model has emerged. The specific roles, responsibilities, and authorities assigned to the CIO or CIO-type position vary, reflecting the needs and priorities of the particular government. This is consistent with research presented in our Executive Guide: Maximizing the Success of Chief Information Officers—Learning from Leading Organizations, which points out that there is no one right way to establish a CIO position and that leading organizations are careful to ensure that information management leadership positions are appropriately defined and implemented to meet their unique business needs. Regardless of the differences in approach, the success of a CIO will typically rest on the application of certain fundamental principles. While our executive guide was specifically intended to help individual federal agencies maximize the success of their CIOs, several of the principles outlined in the guide also apply to the establishment of a governmentwide CIO. In particular, our research of leading organizations demonstrated that it is important for the organization to employ enterprisewide leaders who embrace the critical role of information technology and reach agreement on the CIO’s leadership role. Moreover, the CIO must possess sufficient stature within the organization to influence the planning process. We have not evaluated the effectiveness of state and foreign government CIOs or equivalent positions; however, these positions appear to apply some of these same principles. With respect to the states, according to the National Association of State Information Resource Executives, the vast majority have senior executives with statewide authority for IT.
State CIOs are usually in charge of developing statewide IT plans and approving statewide technical IT standards, budgets, personnel classifications, salaries, and resource acquisitions, although the CIO’s authority depends on the specific needs and priorities of the governors. Many state CIOs report directly to the state’s governor, and the trend is moving in that direction. In some cases, the CIO is guided by an IT advisory board. As the president of the National Association of State Information Resource Executives noted in prior testimony before this Subcommittee, “IT is how business is delivered in government; therefore, the CIO must be a party to the highest level of business decisions . . . needs to inspire the leaders to dedicate political capital to the IT agenda.” National governments in other countries have also established a central information technology coordinating authority and, like the states, have used different implementation approaches in doing so. Preliminary results of a recent survey conducted by the International Council for Information Technology in Government Administration indicate that 8 of 11 countries surveyed have a governmentwide CIO, although the structure, roles, and responsibilities varied. Let me briefly describe the approaches employed by three foreign governments to illustrate this variety. Australia’s Department of Communications, Information Technology and the Arts has responsibility for, among other things, (1) providing strategic advice and support to the government for moving Australia ahead in the information economy and (2) developing policies and procedures and helping to coordinate crosscutting efforts toward e-government.
The United Kingdom’s Office of the E-Envoy acts in a capacity analogous to a “national government” CIO in that it works to coordinate activities across government and with public, private, and international groups to (1) develop a legal, regulatory and fiscal environment that facilitates e-commerce, (2) help individuals and businesses take full advantage of the opportunities provided by information and communications technologies, (3) ensure that the government of the United Kingdom applies global best practices in its use of information and communications technologies, and (4) ensure that government and business decisions are informed by reliable and accurate e-commerce monitoring and analysis. Canada’s Office of the CIO is contained within the Treasury Board Secretariat, a crosscutting organization whose mission is to manage the government’s human, financial, information, and technology resources. The CIO is responsible for determining and implementing a strategy that will accomplish governmentwide IT goals. Moreover, the CIO is to (1) provide leadership, coordination and broad direction in the use of IT; (2) facilitate enterprisewide solutions to crosscutting IT issues; and (3) serve as technology strategist and expert adviser to Treasury Board Ministers and senior officials across government. The CIO also develops a Strategic Directions document that focuses on the management of critical IT, information management, and service delivery issues facing the government. This document is updated regularly and is used by departments and agencies as a guide. While these countries’ approaches differ in terms of specific CIO or CIO-type roles and responsibilities, in all cases the organization has responsibility for coordinating governmentwide implementation of e-government and providing leadership in the development of the government’s IT strategy and standards. As you know, the Congress is currently considering legislation to establish a federal CIO.
Specifically, two proposals before this Subcommittee—H.R. 4670, the Chief Information Officer of the United States Act of 2000, and H.R. 5024, the Federal Information Policy Act of 2000—share a common call for central IT leadership from a federal CIO, although they differ in how the roles, responsibilities, and authorities of the position would be established. Several similarities exist in the two bills: Both elevate the visibility and focus of information resources and technology management by establishing a federal CIO who (1) is appointed by the President with the advice and consent of the Senate, (2) reports directly to the President, (3) is a Cabinet-level official, and (4) provides central leadership. The importance of such high-level visibility should not be underestimated. Our studies of leading public and private-sector organizations have found that successful CIOs commonly are full members of executive management teams. Both leave intact OMB’s role and responsibility to review and ultimately approve agencies’ information technology funding requests for inclusion in the President’s budget submitted to the Congress each year. However, both require the federal CIO to review and recommend to the President and the Director of OMB changes to the IT budget proposals submitted by agencies. As we have previously testified before your Subcommittee, an integrated approach to budgeting and feedback is absolutely critical for progress in government performance and management. Certainly, close coordination between the federal CIO and OMB would be necessary to coordinate the CIO’s technical oversight and OMB’s budget responsibilities. Finally, both bills establish the existing federal CIO Council in statute. Just as with the Chief Financial Officers’ Council, there are important benefits associated with having a strong statutory base for the CIO Council.
Legislative foundations transcend presidential administrations, fluctuating policy agendas, and the frequent turnover of senior appointees in the executive branch. Having congressional consensus and support for the Council helps ensure continuity of purpose over time and allows constructive dialogue between the two branches of government on rapidly changing management and information technology issues before the Council. Moreover, because the Congress is a prime user of performance and financial information, a statutorily based Council can provide it with an effective oversight tool for gauging the progress and impact of the Council in advancing effective involvement of agency CIOs in governmentwide IT initiatives. The two bills also set forth duties that are consistent with, and expand upon, the duties of the current CIO Council. For example, the Council would be responsible for coordinating the acquisition and provision of common infrastructure services to facilitate communication and data exchange among agencies and with state, local, and tribal governments. While the bills have similarities, as a result of contrasting approaches, the two bills have major differences. In particular, H.R. 5024 vests in the federal CIO the information resources and technology management responsibilities currently assigned to OMB as well as oversight of related activities of the General Services Administration and promulgation of information system standards developed by the National Institute of Standards and Technology. On the other hand, H.R. 4670 generally does not change the responsibilities of these agencies; instead it calls on the federal CIO to advise agencies and the Director of OMB and to consult with nonfederal entities, such as state governments and the private sector. Appendix I provides more detail on how information resources and technology management functions granted to the federal CIO compare between the two bills, and with OMB’s current responsibilities.
Let me turn now to a few implementation issues associated with both of these bills. One such issue common to both is that effective implementation will require that appropriate presidential attention and support be given to the new federal CIO position and that adequate resources, including staffing and funding, be provided. As discussed below, each bill likewise has unique strengths and challenges. H.R. 4670: This bill creates an Office of Information Technology within the Executive Office of the President, headed by a federal CIO, with a limit of 12 staff. Among the duties assigned to the CIO are (1) providing leadership in innovative use of information technology, (2) identifying opportunities for, and coordinating, major multi-agency information technology initiatives, and (3) consulting with leaders in information technology management in state governments, the private sector, and foreign governments. OMB’s statutory responsibilities related to information resources and technology management would remain largely unchanged under this bill. One strength of this bill is that it would allow a federal CIO to focus full-time attention on promoting key information technology policy and crosscutting issues within government and in partnership with other organizations without direct responsibility for implementation and oversight, which would remain the responsibility of OMB and the agencies. Moreover, the federal CIO could promote collaboration among agencies on crosscutting issues, adding Cabinet-level support to efforts now initiated and sponsored by the CIO Council. Further, the federal CIO could establish and/or buttress partnerships with state, local, and tribal governments, the private sector, or foreign entities.
Such partnerships were key to the government’s Year 2000 (Y2K) success and could be essential to addressing other information technology issues, such as critical infrastructure protection, since private-sector systems control most of our nation’s critical infrastructures (e.g., energy, telecommunications, financial services, transportation, and vital human services). A major challenge associated with H.R. 4670’s approach, on the other hand, is that federal information technology leadership would be shared. While the CIO would be the President’s principal adviser on these issues, OMB would retain critical statutory responsibilities in this area. For example, both the federal CIO and OMB would have a role in overseeing the government’s IT and interagency initiatives. Certainly, it would be crucial for the OMB Director and the federal CIO to mutually support each other and work effectively together to ensure that their respective roles and responsibilities are clearly communicated. Without a mutually constructive working relationship with OMB, the federal CIO’s ability to achieve the potential improvements in IT management and cross-agency collaboration would be impaired. H.R. 5024: This bill establishes an Office of Information Policy within the Executive Office of the President and headed by a federal CIO. The bill would substantially change the government’s existing statutory information resources and technology management framework because it shifts much of OMB’s responsibilities in these areas to the federal CIO. For example, it calls for the federal CIO to develop and oversee the implementation of policies, principles, standards, and guidance with respect to (1) information technology, (2) privacy and security, and (3) information dissemination. A strength of this approach would be the single, central focus for information resources and technology management in the federal government. 
A primary concern we have with OMB’s current structure as it relates to information resources and technology management is that, in addition to their responsibilities in these areas, both the Deputy Director for Management and the Administrator of the Office of Information and Regulatory Affairs (OIRA) have other significant duties, which necessarily restrict the amount of attention that they can give to information resources and technology management issues. For example, much of OIRA is staffed to act on 3,000 to 5,000 information collection requests from agencies per year, review about 500 proposed and final rules each year, and calculate the costs and benefits of all federal regulations. A federal CIO, like agency CIOs, should be primarily concerned with information resources and technology management. This bill would clearly address this concern. Another important strength of H.R. 5024 is that the federal CIO would be the sole central focus for information resources and technology management and could resolve conflicts stemming from differing perspectives or goals among executive branch agencies. In contrast, a major challenge associated with implementing H.R. 5024 is that by removing much of the responsibility for information resources and technology management from OMB, the federal CIO could lose the leverage associated with OMB’s budget-review role. A strong linkage with the budget formulation process is often a key factor in gaining serious attention for management initiatives throughout government and in reinforcing agencies’ management priorities. Regardless of approach, we agree that strong and effective central information resources and technology management leadership is needed in the federal government. A central focal point such as a federal CIO can play the essential role of ensuring that attention in these areas is sustained.
Increasingly, the challenges the government faces are multidimensional problems that cut across numerous programs, agencies, and governmental tools. Although the respective departments and agencies should have the primary responsibility and accountability to address their own issues (and both bills maintain these agency roles), central leadership has the responsibility to keep everybody focused on the big picture by identifying the agenda of governmentwide issues needing attention and ensuring that related efforts are complementary rather than duplicative. Another task facing central leadership is serving as a catalyst and strategist to prompt agencies and other critical players to come to the table and take ownership for addressing the agenda of governmentwide information resources and technology management issues. In the legislative deliberations on the Clinger-Cohen Act, we supported strengthened central management through the creation of a formal CIO position for the federal government. A CIO for the federal government could provide a strong, central point of coordination for the full range of governmentwide information resources management and technology issues, including (1) reengineering and/or consolidating interagency or governmentwide processes and technology infrastructure; (2) managing shared assets; and (3) focusing attention on, evaluating progress of, and providing assistance to high-risk, complex information systems modernization efforts. In particular, a federal CIO could provide sponsorship, direction, and sustained focus on the major challenges the government is facing in areas such as critical infrastructure protection and security, e-government, and large-scale IT investments.
For example, to be successful, e-government initiatives designed to improve citizen access to government must overcome some of the basic challenges that have plagued information systems for decades: lack of executive-level sponsorship, involvement, and controls; inadequate attention to business and technical architectures; weak adherence to standards; and poor security. In the case of e-government, a CIO could (1) help set priorities for the federal government; (2) ensure that agencies consider interagency web site possibilities, including how best to implement portals or central web access points that provide citizens access to similar government services; and (3) help establish funding priorities, especially for crosscutting e-government initiatives. The government’s success in combating the Year 2000 problem demonstrated the benefit of strong central leadership. As our Year 2000 lessons learned report being released today makes clear, the leadership of the Chair of the President’s Council on Year 2000 Conversion was invaluable in combating the Year 2000 problem. Under the Chair’s leadership, the government’s actions went beyond the boundaries of individual programs or agencies and involved governmentwide oversight, interagency cooperation, and cooperation with partners, such as state and local governments, the private sector, and foreign governments. It is important to maintain this same momentum of executive-level attention to information management and technology decisions within the federal government. The information issues confronting the government in the new Internet-based technology environment evolve rapidly and carry significant implications for future directions. A federal CIO could maintain and build upon Y2K actions in leading the government’s future IT endeavors.
Accordingly, our Y2K lessons learned report calls for the Congress to consider establishing a formal chief information officer position for the federal government to provide central leadership and support. Consensus has not been reached within the federal community on the need for a federal CIO. Department and agency responses to questions developed by the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs regarding opinions about the need for a federal CIO revealed mixed reactions. In addition, at our March 2000 Y2K Lessons Learned Summit, which included a broad range of public and private-sector IT managers and policymakers, some participants did not agree or were uncertain about whether a federal CIO was needed. Further, in response to a question before this Subcommittee on the need for a federal IT leader accountable to the President, the Director of OMB stated that OMB’s Deputy Director for Management, working with the head of the Office of Information and Regulatory Affairs, can be expected to take a federal information technology leadership role. The Director further stated that he believed that “the right answer is to figure out how to continue to use the authority and the leadership responsibilities at the Office of Management and Budget to play a lead role in this area.” In conclusion, Mr. Chairman, the two bills offered by members of this Subcommittee both address the need for central leadership while resolving the sharing of responsibilities with OMB in different ways. Each offers a different approach to problems that must be addressed to increase the government’s ability to use the information resources at its disposal effectively, securely, and with the best service to the American people. Regardless of approach, a central focal point such as a federal CIO can play the essential role of ensuring that attention to information technology issues is sustained. Mr.
Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For information about this testimony, please contact me at (202) 512-6240 or by e-mail at [email protected]. Individuals making key contributions to this testimony include John Christian, Lester Diamond, Tamra Goldstein, Linda Lambert, Thomas Noone, David Plocher, and Tomas Ramirez. OMB’s Current Functions: Develop, as part of the budget process, a mechanism for analyzing, tracking, and evaluating the risks and results of all major capital investments made by an executive agency for information systems. Review and recommend to the President and the Director of OMB changes to budget and legislative proposals of agencies. Implement periodic budgetary reviews of agency information resources management activities to ascertain efficiency and effectiveness of IT in improving agency mission performance. Advise and assist the Director of OMB in developing, as part of the budget process, a mechanism for analyzing, tracking, and evaluating the risks and results of all major capital investments made by an executive agency for information systems. Take actions through the budgetary and appropriations management process to enforce agency accountability for information resources management and IT investments, including the reduction of funds. Serves as the Chairperson of the CIO Council, established by the bill in statute.
Request that the Director of OMB take action, including involving the budgetary or appropriations management process, to enforce agency accountability for information resources management and IT investments, including the reduction of funds. The Deputy Director for Management serves as the Chairperson of the CIO Council, which was created by Executive Order. In consultation with the Administrator of the National Telecommunications and Information Administration, develop and implement procedures for the use and acceptance of electronic signatures by agencies by April 21, 2000. Advise the Director of OMB on electronic records. In consultation with the Director of OMB and the Administrator of the National Telecommunications and Information Administration, develop and implement procedures for the use and acceptance of electronic signatures by agencies by October 1, 2000. Develop and implement procedures to permit private employers to store and file electronically with agencies forms containing information pertaining to the employees of such employers. In consultation with the Director of OMB, develop and implement procedures to permit private employers to store and file electronically with agencies forms containing information pertaining to the employees of such employers. In consultation with the Administrator of the National Telecommunications and Information Administration, study and periodically report on the use of electronic signatures. In consultation with the Director of OMB and the Administrator of the National Telecommunications and Information Administration, study and periodically report on the use of electronic signatures. Provide direction and oversee activities of agencies with respect to the dissemination of and public access to information. Advise the Director of OMB on information dissemination.
Assisted by the CIO Council and others, monitor the implementation of the requirements of the Government Paperwork Elimination Act, the Electronic Signatures in Global and National Commerce Act, and related laws. Provide direction and oversee activities of agencies with respect to the dissemination of and public access to information. Foster greater sharing, dissemination, and access to public information. Develop and oversee the implementation of policies, principles, standards, and guidance with respect to information dissemination. Cause to be established and oversee an electronic Government Information Locator Service (GILS). Develop, coordinate, and oversee the implementation of uniform information resources management policies, principles, standards, and guidelines. Advise the Director of OMB on information resources management policy. Oversee agency integration of program and management functions with information resources management functions. In consultation with the Administrator of General Services, the Director of the National Institute of Standards and Technology, the Archivist of the United States, and the Director of the Office of Personnel Management, develop and maintain a governmentwide strategic plan for information resources management.
In consultation with the Director of OMB, the Administrator of General Services, the Director of the National Institute of Standards and Technology, the Archivist of the United States, the Director of the Office of Personnel Management, and the CIO Council, develop and maintain a governmentwide strategic plan for information resources management. Initiate and review proposals for changes in legislation, regulations, and agency procedures to improve information resources management practices. Monitor information resources management training for agency personnel. Keep the Congress informed on the use of information resources management best practices to improve agency program performance. Periodically review agency information resources management activities. Report annually to the Congress on information resources management. Serve as the principal adviser to the President on matters relating to the development, application, and management of IT by the federal government. Ensure that agencies integrate information resources plans, program plans, and budgets for acquisition and use of technology. Advise the President on opportunities to use IT to improve the efficiency and effectiveness of programs and operations of the federal government. Advise the Director of OMB on IT management.
Develop and oversee the implementation of policies, principles, standards, and guidelines for IT functions and activities, in consultation with the Secretary of Commerce and the CIO Council. Provide direction and oversee activities of agencies with respect to the acquisition and use of IT. Report annually to the President and the Congress on IT management. Promote the use of IT by the federal government to improve the productivity, efficiency, and effectiveness of federal programs. Promulgate, in consultation with the Secretary of Commerce, standards and guidelines for federal information systems. Promote agency investments in IT that enhance service delivery to the public, improve cost-effective government operations, and serve other objectives critical to the President. Oversee the effectiveness of, and compliance with, directives issued under section 110 of the Federal Property and Administrative Services Act (which established the Information Technology Fund). Review the federal information system standards-setting process, in consultation with the Secretary of Commerce, and report to the President. Direct the use of the Information Technology Fund by the Administrator of General Services. Provide advice and assistance to the Administrator of the Office of Federal Procurement Policy regarding IT acquisition. Coordinate OIRA policies regarding IT acquisition with the Office of Federal Procurement Policy. Consult with leaders in state governments, the private sector, and foreign governments. Oversee the development and implementation of computer system standards and guidance issued by the Secretary of Commerce through the National Institute of Standards and Technology.
Designate agencies, as appropriate, to be executive agents for governmentwide acquisitions of IT. Compare agency performance in using IT. Encourage use of performance-based management in complying with IT management requirements. Establish minimum criteria within 1 year of enactment to be used for independent evaluations of IT programs and management processes. Direct agencies to develop capital planning processes for managing major IT investments. Services with regard to the provision of any information resources-related services for or on behalf of agencies, including the acquisition or management of telecommunications or other IT or services. Direct agencies to analyze private sector alternatives before making an investment in a new information system. Direct agencies to undertake an agency mission reengineering analysis before making significant investments in IT to support these missions. Evaluate agency practices with respect to the performance of investments made in IT.
Conduct pilot projects with selected agencies and nonfederal entities to test alternative policies and practices. Assess experiences of agencies, state and local governments, international organizations, and the private sector in managing IT. Provide leadership in the innovative use of technology by agencies through support of experimentation, testing, and adoption of innovative concepts and technologies, particularly with regard to multi-agency initiatives. Ensure the efficiency and effectiveness of interagency IT initiatives. Identify opportunities and coordinate major multiagency IT initiatives. Issue guidance to agencies regarding interagency and governmentwide IT investments to improve the accomplishment of common missions and for the multiagency procurement of commercial IT items. Apply capital planning, investment control, and performance management requirements to national security systems to the extent practicable. Consult with the heads of agencies that operate national security systems.
Review agency collections of information to reduce paperwork burdens on the public. Advise the Director of OMB on paperwork reduction. Provide advice and assistance to agencies and to the Director of OMB to promote efficient collection of information and the reduction of paperwork burdens on the public. Provide direction and oversee activities of agencies with respect to privacy, confidentiality, security, disclosure, and sharing of information. Advise the Director of OMB on privacy, confidentiality, security, disclosure, and sharing of information. Develop and oversee the implementation of policies, principles, standards, and guidelines on privacy, confidentiality, security, disclosure, and sharing of agency information. Oversee and coordinate compliance with the Privacy Act, the Freedom of Information Act, the Computer Security Act, and related information management laws. Require federal agencies, consistent with the Computer Security Act, to identify and afford security protections commensurate with the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to or modification of agency information.
Require federal agencies, consistent with the Computer Security Act, to identify and afford security protections commensurate with the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to or modification of agency information collected or maintained. Review agency computer security plans required by the Computer Security Act. Oversee agency compliance with the Privacy Act. Establish governmentwide policies for promoting risk-based management of information security as an integral component of each agency’s business operations. Direct agencies to use best security practices, develop an agencywide security plan, and apply information security requirements throughout the information system life cycle. Provide direction and oversee activities of agencies with respect to records management activities. Advise the Director of OMB on records management. Provide advice and assistance to the Archivist of the United States and the Administrator of General Services to promote coordination of records management with information resources management requirements. Review agency compliance with requirements and regulations. Oversee the application of records management policies, principles, standards, and guidelines in the planning and design of information systems. Provide direction and oversee activities of agencies with respect to statistical activities. Advise the Director of OMB on statistical policy and coordination.
Coordinate the activities of the federal statistical system. Ensure that agency budget proposals are consistent with systemwide priorities for maintaining and improving the quality of federal statistics. Consult with the Director of OMB to ensure that agency budget proposals are consistent with systemwide priorities for maintaining and improving the quality of federal statistics. Develop and oversee governmentwide statistical policies, principles, standards, and guidelines. Evaluate statistical program performance and agency compliance with governmentwide statistical policies, principles, standards, and guidelines. Promote the sharing of information collected for statistical purposes. Coordinate U.S. participation in international statistical activities. Establish an Interagency Council on Statistical Policy, headed by an appointed chief statistician. Provide opportunities for training in statistical policy. H.R.
4670 specifically authorizes the CIO to advise the Director of OMB to “ensure effective implementation of the functions and responsibilities assigned under chapter 35 of title 44, United States Code.” These functions include electronic records (through the Government Paperwork Elimination Act of 1998), information dissemination, information resources management policy, information technology management, paperwork reduction, privacy and security, records management, and statistical policy and coordination. (512023) | Pursuant to a congressional request, GAO discussed the creation of a federal chief information officer (CIO), focusing on the: (1) structure and responsibilities of existing state and foreign governmentwide CIO models; (2) federal CIO approaches proposed by two bills; and (3) type of leadership responsibilities that a federal CIO should possess. GAO noted that: (1) GAO has not evaluated the effectiveness of state and foreign government CIOs or equivalent positions--however, these positions appear to apply some of the same principles outlined in GAO's CIO executive guide; (2) state CIOs are usually in charge of developing statewide information technology (IT) plans and approving statewide IT standards, budgets, personnel classifications, salaries, and resource acquisitions; (3) national governments in other countries have also established a central IT coordinating authority and have different implementation approaches in doing so; (4) Congress is considering legislation to establish a federal CIO; (5) two proposals--H.R. 4670, the Chief Information Officer of the United States Act of 2000, and H.R.
5024, the Federal Information Policy Act of 2000--share a common call for central IT leadership from a federal CIO, although they differ in how the roles, responsibilities, and authorities of the position would be established; (6) regardless of approach, strong and effective central information resources and technology management leadership is needed in the federal government; (7) a central focal point such as a federal CIO can play the essential role of ensuring that attention in these areas is sustained; (8) although the respective departments and agencies should have the primary responsibility and accountability to address their own issues--and both bills maintain these agency roles--central leadership has the responsibility to keep everybody focused on the big picture by identifying the agenda of governmentwide issues needing attention and ensuring that related efforts are complementary rather than duplicative; (9) another task facing central leadership is serving as a catalyst and strategist to prompt agencies and other critical players to come to the table and take ownership for addressing the agenda of governmentwide information resources and technology management issues; (10) a federal CIO could provide sponsorship, direction, and sustained focus on the major challenges the government is facing in areas such as critical infrastructure protection and security, e-government, and large-scale IT investments; and (11) consensus has not been reached within the federal community on the need for a federal CIO. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Operation Desert Storm demonstrated that the U.S. military and other allied forces have limited capability against theater ballistic missiles. In fact, U.S. defensive capability is limited to weapons that defend against missiles nearing the end of their flight, such as the Patriot. No capability currently exists to destroy missiles in the boost phase. Consequently, DOD is expending considerable resources to develop the ABL’s capability to intercept missiles in their boost phase. In simple terms, the ABL program will involve placing various components, including a powerful multimegawatt laser, a beam control system, and related equipment, in a Boeing 747-400 aircraft and ensuring that all the components work together to detect and destroy enemy missiles in their boost phase. In November 1996, the Air Force awarded a 77-month program definition and risk reduction contract to the team of Boeing, TRW, and Lockheed Martin. Under the contract, Boeing is to produce and modify the 747-400 aircraft and integrate the laser and the beam control system with the aircraft, TRW will develop the multimegawatt Chemical Oxygen Iodine Laser (COIL) and ground support systems, and Lockheed Martin will develop the beam control system. The various program components are in the early phases of design and testing. One prototype ABL will be produced and used in 2002 to shoot down a missile in its boost phase. If this demonstration is successful, the program will move into the engineering and manufacturing development phase in 2003. Production is scheduled to begin about 2005. Initial operational capability of three ABLs is scheduled for 2006; full operational capability of seven ABLs is scheduled for 2008. 
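The program schedule above can be collected into a small lookup. The dates and quantities are those stated in the text; the structure itself is only an illustrative sketch, and the helper function is a hypothetical convenience, not part of any program document.

```python
# Illustrative sketch: ABL program milestones as stated in the text.
# The dict keys/values restate the schedule; aircraft_available() is a
# hypothetical helper summarizing planned fleet size by year.
MILESTONES = {
    1996: "77-month program definition and risk reduction contract awarded",
    2002: "prototype ABL to shoot down a missile in its boost phase",
    2003: "engineering and manufacturing development begins (if demo succeeds)",
    2005: "production scheduled to begin",
    2006: "initial operational capability (3 ABLs)",
    2008: "full operational capability (7 ABLs)",
}

def aircraft_available(year: int) -> int:
    """Planned number of operational ABLs reached by a given year."""
    if year >= 2008:
        return 7  # full operational capability
    if year >= 2006:
        return 3  # initial operational capability
    return 0      # still in development, demonstration, or production

print(aircraft_available(2006))  # 3
print(aircraft_available(2009))  # 7
```

A flat year-keyed dict keeps the schedule easy to scan and to amend if milestones slip.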
The ABL is a complex laser weapon system that is expected to detect an enemy missile shortly after its launch, track the missile’s path, and destroy the missile by holding a concentrated laser beam on it until the beam’s heat causes the pressurized missile casing to crack, in turn causing the missile to explode and the warhead to fall to earth well short of its intended target. The ABL’s opportunity to shoot down a missile lasts only from the time the missile has cleared the cloud tops until its booster burns out. That interval can range from 30 to 140 seconds, depending on missile type. During that interval, the ABL is expected to detect, track, and destroy the missile, as shown in figure 1. The first step—detection—is to begin when the ABL’s infrared search sensor detects a burst of heat that could be fire from a missile’s booster. Because clouds block the view of the infrared search sensor, the sensor cannot detect this burst of heat until the missile has broken through the cloud tops—assumed to be at about 38,500 feet. The sensor detects the heat burst about 2 seconds after the missile has cleared the cloud tops. (In the absence of clouds, detection can occur earlier.) The ABL would then use information from the sensor to verify that the heat burst is the plume of a missile in its boost phase and would then move the telescope located in the nose of the aircraft toward the coordinates identified by the infrared sensor. The second step—tracking—is to be performed sequentially and with increasing precision by several ABL devices. The first of these tracking devices, the acquisition sensor, is to take control of the telescope, center the plume in the telescope’s field of view, and hand off that information to the next device, the plume tracker.
The plume tracker, having taken control of the telescope, is to track and determine the shape of the missile plume and use this information to estimate the location of the missile’s body and project a beam from the track illuminator laser to light up the nose cone of the missile. The plume tracker is then to hand its information, and control of the telescope, to the final tracking device, the fine tracker. The fine tracker is to measure the effects of turbulence and determine the aimpoint for the beacon laser and, ultimately, for the COIL laser. The reflected light from the illuminator laser provides information that is to be used to operate a sophisticated mirror system (known as a fast-steering mirror) that helps to compensate for optical turbulence by stabilizing the COIL beam on the target. The reflected light from the beacon laser provides information that is to be used to operate deformable mirrors that will further compensate for turbulence by shaping the COIL beam. With the illuminator and beacon lasers still operating, the fine tracker is to determine the aimpoint for the COIL laser. The COIL laser is to be brought to full power and focused on the aimpoint. At this point, the final step in the sequence—missile destruction—is to begin. During this final step, a lethal laser beam is held on the missile. The length of time that the beam must dwell on the missile will depend on turbulence levels and the missile type, hardness, range, and altitude. Throughout the lethal dwell, the illuminator and beacon lasers are to continue to operate, providing the information to operate the fast-steering and deformable mirrors. Under the intense heat of the laser beam, which is focused on an area about the size of a basketball, the missile’s pressurized casing fractures, and then explodes, destroying the missile. The ABL is expected to operate from a central base in the United States and be available to be deployed worldwide. 
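The detect–track–destroy handoff chain described above can be sketched as a simple timing-budget check. The 2-second detection delay and the 30-to-140-second boost window come from the report; the per-stage handoff durations and the function name below are hypothetical placeholders, not actual ABL figures.

```python
# Illustrative sketch only: the detect -> track -> destroy handoff chain,
# with a check that tracking plus the lethal dwell fit in the boost window.
# Only the 2-second detection delay is from the report; the other stage
# durations are hypothetical.

STAGES = [
    ("detect", 2),        # IR search sensor picks up plume after cloud break
    ("acquire", 3),       # acquisition sensor centers plume in telescope (hypothetical)
    ("plume_track", 4),   # plume tracker illuminates the nose cone (hypothetical)
    ("fine_track", 5),    # fine tracker sets the COIL aimpoint (hypothetical)
]

def engagement_feasible(dwell_s: float, window_s: float) -> bool:
    """True if the tracking handoffs plus the lethal dwell fit in the boost window."""
    setup_s = sum(t for _, t in STAGES)  # 14 s with the values above
    return setup_s + dwell_s <= window_s

# A 20-second dwell fits a 140-second window but not a 30-second one.
print(engagement_feasible(20, 140), engagement_feasible(20, 30))  # True False
```

The point of the sketch is the report's central timing constraint: the longer the lethal dwell a given missile and turbulence level require, the fewer missile types fit inside the shortest boost windows.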
The program calls for a seven-aircraft fleet, with five aircraft to be available for operational duty at any given time. The other two aircraft are to be undergoing modifications or down for maintenance or repair. When the ABLs are deployed, two aircraft are to fly, in figure-eight patterns, above the clouds at about 40,000 feet. Through in-flight refueling, which is to occur between 25,000 and 35,000 feet, and rotation of aircraft, two ABLs will always be on patrol, thus ensuring 24-hour coverage of potential missile launch sites within the theater of operations. The ABLs are intended to operate about 90 kilometers behind the front line of friendly troops but could move forward once air superiority has been established in the theater of operations. When on patrol, the ABLs are to be provided the same sort of fighter and/or surface-to-air missile protection provided to other high-value air assets, such as the Airborne Warning and Control System and the Joint Surveillance Target Attack Radar System. A key factor in determining whether the ABL will be able to successfully destroy a missile in its boost phase is the Air Force’s ability to predict the levels of turbulence that the ABL is expected to encounter. Those levels are needed to define the ABL’s technical requirements for turbulence. To date, the Air Force has not shown that it can accurately predict the levels of turbulence the ABL is expected to encounter or that its technical requirements regarding turbulence are appropriate. The type of turbulence that the ABL will encounter is referred to as optical turbulence. It is caused by temperature variations in the atmosphere. These variations distort and reduce the intensity of the laser beam. Optical turbulence can be measured either optically or non-optically. Optical measurements are taken by transmitting laser beams from one aircraft to instruments on board another aircraft at various altitudes and distances. 
Non-optical measurements of turbulence are taken by radar or by temperature probes mounted on balloons or on an aircraft’s exterior. The Air Force’s ABL program office has not determined whether non-optical measurements of turbulence can be mathematically correlated with optical measurements. Without demonstrating that such a correlation exists, the program office cannot ensure that the non-optical measurements of turbulence that it is collecting are useful in predicting the turbulence likely to be encountered by the ABL’s laser beam. Concern about turbulence measurements was expressed by a DOD oversight office nearly 1 year ago. In November 1996, during its milestone 1 review of the ABL program, the Defense Acquisition Board directed the program office to develop a plan for gathering additional data on optical turbulence and present that plan to a senior-level ABL oversight team for approval. The Board also asked the program office to “demonstrate a quantifiable understanding of the range and range variability due to optical turbulence and assess operational implications.” This requirement was one of several that the Air Force has been asked to meet before being granted the authority to proceed with development of the ABL. That authority-to-proceed decision is scheduled for June 1998. In February 1997, the program office presented to the oversight team a plan for gathering only non-optical data. The oversight team accepted the plan but noted concern that the plan was based on a “fundamental assumption” of a correlation between non-optical and optical measurements. If that assumption does not prove to be accurate, according to the oversight team, the program office will have to develop a new plan to gather more relevant (i.e., optical rather than non-optical) measurements. 
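Whether the "fundamental assumption" holds is an empirical question, but the check itself is a straightforward statistical comparison of paired measurements. A minimal sketch of that kind of check follows; the per-leg values are synthetic stand-ins, not ABLE ACE data, and the variable names are invented for illustration.

```python
# Sketch of the comparison a correlation demonstration would require:
# path-averaged optical turbulence strengths vs. co-located non-optical
# (temperature-probe) estimates over the same flight legs. All values
# below are synthetic stand-ins, not ABLE ACE measurements.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-leg values (arbitrary normalized units).
optical = [1.0, 2.1, 3.2, 4.0, 5.3]
non_optical = [1.1, 1.9, 3.0, 4.4, 5.0]

r = pearson(optical, non_optical)
print(f"r = {r:.2f}")  # a value near 1.0 would support the assumed correlation
```

A high coefficient on real paired data would support using non-optical point measurements as a proxy; a weak one would confirm the oversight team's concern that a new, optical data-gathering plan is needed.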
Accordingly, the oversight team required that the program office include in its data-gathering plan a statement agreeing to demonstrate the correlation between the non-optical and optical measurements. Program officials said they plan to demonstrate that correlation in the summer of 1997. To establish that a correlation exists, the program office plans to use optical and non-optical turbulence measurements taken during a 1995 Air Force project known as Airborne Laser Extended Atmospheric Characterization Experiment (ABLE ACE). Optical measurements were made by transmitting two laser beams from one aircraft to instruments aboard another aircraft at distances from 13 to 198 kilometers and at altitudes from 39,000 to 46,000 feet. These measurements provided the data used to calculate the average turbulence strengths encountered by the beams over these distances. The ABLE ACE project also took non-optical measurements of turbulence using temperature probes mounted on the exterior of one of the aircraft. Rather than taking measurements over the path of a laser beam between two aircraft, as with the optical measurements, the probes measured temperature variations of the air as the aircraft flew its route. Opinions vary within DOD about whether a correlation between optical and non-optical turbulence measurements can be established. Some atmospheric experts, who are members of the program office’s Working Group on Atmospheric Characterization, criticized the program office’s plan for collecting additional atmospheric data because it did not include additional optical measurements. Minutes from a Working Group meeting indicated that some of these experts believed that “current scientific understanding is far too immature” to predict optical effects from non-optical point measurements. 
In contrast, the chief scientist for the ABL program said it would be surprising if the two measurements were not directly related; he added that evaluations at specific points in the ABLE ACE tests have already indicated a relationship. According to the chief scientist, it would be prudent for the program office to continue to collect non-optical data while it completes its in-depth analysis of the ABLE ACE data. According to a DOD headquarters official, because the ABL is an optical weapon, gathering non-optical data without first establishing their correlation to optical data is risky. The official concluded that, if the program office cannot establish this correlation, turbulence data will have to be gathered through optical means. The ABL program office also has not shown that the turbulence levels in which the ABL is being designed to operate are realistic. Available optical data indicate that the turbulence the ABL may encounter could be four times greater than the design specifications. These higher levels of optical turbulence would decrease the effective range of the ABL system. The ABL program office set the ABL’s design specifications for optical turbulence at twice the level that, according to a model, the ABL would likely encounter at its operational altitude. This model was based on research carried out in 1984 for the ground-based laser/free electron laser program, in which non-optical measurements were taken by 12 balloon flights at the White Sands Missile Range in New Mexico. Each of the 12 flights took temperature measurements at various altitudes. These measurements were then used to develop a turbulence model that the program office refers to as “clear 1 night.” The clear 1 night model shows the average turbulence levels found at various altitudes. The ABL is being designed to operate at about 40,000 feet, so the turbulence expected at that level became the starting point for setting the design specifications. 
To ensure that the ABL would operate effectively at the intended ranges, for design purposes, the program office doubled the turbulence levels indicated by its clear 1 night model. The program office estimated that the ABL could be expected to encounter turbulence at or below that level 85 percent of the time. This estimate was based on the turbulence measured by 63 balloon flights made at various locations in the United States during the 1980s. When the ABL design specifications were established, the program office had very little data on turbulence. However, more recent data, accumulated during the ABLE ACE program, indicated that turbulence levels in many areas were much greater than those the ABL is being designed to handle. According to DOD officials, if such higher levels of turbulence are encountered, the effective range of the ABL system would decrease, and the risk that the ABL system would be underdesigned for its intended mission would increase. DOD officials also indicated that a more realistic design may not be achievable using current state-of-the-art technology. ABLE ACE took optical measurements in various parts of the world, including airspace over the United States, Japan, and Korea. According to the program office and Office of the Secretary of Defense (OSD) analyses of optical measurements taken during seven ABLE ACE missions, overall turbulence levels exceeded the design specifications 50 percent of the time. For the two ABLE ACE missions flown over Korea, the measurements indicated turbulence of up to four times the design specifications. Additionally, according to officials in OSD, ABLE ACE data were biased toward benign, low-turbulent, nighttime conditions. According to these officials, turbulence levels may be greater in the daytime. Developing and integrating a weapon-level laser, a beam control system, and the many associated components and software systems into an aircraft are unprecedented challenges for DOD. 
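The margin arithmetic running through the turbulence discussion above reduces to a simple exceedance calculation. In the sketch below, the doubling of the "clear 1 night" model level and the roughly 50-percent exceedance rate come from the report; the individual sample values (and the normalization to 1.0) are hypothetical.

```python
# Sketch of the design-margin arithmetic described in the report: the
# specification was set at twice the "clear 1 night" model level, yet
# ABLE ACE optical data exceeded that specification about half the time,
# and by up to 4x over Korea. Sample values below are hypothetical.

CLEAR_1_NIGHT = 1.0                 # model turbulence level, normalized
DESIGN_SPEC = 2.0 * CLEAR_1_NIGHT   # program office doubled the model level

def exceedance_fraction(samples, spec):
    """Fraction of turbulence samples that exceed the design specification."""
    return sum(s > spec for s in samples) / len(samples)

# Hypothetical normalized mission samples; the worst reaches 4x the spec.
samples = [0.8, 1.5, 2.5, 3.0, 8.0, 1.2, 1.8, 6.5]
print(exceedance_fraction(samples, DESIGN_SPEC))  # 0.5
```

The same calculation run against real optical data is what would show whether the specification, and the 85-percent availability estimate built on it, is realistic.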
Although DOD has integrated a weapon-level laser and beam control system on the ground at White Sands Missile Range, it has not done so in an aircraft environment. Therefore, it has not had to contend with size and weight restrictions, motion and vibrations, and other factors unique to an aircraft environment. The COIL is in the early development stage. The Air Force must build the laser to be able to contend with size and weight restrictions, motion and vibrations, and other factors unique to an aircraft environment, yet be powerful enough to sustain a killing force over a range of at least 500 kilometers. It is to be constructed in a configuration that links modules together to produce a single high-energy beam. The laser being developed for the program definition and risk reduction phase will have six modules. The laser to be developed for the engineering and manufacturing development phase of the program will have 14 modules. To date, one developmental module has been constructed and tested. Although this developmental module exceeded its energy output requirements, it is too heavy and too large to meet integration requirements. The module currently weighs about 5,535 pounds and must be reduced to about 2,777 pounds. The module’s width must also be reduced by about one-third. To accomplish these reductions, many components of the module may have to be built of advanced materials, such as composites. The ABL aircraft, a Boeing 747-400 Freighter, will require many modifications to allow integration of the laser, beam control system, and other components. A significant modification is the installation of the beam control turret in the nose of the aircraft. The beam control turret is to be used for acquisition, tracking, and pointing actions used in destroying a missile. Consequently, the location of the turret is critical to the success of the ABL. 
Issues associated with the turret include the decreased aircraft performance resulting from the additional drag on the aircraft; the interaction of the laser beam with the atmosphere next to the turret, which can cause the laser beam to lose intensity; and vibrations from the operation of the aircraft that affect the accuracy of pointing the beam control turret. The contractor has conducted wind tunnel tests of these expected effects for three different turret locations and found that installing the turret in the nose of the aircraft would cause the fewest negative effects. However, the operational effectiveness of the beam control turret will not be known until it undergoes additional testing in 2002 in an operationally realistic environment. The laser exhaust system is another critical modification. The system must prevent the hot corrosive laser exhaust from damaging the bottom of the aircraft and other structural components made of conventional aluminum. The exhaust created by the laser will reach about 500 degrees Fahrenheit when it is ejected through the laser exhaust system on the bottom of the aircraft. This exhaust system must also undergo additional testing on the aircraft in 2002 to determine its operational effectiveness. Integrating the beam control system with the aircraft also poses a challenge for the Air Force. The Air Force must create a beam control system, consisting of complex software programs, moving telescopes, and sophisticated mirrors, that will compensate for the optical turbulence in which the system is operating and control the direction and size of the laser beam. In addition, the beam control system must be able to tolerate the various kinds of motions and vibrations that will be encountered in an aircraft environment. In deciding the on-board location of the beam control system’s components, the Air Force used data gathered by an extensive study of aircraft vibrations on the 747-400 Freighter. 
The beam control components are expected to be located in those areas of the aircraft that experience less intense vibrations and, to the extent possible, be shielded from vibrations and other aircraft motion. To date, the Air Force has not demonstrated how well a beam control system of such complexity can operate on an aircraft. The contractor has modeled the ABL’s beam control system on a brassboard but has not tested it on board an aircraft. The ABL program is a revolutionary weapon system concept. Although DOD has a long history with laser technologies, the ABL is its first attempt to design, develop, and install a multimegawatt laser on an aircraft. As such, the concept faces a number of technological challenges. A fundamental challenge is for the Air Force to accurately and reliably predict the level of optical turbulence that the ABL will encounter and then design the system to operate effectively in that turbulence. The Air Force will not have resolved that challenge until it has demonstrated whether there is a reliable correlation between its non-optical and optical turbulence measurements or, should such a correlation not exist, has gathered additional optical data, which may delay the ABL program. Whether relevant and reliable data are obtained through correlation or through additional optical measurements, the data are critical in assessing the appropriateness of the design specifications for turbulence. If the specifications need to be set higher, that should be done as soon as possible. Therefore, we recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following actions: Demonstrate as quickly as possible, but no later than the time when DOD decides whether to grant the ABL program the authority to proceed (currently scheduled for June 1998), the existence of a correlation between the optical and non-optical turbulence data. 
If a correlation between optical and non-optical data cannot be established, the Air Force should be required to gather additional optical data to accurately predict the turbulence levels the ABL may encounter, before being given the authority to proceed with the program as planned. Validate the appropriateness of the design specification for turbulence based on reliable data that are either derived from a correlation between optical and non-optical data or obtained through the collection of additional optical data. DOD concurred with both of our recommendations. DOD’s comments are reprinted in appendix I. DOD also provided technical comments that we incorporated in this report where appropriate. We reviewed and analyzed DOD, Air Force, ABL program office, and contractor documents and studies regarding various aspects of the ABL program. We discussed the ABL program with officials of the Office of the Under Secretary of Defense (Comptroller); the Office of the Under Secretary of Defense (Acquisition and Technology); the Air Combat Command; the ABL program office; the Air Force’s Phillips Laboratory; and the ABL Contractor team of Boeing, TRW, and Lockheed Martin. We also discussed selected aspects of the ABL program with a consultant to the ABL program office. We conducted our review from September 1996 to August 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the congressional committees that have jurisdiction over the matters discussed and to the Secretary of Defense; the Secretary of the Air Force; and the Director, Office of Management and Budget. We will make copies available to others on request. Please contact me at (202) 512-4841 if you or your staff have questions concerning this report. Major contributors to this report were Steven Kuhta, Ted Baird, Suzanne MacFarlane, and Rich Horiuchi. 
| Pursuant to a congressional request, GAO reviewed the status of the Airborne Laser (ABL) program, focusing on: (1) the way in which the ABL is expected to change theater missile defense; (2) assurances that the ABL will be able to operate effectively in the levels of optical turbulence that may be encountered in the geographical areas in which the system might be used; and (3) the technical challenges in developing an ABL system that will be compatible with the unique environment of an aircraft. 
GAO noted that: (1) the ABL program is the Department of Defense's (DOD) first attempt to design, develop, and install a multimegawatt laser on an aircraft and is expected to be DOD's first system to intercept missiles during the boost phase; (2) a key factor in determining whether the ABL will be able to successfully destroy a missile in its boost phase is the Air Force's ability to predict the levels of turbulence that the ABL is expected to encounter; (3) the Air Force has not shown that it can accurately predict the levels of turbulence the ABL is expected to encounter or that its technical requirements regarding turbulence are appropriate; (4) because the ABL is an optical weapons system, only optical measurements can measure the turbulence that will actually be encountered by the ABL laser beam; (5) the Air Force has no plans to take additional optical measurements and instead plans to take additional non-optical measurements to predict the severity of optical turbulence the ABL will encounter; (6) to ensure that the non-optical measurements can be validly applied to the ABL program, the Air Force must determine whether the non-optical measurements can be correlated to optical measurements; (7) until the Air Force can verify that its predicted levels of optical turbulence are valid, it will not be able to validate the ABL's design specifications for overcoming turbulence; (8) the Air Force has established a design specification for the ABL that is based on modeling techniques; (9) data collected by the program office indicate that the levels of turbulence that the ABL may encounter could be four times greater than the levels in which the system is being designed to operate; (10) DOD officials indicated that a more realistic design may not be achievable using current state-of-the-art technology; (11) in addition to the challenges posed by turbulence, developing and integrating a laser weapon system into an aircraft pose many technical challenges for the Air Force; 
(12) the Air Force must build a new laser that is able to contend with size and weight restrictions, motion and vibrations, and other factors unique to an aircraft environment and yet be powerful enough to sustain a killing force over a range of at least 500 kilometers; (13) the Air Force must create a beam control system that compensates for the optical turbulence in which the system is operating and controls the direction and size of the laser beam; and (14) because these challenges will not be resolved for several years, it is too early to accurately predict whether the ABL program will evolve into a viable missile defense system. |
Through fiscal year 1998, about $172 million has been allocated to the ACTD program and 48 projects have been approved. DOD’s budget request for fiscal year 1999 for the ACTD program is $116.4 million. An additional 10 to 15 projects are expected to be funded in fiscal year 1999. Under the current ACTD program, DOD builds prototypes to assess the military utility of mature technologies, which are used to reduce or avoid the time and effort usually devoted to technology development. Demonstrations that assess a prototype’s military utility are structured to be completed within 2 to 4 years and require the participation of field users (war fighters). ACTD projects are not acquisition programs. The ACTD program seeks to provide the war fighter with the opportunity to assess a prototype’s capability in realistic operational scenarios. From this demonstration, the war fighter can refine operational requirements, develop an initial concept of operation, and make a determination of the military utility of the technology before DOD decides whether the technology should enter into the normal acquisition process. Not all projects will be selected for transition into the normal acquisition process. The user can conclude that the technology (1) does not have sufficient military utility and that acquisition is not warranted or (2) has sufficient utility but that additional procurement is not necessary. Of the 11 ACTD projects completed as of August 1998, 2 were found to have insufficient utility to proceed further, 8 were found to have military utility but no further procurement was found to be needed at the time, and 1 was found to have utility and has transitioned to the normal acquisition process. ACTD funding is to be used to procure enough prototypes to conduct the basic demonstration of military utility. At the conclusion of the basic demonstration, ACTD projects are expected to provide a residual operational capability for the war fighter. 
Under the current practice, ACTD funding is also to be available to support continued use of ACTD prototypes that have military utility for a 2-year, post-demonstration period. The 2 years of funding is to support continued use by an operational unit and provide the time needed to separately budget for the acquisition of additional systems. Further, if the ACTD prototypes—such as missiles—will be consumed during the basic demonstration, additional prototypes are to be procured. As stated in the ACTD guidance, a key to successfully exploiting the results of the demonstration is to enter the appropriate phase of acquisition without loss of momentum. ACTDs are intended to shorten the acquisition cycle by reducing or eliminating technology development and maturation activities during the normal acquisition process. Further, DOD can concentrate more on technology integration and demonstration activities. Time and effort usually devoted to technology development can be significantly reduced or avoided and the subsequent acquisition process reduced accordingly, if the project is deemed to have sufficient military utility. ACTD candidates are nominated from a variety of sources within the defense community, including the Commanders in Chief, the Joint Chiefs of Staff, the Office of the Secretary of Defense agencies, the services, and the research and development laboratories. The candidates are then reviewed and assessed by staff from the Office of the Deputy Under Secretary of Defense (Advanced Technology). After this initial screening, the remaining candidates are further assessed by a panel of technology experts. The best candidates are then submitted to the Joint Requirements Oversight Council, which assesses their priority. The final determination of the candidates to be funded is made within the Office of the Deputy Under Secretary of Defense (Advanced Technology), with final approval by the Under Secretary of Defense (Acquisition and Technology). 
By limiting consideration to prototypes that feature mature technology, the ACTD program avoids the time and risks associated with technology development, concentrating instead on technology integration and demonstration activities. The information gained through the demonstration of the mature technology could provide a good jump start to the normal acquisition process, if the demonstration shows that the technology has sufficient military value. Time and effort usually devoted to technology development could be reduced or avoided and the acquisition process shortened accordingly. Program officials stated that they have a mechanism in place to ensure that only those projects using mature technology are allowed to become ACTDs. These officials explained that an ACTD candidate’s technology is assessed by high-ranking representatives from the services and the DOD science and technology community before candidates are selected. Program personnel stated that determining technology maturity is important before a candidate is selected because ACTD program funding is not intended to be used for technology development. According to program guidance, the ACTD funding is to be used for (1) costs incurred when existing technology programs are reoriented to support ACTD, (2) costs to procure additional assets for the basic ACTD demonstration, and (3) costs for technical support for 2 years of field operations following the basic ACTD demonstration. We were told that no ACTD money was to be used for technology development activities. However, the project selection process does not ensure that only mature technologies enter the ACTD program. We found examples where immature technologies were selected and technology development was taking place after the approval and start of the ACTD program. 
The current operations manager of the Combat Identification project, which began in fiscal year 1996, told us that one of his major concerns has been that some of the ACTD funding was being used for technology development, and not exclusively used for designing and implementing the assessment. However, during the ACTD project, technical or laboratory testing was still necessary to evaluate the acceptability of many of the 12 technologies included in the initial project. Eventually, 6 of the 12 technologies had to be terminated. According to the demonstration manager, 2 of the 6 technologies were terminated because they were immature. According to the manager, that is one of the reasons the project is currently behind schedule. Another example of the inclusion of immature technology occurred in the Outrider Unmanned Aerial Vehicle project. According to the management plan for the project, one of the individual technologies to be incorporated into the vehicle was a heavy fuel engine. According to a program official, it was later deemed that this individual technology was too immature and an alternate technology had to be used. However, trying to use this immature technology has already caused schedule slippage and cost overruns in the ACTD project. To complete the basic demonstration within the prescribed 2 to 4 year period, ACTDs typically use early prototypes. If the demonstrated technology is deemed to have sufficient military utility, many ACTD projects will still need to enter the normal acquisition process to complete product and concept development and testing to determine, for example, whether the system is producible and can meet the user’s suitability needs. These attributes of a system go beyond the ACTD’s demonstration of military utility to address whether the item can meet the full military requirement. Commercial items that do not require any further development could proceed directly to production. 
However, other non-software related ACTDs should enter the engineering and manufacturing development phase to proceed with product and concept development and testing. According to ACTD guidance, if further significant development is needed, a system might enter the development portion of the engineering and manufacturing development phase. However, the guidance states that, if the capability is adequate, the ACTD can directly enter production. The guidance does not specifically define what is considered an “adequate capability” to allow an ACTD system to enter low-rate production. In 1994, we reported on numerous instances of weapon systems that began production prematurely and later experienced significant operational effectiveness or suitability problems. In our best practices report, we reported that typically DOD programs allowed much more technology development to continue into the product development phase than is the case in commercial practices. Turbulence in program outcomes—in the form of production problems and associated cost and schedule increases—was the predictable consequence of DOD’s actions. In contrast, commercial firms gained more knowledge about a product’s technology, performance, and producibility much earlier in the product development process. Commercial firms consider not having this type of knowledge early in the acquisition process an unacceptable risk. In responding to that report, the Secretary of Defense stated that DOD is vigorously pursuing the adoption of such business practices. Specifically, he stated that DOD has taken steps to separate technology development from product development through the use of ACTDs. The ACTD guidance and DOD’s current practice do not appear to reflect this emphasis. 
In the case of the Predator ACTD, the one ACTD that has proceeded into production, DOD decided to enter the technology into production before proceeding with product and concept development and testing, thereby accepting programmatic risks that could offset the schedule and other benefits gained through the ACTD process. In the early operational assessment of the Predator’s ACTD demonstration, the Director, Operational Test and Evaluation, did not make a determination of the system’s potential operational effectiveness or suitability. However, the system was found to be deficient in several areas, including mission reliability, documentation, and pilot training. The assessment also noted that the ACTD demonstration was not designed to evaluate several other areas such as system survivability, supportability, target location accuracy, training, and staffing requirements. The basic ACTD demonstration may have clarified the Predator’s military utility, but it did not demonstrate its system requirements or its suitability. Thus, instead of using the knowledge acquired during the demonstration to complete the Predator’s development through the product and concept development and testing stages of acquisition, DOD allowed it to directly enter production. DOD’s practice is to procure sufficient ACTD prototypes to provide a 2-year residual capability. When DOD determines that the original prototypes will be consumed during the basic demonstration, it procures additional prototypes for potential use after the basic ACTD demonstration. However, these additional assets—like the basic demonstration prototypes—have not been independently tested to determine their effectiveness and suitability. Procuring additional ACTD prototypes before product and concept development and testing is completed risks wasting resources on the procurement of items that may not work as expected or may not have sufficient military utility. 
Representatives from the service test agencies did not support this practice and agreed that it had the potential for problems. Without a meaningful independent assessment of a product’s suitability, effectiveness, and survivability, users cannot be assured that it will operate as intended and is supportable. Congress has expressed concern about the amount of equipment being procured beyond what is needed to conduct the basic ACTD demonstration. Its concern is that DOD is making an excessive commitment to production before military utility is demonstrated and before appropriate concepts of operation are developed. For example, DOD plans to procure 192 Enhanced Fiber Optic Guided missiles at an estimated cost of $27 million and 144 Line-of-Sight Anti-Tank missiles at an estimated cost of $28 million beyond the quantities of missiles required for the ACTD demonstrations—64 and 30 missiles, respectively. The production of these additional missiles will follow the production of the missiles needed for the basic demonstration and will continue on a regular basis throughout the 2-year post-demonstration period. If the prototypes are deemed to have sufficient military utility, the service involved will be expected to fund the production of additional missiles beyond these quantities. By establishing a regular pattern of procurement in this way, DOD risks committing to a continuing production program before a determination is made about the technology’s military utility and before there is assurance that the system will meet validated requirements and be supportable. The strength of the ACTD program is in conducting basic demonstrations of mature technology in military applications before entering the normal acquisition process. This practice could significantly reduce or eliminate the time and effort that technology development adds to the acquisition process. For this to occur, it is essential that DOD use only mature technology in its ACTDs. 
DOD’s criteria for selecting technologies for ACTD candidates should be clarified to ensure the selection of mature technology with few, if any, exceptions. Further, ACTDs may not, by themselves, result in an effective and safe deployment of military capability. It is important that product and concept development as well as test and evaluation processes be allowed to proceed before the service commits to the production of the demonstrated technology. If an ACTD project is shown to have military value, the normal acquisition processes can and should be tailored—but not bypassed—before DOD begins production. Lastly, emphasizing the need to complete concept and product development and testing before procuring more items than needed for the basic demonstration would reduce the risk of prematurely starting production. We recommend that the Secretary of Defense clarify the ACTD program guidance to (1) ensure the use of mature technology with few, if any, exceptions and (2) describe when transition to the development phase of the acquisition cycle is necessary and the types of development activity that may be appropriate. Further, we recommend that the Secretary of Defense limit the number of prototypes to be procured to the quantities needed for early user demonstrations of mature technology until the item’s product and concept development and testing have been completed. “. . . new technologies proposed for incorporation into an ACTD should not be in the 6.1 (basic research) or 6.2 (applied research) budget categories. Furthermore, the technologies must have been successfully demonstrated at the subsystem or component level and at the required performance level prior to the start of the ACTD.” While this guidance is improved over previous versions, the new guidance permits the selection of immature technology—even as the primary or core technology—provided that it is demonstrated prior to the ACTD demonstration. 
Also, some recent ACTD projects have been approved without the technologies having been identified. Moreover, the new guidance goes on to describe several types of exceptions under which immature technologies may be permitted to be used in an ACTD. As our report states, the use of immature technologies has delayed programs, and we continue to believe DOD needs to focus the ACTD program on the use of mature technology with few, if any, exceptions. DOD also agreed that some but not all ACTDs may require additional product and concept development before proceeding into production. DOD states that a mandatory engineering and manufacturing development phase would not be appropriate for all ACTD projects. We agree; however, the existing ACTD guidance focuses on the transition directly to production and provides too little guidance concerning a possible transition to development. As stated in our recommendation, the guidance should specify when a transition to development may be appropriate and the kinds of developmental activities that would be involved. Finally, DOD agreed that the number of ACTD prototypes to be procured should be limited until the Under Secretary can confirm that sufficient testing has been satisfactorily completed to support any additional procurement. We agree with DOD that test results should form the basis for starting limited procurement. However, DOD’s equating a determination of military utility (based on an ACTD demonstration) with a determination of a system’s readiness to begin production is inappropriate because production decisions require more testing data. We have long held the view and have consistently recommended that DOD use extreme caution to avoid premature commitments to production. 
To determine the adequacy of the ACTD program’s selection criteria in assessing technology maturity and guidance for transitioning to the normal acquisition process, we reviewed existing program guidance, published reports, the Office of the Inspector General’s April 1997 ACTD report, and the recommendations of the 1986 Packard commission and the 1996 Defense Science Board. We discussed selection criteria, transitioning to the acquisition process, and all 34 of the individual ACTD programs approved through fiscal year 1997 with representatives from the Office of the Deputy Under Secretary of Defense (Advanced Technology), Washington, D.C.; the Army’s Deputy Chief of Staff for Operations and Plans, Office of Science and Technology Programs, Washington, D.C.; the Air Force’s Director for Operational Requirements, Rosslyn, Virginia; the Navy’s Requirements and Acquisition Support Branch, Washington, D.C.; the Marine Corps’ Combat Development Command Office of Science and Innovation, Quantico, Virginia; the Joint Staff’s Acquisition and Technology Division and Requirements Assessment Integration Division, Washington, D.C.; and the Office of the Commander in Chief, U.S. Atlantic Command, Norfolk, Virginia. We discussed the issue of procuring additional residual assets for early deployment with representatives from DOD’s Office of the Director, Operational Test and Evaluation, Washington, D.C.; the Army’s Test and Evaluation Management Agency, Washington, D.C.; the Army’s Operational Test and Evaluation Command, Alexandria, Virginia; the Marine Corps’ Operational Test and Evaluation Activity, Quantico, Virginia; the Air Force’s Test and Evaluation Directorate, Washington, D.C.; and the Navy’s Commander, Operational Test and Evaluation Force, Norfolk, Virginia. We conducted our review from September 1997 to July 1998 in accordance with generally accepted government auditing standards. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to other interested congressional committees; the Secretaries of Defense, the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. The major contributors to this report were Bill Graveline, Laura Durland, and John Randall. The following are GAO’s comments on the Department of Defense’s (DOD) letter, dated August 31, 1998. “. . . new technologies proposed for incorporation into an ACTD should not be in the 6.1 (basic research) or 6.2 (applied research) budget categories. Furthermore, the technologies must have been successfully demonstrated at the subsystem or component level and at the required performance level prior to the start of the ACTD.” “. . .Strategies and approaches are described to facilitate transitioning from an ACTD to the acquisition process as defined in DOD 5000.2R. The suggested approaches are based on lessons learned. The focus of the suggestions are ACTDs that are planned—if successful—to enter the acquisition process at the start of LRIP.” Although there is a basic recognition that the transition to development may be possible, the bulk of the guidance is on how and when to transition to production. As pointed out in the report, the guidance does not describe when a transition to development may be appropriate or what types of development activity would be involved. In our view, the guidance needs to be more balanced between the possibility of transition to development and the transition of ACTD projects directly to production. 5. 
As discussed in the report, the independent operational testing agencies are observers in the ACTD demonstrations and not active participants. While the Office of the Director of Operational Test and Evaluation was an observer during the Predator demonstration, a determination was not made that Predator was potentially effective and suitable. 6. We agree that ACTDs address the technology’s suitability. However, ACTDs address suitability only in a very general sense, and extensive data are not collected on the system’s reliability, maintainability, and other aspects of suitability needed to support production decisions. 7. As our report states, the Predator was rushed into low-rate initial production prematurely given the limited amount of testing conducted at that time and the problems that were uncovered during that limited testing. 8. DOD’s equating a determination of military utility (based on an ACTD demonstration) with a determination of a system’s readiness to begin production is inappropriate because production decisions require more testing data. During our review, we noted that sufficient information was not obtained from an ACTD demonstration to make a commitment to limited production. Commercial practice would dictate that much more information be obtained about a product’s effectiveness, suitability, producibility, or supportability before such a commitment is made. We believe the ACTD guidance needs to be more balanced and should anticipate that ACTD prototypes may need to undergo more product and concept development and testing prior to production. We have long held the view and have consistently recommended that DOD use extreme caution to avoid premature commitments to production. 9. We are not suggesting that a lengthy development phase be conducted on all ACTD products nor, as DOD appears to suggest, that an ACTD prototype may be ready to start limited production immediately after its basic demonstration. 
As DOD stated in its intent to establish the ACTD program, we believe the benefit of the ACTD process is in eliminating or reducing technology development, not in making early commitments to production or in postponing product and concept development and testing activities until after production starts. 10. While ACTD demonstrations are performed in operational environments, they are not operational tests. During the course of our work, we held several discussions with officials from the operational test community. Those officials were in favor of the user demonstrations featured in the ACTD program, but none considered those demonstrations as substitutes for operational testing because of their informality, lack of structure, and the lack of a defined requirement by which to measure performance. 11. DOD appears not to recognize the very real possibility that the ACTD demonstration may find the technology in question to have little or no military utility or to be unaffordable in today’s budgetary and security environment. In fact, due to budget constraints, the Army was forced to prioritize its procurement programs, and the planned procurement funding for Enhanced Fiber Optic Guided missiles has been reallocated. 12. While we agree with DOD that test results should form the basis for starting limited procurement, the testing needed goes beyond the basic demonstration of military utility provided by the ACTD program. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | Pursuant to a congressional request, GAO reviewed the current Advanced Concept Technology Demonstration (ACTD) program, focusing on: (1) whether the selection process includes criteria that are adequate to ensure that only mature technologies are selected for ACTD prototypes; (2) whether guidance on transitioning to the normal acquisition process ensures that a prototype appropriately completes product and concept development and testing before entering production; and (3) the Department of Defense's (DOD) current practice of procuring more ACTD prototypes than needed to assess the military utility of a mature technology. 
GAO noted that: (1) through the determination of military value of mature technologies and their use in the acquisition process, ACTDs have the potential to reduce the time to develop and acquire weapon systems; (2) however, several aspects of the ACTD program can be improved; (3) DOD's process for selecting ACTD candidates does not include adequate criteria for assessing the maturity of the proposed technology and has resulted in the approval of ACTD projects that included immature technology; (4) DOD has improved its guidance on the maturity of the technologies to be used in ACTD projects but the revised guidance describes several types of exceptions under which immature technologies may be used; (5) where DOD approves immature technologies as ACTD program candidates and time is spent conducting developmental activities, the goal of reduced acquisition cycle time will not be realized; (6) further, guidance on entering technologies into the normal acquisition process is not sufficient to ensure that a prototype completes product and concept development and testing before entering production; (7) the guidance does not mention the circumstances when transition to development may be appropriate or the kinds of developmental activities that may be appropriate; (8) while commercial items that do not require any further development could proceed directly to production, many ACTDs may still need to enter the engineering and manufacturing development phase to proceed with product and concept development and testing before production begins; (9) through the ACTD early user demonstration, DOD is expected to obtain more detailed knowledge about its technologies before entering into the acquisition process; (10) however, in the one case in which an ACTD has proceeded into production, DOD made that decision before completing product and concept development and testing, thereby accepting programmatic risks that could offset the schedule and other benefits gained through the 
ACTD process; (11) DOD's current practice of procuring prototypes beyond those needed for the basic ACTD demonstration and before completing product and concept development and testing is unnecessarily risky; and (12) this practice risks wasting resources on the procurement of items that may not work as expected or may not have sufficient military utility and risks a premature and excessive commitment to production. |
Despite some progress in addressing staffing shortfalls since 2006, State’s diplomatic readiness remains at risk for two reasons: persistent staffing vacancies and experience gaps at key hardship posts that are often on the forefront of U.S. policy interests. First, as of September 2008, State had a 17 percent average vacancy rate at the posts of greatest hardship (which are posts where staff receive the highest possible hardship pay). Posts in this category include such places as Peshawar, Pakistan, and Shenyang, China. This 17 percent vacancy rate was nearly double the average vacancy rate of 9 percent at posts with no hardship differentials. Second, many key hardship posts face experience gaps due to a higher rate of staff filling positions above their own grades (see table 1). As of September 2008, about 34 percent of mid-level generalist positions at posts of greatest hardship were filled by officers in such above-grade assignments—15 percentage points higher than the rate for comparable positions at posts with no or low differentials. At posts we visited during our review, we observed numerous officers working in positions above their rank. For example, in Abuja, Nigeria, more than 4 in every 10 positions were staffed by officers in assignments above grade, including several employees working in positions two grades above their own. Further, to fill positions in Iraq and Afghanistan, State has frequently assigned officers to positions above their grade. As of September 2008, over 40 percent of officers in Iraq and Afghanistan were serving in above-grade assignments. Several factors contribute to gaps at hardship posts. First, State continues to have fewer officers than positions, a shortage compounded by the personnel demands of Iraq and Afghanistan, which have resulted in staff cutting their tours short to serve in these countries. As of April 2009, State had about 1,650 vacant Foreign Service positions in total. 
Second, State faces a persistent mid-level staffing deficit that is exacerbated by continued low bidding on hardship posts. Third, although State’s assignment system has prioritized the staffing of hardship posts, it does not explicitly address the continuing experience gap at such posts, many of which are strategically important, yet are often staffed with less experienced officers. Staffing and experience gaps can diminish diplomatic readiness in several ways, according to State officials. For example, gaps can lead to decreased reporting coverage and loss of institutional knowledge. In addition, gaps can lead to increased supervisory requirements for senior staff, detracting from other critical diplomatic responsibilities. During the course of our review we found a number of examples of the effect of these staffing gaps on diplomatic readiness, including the following. The economic officer position in Lagos, whose responsibility is solely focused on energy, oil, and natural gas, was not filled in the 2009 cycle. The incumbent explained that, following his departure, his reporting responsibilities will be split up between officers in Abuja and Lagos. He said this division of responsibilities would diminish the position’s focus on the oil industry and potentially lead to the loss of important contacts within both the government ministries and the oil industry. An official told us that a political/military officer position in Russia was vacant because of the departure of the incumbent for a tour in Afghanistan, and the position’s portfolio of responsibilities was divided among other officers in the embassy. According to the official, this vacancy slowed negotiation of an agreement with Russia regarding military transit to Afghanistan. The consular chief in Shenyang, China, told us he spends too much time helping entry-level officers adjudicate visas and, therefore, less time managing the section. 
The ambassador to Nigeria told us that spending time helping officers working above grade is a burden and interferes with policy planning and implementation. A 2008 OIG inspection of N’Djamena, Chad, reported that the entire front office was involved in mentoring entry-level officers, which was an unfair burden on the ambassador and deputy chief of mission, given the challenging nature of the post. State uses a range of incentives to staff hardship posts at a cost of millions of dollars a year, but their effectiveness remains unclear due to a lack of evaluation. Incentives to serve in hardship posts range from monetary benefits to changes in service and bidding requirements, such as reduced tour lengths at posts where dangerous conditions prevent some family members from accompanying officers. In a 2006 report on staffing gaps, GAO recommended that State evaluate the effectiveness of its incentive programs for hardship post assignments. In response, State added a question about hardship incentives to a recent employee survey. However, the survey does not fully meet GAO’s recommendation for several reasons, including that State did not include several incentives in the survey and did not establish specific indicators of progress against which to measure the survey responses over time. State also did not comply with a 2005 legal requirement to assess and report to Congress on the effectiveness of increasing hardship and danger pay from 25 percent to 35 percent in filling “hard to fill” positions. The lack of an assessment of the effectiveness of the danger and hardship pay increases in filling positions at these posts, coupled with the continuing staffing challenges in these locations, makes it difficult to determine whether these resources are properly targeted. 
Recent legislation increasing Foreign Service officers’ basic pay will increase the cost of existing incentives, thereby heightening the importance that State evaluate its incentives for hardship post assignments to ensure resources are effectively targeted and not wasted. Although State plans to address staffing gaps by hiring more officers, the department acknowledges it will take years for these new employees to gain the experience they need to be effective mid-level officers. In the meantime, this experience gap will persist, since State’s staffing system does not explicitly prioritize the assignment of at-grade officers to hardship posts. Moreover, despite State’s continued difficulty attracting qualified staff to hardship posts, the department has not systematically evaluated the effectiveness of its incentives for hardship service. Without a full evaluation of State’s hardship incentives, the department cannot obtain valuable insights that could help guide resource decisions to ensure it is most efficiently and effectively addressing gaps at these important posts. State continues to have notable gaps in its foreign language capabilities, which could hinder U.S. overseas operations. As of October 31, 2008, 31 percent of officers in all worldwide language-designated positions did not meet both the foreign language speaking and reading proficiency requirements for their positions, up slightly from 29 percent in 2005. In particular, State continues to face foreign language shortfalls in areas of strategic interest—such as the Near East and South and Central Asia, where about 40 percent of officers in language-designated positions did not meet requirements. Gaps were notably high in Afghanistan, where 33 of 45 officers in language-designated positions (73 percent) did not meet the requirement, and in Iraq, with 8 of 14 officers (57 percent) lacking sufficient language skills. 
State has defined its need for staff proficient in some languages as “supercritical” or “critical,” based on criteria such as the difficulty of the language and the number of language-designated positions in that language, particularly at hard-to-staff posts. Shortfalls in supercritical needs languages, such as Arabic and Chinese, remain at 39 percent, despite efforts to recruit individuals with proficiency in these languages (see figure 1). In addition, more than half of the 739 Foreign Service specialists—staff who perform security, technical, and other support functions—in language-designated positions do not meet the requirements. For example, 53 percent of regional security officers do not speak and read at the level required by their positions. When a post fills a position with an officer who does not meet the requirements, it must request a language waiver for the position. In 2008, the department granted 282 such waivers, covering about 8 percent of all language-designated positions. Past reports by GAO, State’s Office of the Inspector General, the Department of Defense, and various think tanks have concluded that foreign language shortfalls could be negatively affecting U.S. national security, diplomacy, law enforcement, and intelligence-gathering efforts. Foreign Service officers we spoke to provided a number of examples of the effects of not having the required language skills, including the following. Consular officers at a post we visited said that because of a lack of language skills, they make adjudication decisions based on what they “hope” they heard in visa interviews. A security officer in Cairo said that without language skills, officers do not have any “juice”—that is, the ability to influence people they are trying to elicit information from. According to another regional security officer, the lack of foreign language skills may hinder intelligence gathering because local informants are reluctant to speak through locally hired interpreters. 
One ambassador we spoke to said that without language proficiency—which helps officers gain insight into a country—the officers are not invited to certain events and cannot reach out to broader, deeper audiences. A public affairs officer at another post said that the local media does not always translate embassy statements accurately, complicating efforts to communicate with audiences in the host country. For example, he said the local press translated a statement by the ambassador in a more pejorative sense than was intended, which damaged the ambassador’s reputation and took several weeks to correct. State’s current approach for meeting its foreign language proficiency requirements involves an annual review process to determine language-designated positions, training, recruitment, and incentives; however, the department faces several challenges to these efforts, particularly staffing shortages. State’s annual language designation process results in a list of positions requiring language skills. However, the views expressed by the headquarters and overseas officials we met with suggest State’s designated language proficiency requirements do not necessarily reflect the actual language needs of the posts. For example, because of budgetary and staffing issues, some overseas posts tend to request only the positions they think they will receive rather than the positions they actually need. Moreover, officers at the posts we visited questioned the validity of the relatively low proficiency level required for certain positions, citing the need for a higher proficiency level. For example, an economics officer at one of the posts we visited, who met the post’s required proficiency level, said her level of proficiency did not provide her with language skills needed to discuss technical issues, and the officers in the public affairs section of the same post said that proficiency level was not sufficient to effectively explain U.S. positions in the local media. 
State primarily uses language training to meet its foreign language requirements, and does so mostly at the Foreign Service Institute in Arlington, Virginia, but also at field schools and post language training overseas. In 2008, the department reported a training success rate of 86 percent. In addition, the department recruits personnel with foreign language skills through special incentives offered under its critical needs language program and pays bonuses to encourage staff to study and maintain a level of proficiency in certain languages. The department has hired 445 officers under this program since 2004. However, various challenges limit the effectiveness of these efforts. According to State, two main challenges are overall staffing shortages, which limit the number of staff available for language training, and the recent increase in language-designated positions. The staffing shortages are exacerbated by officers curtailing their tours at posts, such as to staff the missions in Iraq and Afghanistan, which has led to a decrease in the number of officers in the language training pipeline. For example, officials in the Bureau of East Asian and Pacific Affairs told us of an officer who received nearly a year of language training in Vietnamese, yet cancelled her tour in Vietnam to serve in Iraq. These departures often force their successors to arrive at post early without having completed language training. As part of its effort to address these staffing shortfalls, in fiscal year 2009, State requested and received funding for 300 new positions to build a training capacity, intended to reduce gaps at posts while staff are in language training. State officials said that if the department’s fiscal year 2010 request for 200 additional positions is approved, the department’s language gaps will begin to close in 2011; however, State has not indicated when its foreign language staffing requirements will be completely met. 
Another challenge is the widely held perception among Foreign Service officers that State’s promotion system does not consider time spent in language training when evaluating officers for promotion, which may discourage officers from investing the time required to achieve proficiency in certain languages. Although State Human Resources officials dispute this perception, the department has not conducted a statistically valid assessment of the impact of language training on promotions. State’s current approach to meeting its foreign language proficiency requirements has not closed the department’s persistent language proficiency gaps and reflects, in part, a lack of comprehensive strategic direction. Common elements of comprehensive workforce planning—described by GAO as part of a large body of work on human capital management—include setting strategic direction that includes measurable performance goals and objectives and funding priorities, determining critical skills and competencies that will be needed in the future, developing an action plan to address gaps, and monitoring and evaluating the success of the department’s progress toward meeting goals. In the past, State officials have asserted that because language is such an integral part of the department’s operations, a separate planning effort for foreign language skills was not needed. More recently, State officials have said the department’s plan for meeting its foreign language requirements is spread throughout a number of documents that address these requirements, including the department’s Five-Year Workforce Plan. However, these documents are not linked to each other and do not contain measurable goals, objectives, resource requirements, and milestones for reducing the foreign language gaps. We believe that a more comprehensive strategic approach would help State more effectively guide and assess progress in meeting its foreign language requirements.
In our recently issued reports, we made several recommendations to help State address its staffing gaps and language proficiency shortfalls. To ensure that hardship posts are staffed commensurate with their stated level of strategic importance and that resources are properly targeted, GAO recommends the Secretary of State (1) take steps to minimize the experience gap at hardship posts by making the assignment of experienced officers to such posts an explicit priority consideration, and (2) develop and implement a plan to evaluate incentives for hardship post assignments. To address State’s long-standing foreign language proficiency shortfalls, we recommend that the Secretary of State develop a comprehensive strategic plan with measurable goals, objectives, milestones, and feedback mechanisms that links all of State’s efforts to meet its foreign language requirements. State generally agreed with our findings, conclusions, and recommendations and described several initiatives that address elements of the recommendations. In addition, State recently convened an inter-bureau language working group, which will focus on and develop an action plan to address GAO’s recommendations. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have at this time. For questions regarding this testimony, please contact Jess T. Ford at (202) 512-4268 or [email protected]. Individuals making key contributions to this statement include Godwin Agbara and Anthony Moran, Assistant Directors; Robert Ball; Joseph Carney; Aniruddha Dasgupta; Martin de Alteriis; Brian Hackney; Gloria Hernandez-Saunders; Richard Gifford Howland; Grace Lui; and La Verne Tharpes. Department of State: Comprehensive Plan Needed to Address Persistent Foreign Language Shortfalls. GAO-09-955. Washington, D.C.: September 17, 2009.
Department of State: Additional Steps Needed to Address Continuing Staffing and Experience Gaps at Hardship Posts. GAO-09-874. Washington, D.C.: September 17, 2009.
State Department: Staffing and Foreign Language Shortfalls Persist Despite Initiatives to Address Gaps. GAO-07-1154T. Washington, D.C.: August 1, 2007.
U.S. Public Diplomacy: Strategic Planning Efforts Have Improved, but Agencies Face Significant Implementation Challenges. GAO-07-795T. Washington, D.C.: April 26, 2007.
Department of State: Staffing and Foreign Language Shortfalls Persist Despite Initiatives to Address Gaps. GAO-06-894. Washington, D.C.: August 4, 2006.
Overseas Staffing: Rightsizing Approaches Slowly Taking Hold but More Action Needed to Coordinate and Carry Out Efforts. GAO-06-737. Washington, D.C.: June 30, 2006.
U.S. Public Diplomacy: State Department Efforts to Engage Muslim Audiences Lack Certain Communication Elements and Face Significant Challenges. GAO-06-535. Washington, D.C.: May 3, 2006.
Border Security: Strengthened Visa Process Would Benefit from Improvements in Staffing and Information Sharing. GAO-05-859. Washington, D.C.: September 13, 2005.
State Department: Targets for Hiring, Filling Vacancies Overseas Being Met, but Gaps Remain in Hard-to-Learn Languages. GAO-04-139. Washington, D.C.: November 19, 2003.
Foreign Affairs: Effective Stewardship of Resources Essential to Efficient Operations at State Department, USAID. GAO-03-1009T. Washington, D.C.: September 4, 2003.
State Department: Staffing Shortfalls and Ineffective Assignment System Compromise Diplomatic Readiness at Hardship Posts. GAO-02-626. Washington, D.C.: June 18, 2002.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses U.S. diplomatic readiness, and in particular the staffing and foreign language challenges facing the Foreign Service. The Department of State (State) faces an ongoing challenge of ensuring it has the right people, with the right skills, in the right places overseas to carry out the department's priorities. 
In particular, State has long had difficulty staffing its hardship posts overseas, places like Beirut and Lagos, where conditions are difficult and sometimes dangerous because of harsh environments and extreme living conditions that often entail pervasive crime or war, but which are nonetheless integral to foreign policy priorities and need a full complement of qualified staff. State has also faced persistent shortages of staff with critical language skills, despite the importance of foreign language proficiency in advancing U.S. foreign policy and economic interests overseas. In recent years, GAO has issued a number of reports on human capital issues that have hampered State's ability to carry out the President's foreign policy objectives. This testimony discusses (1) State's progress in addressing staffing gaps at hardship posts, and (2) State's efforts to meet its foreign language requirements. Despite a number of steps taken over a number of years, the State Department continues to face persistent staffing and experience gaps at hardship posts, as well as notable shortfalls in foreign language capabilities. A common element of these problems has been a longstanding staffing and experience deficit, which has both contributed to the gaps at hardship posts and fueled the language shortfall by limiting the number of staff available for language training. State has undertaken several initiatives to address these shortages, including multiple staffing increases intended to fill the gaps. However, the department has not undertaken these initiatives in a comprehensive and strategic manner. As a result, it is unclear when the staffing and skill gaps that put diplomatic readiness at risk will close. |
DOD and NASA build costly, complex systems that serve a variety of national security and science, technology, and space exploration missions. Within DOD, the Air Force’s Space and Missile Systems Center is responsible for acquiring most of DOD’s space systems; however, the Navy is also acquiring a replacement satellite communication system. MDA, also within DOD, is responsible for developing, testing, and fielding an integrated, layered ballistic missile defense system (BMDS) to defend against all ranges of enemy ballistic missiles in all phases of flight. The major projects that NASA undertakes range from highly complex and sophisticated space transportation vehicles, to robotic probes, to satellites equipped with advanced sensors to study the Earth. Requirements for government space systems can be more demanding than those of the commercial satellite and consumer electronics industry. For instance, DOD typically has more demanding standards for radiation-hardened parts, such as microelectronics, which are designed and fabricated with the specific goal of enduring the harshest space radiation environments, including nuclear events. Companies typically need to create separate production lines and in some cases special facilities. In the overall electronics market, military and NASA business is considered a niche market. Moreover, over time, government space and missile systems have increased in complexity, partly as a result of advances in commercially driven electronics technology and subsequent obsolescence of mature high-reliability parts. Systems are using more and increasingly complex parts, requiring more stringent design verification and qualification practices. In addition, acquiring qualified parts from a limited supplier base has become more difficult as suppliers focus on commercial markets at the expense of the government space market—which requires stricter controls and proven reliability. 
Further, because DOD and NASA’s space systems cannot usually be repaired once they are deployed, an exacting attention to parts quality is required to ensure that they can operate continuously and reliably for years at a time through the harsh environmental conditions of space. Similarly, ballistic missiles that travel through space after their boost phase to reach their intended targets are important for national security and also require reliable and dependable parts. These requirements drive designs that depend on reliable parts, materials, and processes that have passed CDRs, been fully tested, and demonstrated long life and tolerance to the harsh environmental conditions of space. There have been dramatic shifts in how parts for space and missile defense systems have been acquired and overseen. For about three decades, until the 1990s, government space and missile development based its quality requirements on a military standard known as MIL-Q-9858A. This standard required contractors to establish a quality program with documented procedures and processes that are subject to approval by government representatives throughout all areas of contract performance. Quality is theoretically ensured by requiring both the contractor and the government to monitor and inspect products. MIL-Q-9858A and other standards—collectively known as military specifications—were used by DOD and NASA to specify the manufacturing processes, materials, and testing needed to ensure that parts would meet the quality and reliability standards needed to perform in and through space. In the 1990s, concerns about cost and the need to introduce more innovation brought about acquisition reform efforts that loosened a complex and often rigid acquisition process and shifted key decision-making responsibility—including management and oversight for parts, materials, and processes—to contractors.
This period, however, was marked by continued problematic acquisitions that ultimately resulted in sharp increases in cost, schedule, and quality problems. For DOD, acquisition reform for space systems was referred to as Total System Performance Responsibility (TSPR). Under TSPR, program managers’ oversight was reduced and key decision-making responsibilities were shifted onto the contractor. In May 2003, a report of the Defense Science Board/Air Force Scientific Advisory Board Joint Task Force stated that the TSPR policy marginalized the government program management role and replaced traditional government “oversight” with “insight.” In 2006, a retired senior official responsible for testing in DOD stated that “TSPR relieved development contractors of many reporting requirements, including cost and technical progress, and built a firewall around the contractor, preventing government sponsors from properly overseeing expenditure of taxpayer dollars.” We found that TSPR reduced government oversight and led to major reductions in various government capabilities, including cost-estimating and systems-engineering staff. MDA chose to pursue the Lead Systems Integrator (LSI) approach as part of its acquisition reform effort. The LSI approach used a single contractor responsible for developing and integrating a system of systems within a given budget and schedule. We found in 2007 that a proposal to use an LSI approach on any new program should be seen as a risk at the outset, not because it is conceptually flawed, but because it indicates that the government may be pursuing a solution that it does not have the capacity to manage. Within NASA, a similar approach called “faster, better, cheaper” was intended to help reduce mission costs, improve efficiency, and increase scientific results by conducting more and smaller missions in less time. 
The approach was intended to stimulate innovative development and application of technology, streamline policies and practices, and energize and challenge a workforce to successfully undertake new missions in an era of diminishing resources. We found that while NASA had many successes, the failures of two Mars probes revealed limits to this approach, particularly in terms of NASA’s ability to learn from past mistakes. As DOD and NASA moved away from military specifications and standards, so did suppliers. According to an Aerospace Corporation study, both prime contractors and the government space market lost insight and traceability into parts as suppliers moved from having to meet military specifications and standards to an environment where the prime contractor would ensure that the process used by the supplier would yield a quality part. During this time, downsizing and tight budgets also eroded core skills, giving the government less insight, with fewer people to track problems and less oversight into manufacturing details. As DOD and NASA experienced considerable cost, schedule, and performance problems with major systems in the late 1990s and early 2000s, independent government-sponsored reviews concluded that the government had ceded too much control to contractors during acquisition reform. As a result, in the mid-to-late 2000s, DOD and NASA reached broad consensus that the government needed to return to a lifecycle mission assurance approach aimed at ensuring mission success. For example, MDA issued its Mission Assurance Provisions (MAP) for acquisition of mission- and safety-critical hardware and software in October 2006. The MAP is to assist in improving MDA’s acquisition activities through the effective application of critical best practices for quality, safety, and mission assurance.
In December 2008, DOD updated its acquisition process, which includes government involvement in the full range of requirements, design, manufacture, test, operations, and readiness reviews. Also in the last decade, DOD and NASA have developed policies and procedures aimed at preventing parts quality problems. For example, policies at each agency set standards to require the contractor to establish control plans related to parts, materials, and processes. Policies at the Air Force, MDA, and the NASA component we reviewed also establish minimum quality and reliability requirements for electronic parts—such as capacitors, resistors, connectors, fuses, and filters—and set standards to require the contractor to select materials and processes to ensure that the parts will perform as intended in the environment where they will function, considering the effects of, for example, static electricity, extreme temperature fluctuations, solar radiation, and corrosion. In addition, DOD and NASA have developed plans and policies related to counterfeit parts control that set standards to require contractors to take certain steps to prevent and detect counterfeit parts and materials. Table 1 identifies the major policies related to parts quality at DOD and NASA. Government policies generally require various activities related to the selection and testing of parts, materials, and processes. It is the prime contractor’s responsibility to determine how the requirements will be managed and implemented, including the selection and management of subcontractors and suppliers. In addition, it is the government’s responsibility to provide sufficient oversight to ensure that parts quality controls and procedures are in place and rigorously followed. Finally, DOD and NASA assign quality and mission assurance personnel to their programs to conduct on-site audits at contractor facilities.
Table 2 illustrates the typical roles of the government and the prime contractor in ensuring parts quality. DOD and NASA also have their own oversight activities that contribute to system quality. DOD has on-site quality specialists within the Defense Contract Management Agency and the military services, MDA has its Mission Assurance program, and NASA has its Quality Assurance program. Each activity aims to identify quality problems and ensure the on-time, on-cost delivery of quality products to the government through oversight of manufacturing and through supplier management activities, selected manufacturing activities, and final product inspections prior to acceptance. Likewise, prime contractors employ quality assurance specialists and engineers to assess the quality and reliability of both the parts they receive from suppliers and the overall weapon system. In addition, DOD and NASA have access to one or more of the following databases used to report deficient parts: the Product Data Reporting and Evaluation Program (PDREP), the Joint Deficiency Reporting System (JDRS), and the Government Industry Data Exchange Program (GIDEP). Through these systems, the government and industry participants share information on deficient parts. Parts quality problems reported by each program affected all 21 programs we reviewed at DOD and NASA and in some cases contributed to significant cost overruns, schedule delays, and reduced system reliability and availability. In most cases, problems were associated with electronic parts, versus mechanical parts or materials. Moreover, in several cases, parts problems were discovered late in the development cycle and, as such, tended to have more significant cost and schedule consequences. Table 3 identifies the cost and schedule effects of parts quality problems for the 21 programs we reviewed.
The costs in this table are the cumulative costs of all the parts quality problems that the programs identified as most significant as of August 2010 and do not necessarily reflect cost increases to the program’s total costs. In some cases, program officials told us that they do not track the cost effects of parts quality problems or that it was too early to determine the effect. The schedule effect is the cumulative total of months it took to resolve a problem. Unless the problems affected a schedule milestone such as launch date, the total number of months may reflect problems that were concurrent and may not necessarily reflect delays to the program’s schedule. The programs we reviewed are primarily experiencing quality problems with electronic parts that are associated with electronic assemblies, such as computers, communication systems, and guidance systems, critical to the system operations. Based on our review of 21 programs, 64.7 percent of the parts quality problems were associated with electronic parts, 14.7 percent with mechanical parts, and 20.6 percent with materials used in manufacturing. In many cases, programs experienced problems with the same parts and materials. Figure 3 identifies the distribution of quality problems across electronic parts, mechanical parts, and materials. For electronic parts, seven programs reported problems with capacitors, a part that is widely used in electronic circuits. Multiple programs also reported problems with printed circuit boards, which are used to support and connect electronic components. While printed circuit boards range in complexity and capability, they are used in virtually all but the simplest electronic devices. As with problems with electronic parts, multiple programs also experienced problems with the same materials. For example, five programs reported problems with titanium that did not meet requirements.
In addition, two programs reported problems with four different parts manufactured with pure tin, a material that is prohibited in space because it poses a reliability risk to electronics. Figure 4 identifies examples of quality problems with parts and materials that affected three or more programs. While parts quality problems affected all of the programs we reviewed, problems found late in development—during final integration and testing at the instrument and system level—had the most significant effect on program cost and schedule. As shown in figure 5, part screening, qualification, and testing typically occur during the final design phase of spacecraft development. When parts problems are discovered during this phase, they are sometimes more easily addressed without major consequences to a development effort since fabrication of the spacecraft has not yet begun or is just in the initial phases. In several of the cases we reviewed, however, parts problems were discovered during instrument and system-level testing, that is, after assembly or integration of the instrument or spacecraft. As such, they had more significant consequences as they required lengthy failure analysis, disassembly, rework, and reassembly, sometimes resulting in a launch delay. Our work identified a number of cases in which parts problems identified late in development caused significant cost and schedule issues. Parts quality problems found during system-level testing of the Air Force’s Advanced Extremely High Frequency satellite program contributed to a launch delay of almost 2 years and cost the program at least $250 million. A power-regulating unit failed during system-level thermal vacuum testing because of defective electronic parts that had to be removed and replaced. This and other problems resulted in extensive rework and required the satellite to undergo another round of thermal vacuum testing. 
According to the program office, the additional thermal vacuum testing alone cost about $250 million. At MDA, the Space Tracking and Surveillance System program discovered problems with defective electronic parts in the Space-Ground Link Subsystem during system-level testing and integration of the satellite. By the time the problem was discovered, the manufacturer no longer produced the part and an alternate contractor had to be found to manufacture and test replacement parts. According to officials, the problem cost about $7 million and was one of the factors that contributed to a 17-month launch delay of two demonstration satellites and delayed participation in the BMDS testing we reported on in March 2009. At NASA, parts quality problems found late in development resulted in a 20-month launch delay for the Glory program and cost $71.1 million. In August 2008, Glory’s spacecraft computer failed to power up during system-level testing. After a 6-month failure analysis, the problem was attributed to a crack in the computer’s printed circuit board, an electronic part used to connect electronic components. Because the printed circuit board could not be manufactured reliably, the program had to procure and test an alternate computer. The program minimized the long lead times expected with the alternate computer by obtaining one that had already been procured by NASA. However, according to contractor officials, design changes were also required to accommodate the alternate computer. In June 2010, after the computer problem had been resolved, the Glory program also discovered problems with parts for the solar array drive assembly that rendered one of the arrays unacceptable for flight and resulted in an additional 3-month launch delay. Also at NASA, the National Polar-orbiting Operational Environmental Satellite System Preparatory Project experienced $105 million in cost increases and 27 months of delay because of parts quality problems.
In one case, a key instrument developed by a NASA partner failed during instrument-level testing because the instrument frame fractured at several locations. According to the failure review board, stresses exceeded the material capabilities of several brazed joints—a method of joining metal parts together. According to officials, the instrument’s frame had to be reinforced, which delayed instrument delivery and ultimately delayed the satellite’s launch date. In addition, officials stated that they lack confidence in how the partner-provided satellite instruments will function on orbit because of the systemic mission assurance and systems engineering issues that contributed to the parts quality problems. For some of the programs we reviewed, the costs associated with parts quality problems were minimized because the problems were found early and were resolved within the existing margins built into the program schedule. For example, the Air Force’s Global Positioning System (GPS) program discovered problems with electronic parts during part-level testing and inspection. An investigation into the problem cost about $50,000, but did not result in delivery delays. An independent review team ultimately concluded that the parts could be used without a performance or mission impact. At NASA, the Juno program discovered during part-level qualification testing that an electronic part did not meet performance requirements. The program obtained a suitable replacement from another manufacturer; it cost the program $10,000 to resolve the issue with no impact on program schedule. In other cases, the costs of parts quality problems were amplified because they were a leading cause of a schedule delay to a major milestone, such as launch readiness.
For example, of the $60.9 million cost associated with problems with the Glory spacecraft computer found during system-level testing, $11.6 million was spent to resolve the issue, including personnel costs for troubleshooting, testing, and oversight as well as design, fabrication, and testing of the new computer. The majority of the cost—$49.3 million—was associated with maintaining the contractor during the 15-month launch delay. Similarly, problems with parts for Glory’s solar array assembly cost about $10.1 million: $2.7 million to resolve the problem and $7.4 million resulting from the additional 3-month schedule delay. Likewise, program officials for NASA’s National Polar-orbiting Operational Environmental Satellite System Preparatory Project attributed the $105 million cost of its parts quality problems to the costs associated with launch and schedule delays, an estimated $5 million a month. In several cases, the programs were encountering other challenges that obscured the problems caused by poor quality parts. For example, the Air Force’s Space-Based Infrared System High program reported that a part with pure tin in the satellite telemetry unit was discovered after the satellite was integrated. After an 11-month failure review board, the defective part was replaced. The program did not quantify the cost and schedule effect of the problem because the program was encountering software development issues that were already resulting in schedule delays. Similarly, NASA’s Mars Science Laboratory program experienced a failure associated with joints in the rover propulsion system. According to officials, the welding process led to joint embrittlement and the possibility of early failure. The project had to test a new process, rebuild, and test the system, which cost about $4 million and resulted in a 1-year delay in completion.
However, the program’s launch date had already been delayed 25 months because of design issues with the rover actuator motors and avionics package—in effect, buying time to resolve the problem with the propulsion system. In addition to the launch delays discussed above, parts quality problems also resulted in reduced system reliability and availability for several other programs we reviewed. For example, the Air Force’s GPS program found that an electronic part lacked qualification data to prove the part’s quality and reliability. As a result, the overall reliability prediction for the space vehicle was decreased. At MDA, the Ground-Based Midcourse Defense program discovered problems with an electronic part in the telemetry unit needed to transmit flight test data. The problem was found during final assembly and test operations of the Exoatmospheric Kill Vehicle, resulting in the cancellation of a major flight test. This increased risk to the program and the overall BMDS capability, since the lack of adequate intercept data reduced confidence that the system could perform as intended in a real-world situation. Also, MDA’s Aegis Ballistic Missile Defense program recalled 16 missiles from the warfighter, including 7 from a foreign partner, after the prime contractor discovered that the brackets used to accommodate communications and power cabling were improperly adhered to the Standard Missile 3 rocket motor. If not corrected, the problem could have resulted in catastrophic mission failure. Regardless of the cause of the parts quality problem, the government typically bears the costs associated with resolving the issues and the associated schedule impact. In part, this is due to the use of cost-reimbursement contracts.
Because space and missile defense acquisitions are complex and technically challenging, DOD and NASA typically use cost-reimbursement contracts, whereby the government pays the prime contractor’s allowable costs to the extent prescribed in the contract for the contractor’s best efforts. Under cost-reimbursement contracts, the government generally assumes the financial risks associated with development, which may include the costs associated with parts quality problems. Of the 21 programs we reviewed, 20 use cost-reimbursement contracts. In addition, 17 programs use award and incentive fees to reduce the government’s risk and provide an incentive for excellence in such areas as quality, timeliness, technical ingenuity, and cost-effective management. Award and incentive fees allow the government to reduce the fee paid when the contractor’s performance does not meet or exceed the requirements of the contract. Aside from the use of award fees, senior quality and acquisition oversight officials told us that incentives for prime contractors to ensure quality are limited. The parts quality problems we identified were directly attributed to poor control of manufacturing processes and materials, poor design, and lack of effective supplier management. Generally, prime contractor activities to capture manufacturing knowledge should include identifying critical characteristics of the product’s design and then the critical manufacturing processes and materials needed to achieve those characteristics. Manufacturing processes and materials should be documented, tested, and controlled prior to production. This includes establishing criteria for workmanship, making work instructions available, and preventing and removing foreign object debris in the production process. Poor workmanship was one of the causes of problems with electronic parts. At DOD, poor workmanship during hand-soldering operations caused a capacitor to fail during testing on the Navy’s Mobile User Objective System program.
Poor soldering workmanship also caused a power distribution unit to experience problems during vehicle-level testing on MDA’s Targets and Countermeasures program. According to MDA officials, all units of the same design by the same manufacturer had to be X-ray inspected and reworked, involving extensive hardware disassembly. As a corrective action, soldering technicians were provided with training to improve their soldering operations and ability to perform better visual inspections after soldering. Soldering workmanship problems also contributed to a capacitor failure on NASA’s Glory program. Analysis determined that the manufacturer’s soldering guidelines were not followed. Programs also reported quality problems because of the use of undocumented and untested manufacturing processes. For example, MDA’s Aegis Ballistic Missile Defense program reported that the brackets used to accommodate communications and power cabling were improperly bonded to Standard Missile 3 rocket motors, potentially leading to mission failure. A failure review board determined that the subcontractor had changed the bonding process to reduce high scrap rates and that the new process was not tested and verified before it was implemented. Similarly, NASA’s Landsat Data Continuity Mission program experienced problems with the spacecraft solar array because of an undocumented manufacturing process. According to program officials, the subcontractor did not have a documented process to control the amount of adhesive used in manufacturing, and as a result, too much adhesive was applied. If not corrected, the problem could have resulted in solar array failure on orbit. Poor control of manufacturing materials and the failure to prevent contamination also caused quality problems. At MDA, the Ground-Based Midcourse Defense program reported a problem with defective titanium tubing. 
The defective tubing was rejected in 2004 and was to be returned to the supplier; however, because of poor control of manufacturing materials, a portion of the material was not returned and was inadvertently used to fabricate manifolds for two complete Ground-Based Interceptor Exoatmospheric Kill Vehicles. The vehicles had already been processed and delivered to the prime contractor for integration when the problem was discovered. Lack of adherence to manufacturing controls to prevent contamination and foreign object debris also caused parts quality problems. For example, at NASA, a titanium propulsion tank for the Tracking and Data Relay Satellite program failed acceptance testing because a steel chip was inadvertently welded onto the tank. Following a 3-month investigation into the root cause, the tank was scrapped and a replacement tank was built. In addition to problems stemming from poor control of manufacturing processes and materials, many problems resulted from poor part design, design complexity, and inattention to manufacturing risks. For example, attenuators for the Navy’s Mobile User Objective System exhibited inconsistent performance because of their sensitivity to temperature changes. Officials attributed the problem to poor design, and the attenuators were subsequently redesigned. At NASA, design problems also affected parts for the Mars Science Laboratory program. According to program officials, several resistors failed after assembly into printed circuit boards. A failure review board determined that the tight design limits contributed to the problem. Consequently, the parts had to be redesigned and replaced. Programs also underestimated the complexity of parts design, which created risks of latent design and workmanship defects. For example, NASA’s Glory project experienced problems with the state-of-the-art printed circuit board for the spacecraft computer.
According to project officials, the board design was almost impossible to manufacture, with over 100 serial steps involved in the manufacturing process. Furthermore, failure analysis found that the 27,000 connection points in the printed circuit board were vulnerable to thermal stresses over time, leading to intermittent failures. However, defects in those interconnections were difficult to detect through standard testing protocols. This is inconsistent with commercial best practices, which focus on simplified design characteristics as well as use of mature and validated technology and manufacturing processes. Program officials at each agency also attributed parts quality problems to the prime contractor’s failure to ensure that its subcontractors and suppliers met program requirements. According to officials, in several cases, prime contractors were responsible for flowing down all applicable program requirements to their subcontractors and suppliers. Requirements flow-down from the prime contractor to subcontractors and suppliers is particularly important and challenging given the structure of the space and defense industries, wherein prime contractors are subcontracting more work to subcontractors. At MDA, the Ground-Based Midcourse Defense program experienced a failure with an electronics part purchased from an unauthorized supplier. According to program officials, the prime contractor flowed down the requirement that parts only be purchased from authorized suppliers; however, the subcontractor failed to execute the requirement and the prime contractor did not verify compliance. Program officials for NASA’s Juno program attributed problems with a capacitor to the supplier’s failure to review the specification prohibiting the use of pure tin. DOD’s Space-Based Infrared System High program reported problems with three different parts containing pure tin and attributed the problems to poor requirements flow-down and poor supplier management.
Figure 6 shows an example of tin whiskers on a capacitor, which can cause catastrophic problems to space systems. DOD and NASA have instituted new policies to prevent and detect parts quality problems, but most of the programs we reviewed were initiated before these policies took effect. Moreover, newer programs that do come under the policies have not reached the phases of development where parts problems are typically discovered. In addition, agencies and industry have been collaborating to share information about potential problems, collecting data, and developing guidance and criteria for activities such as testing parts, managing subcontractors, and mitigating specific types of problems. We could not determine the extent to which collaborative actions have resulted in reduced instances of parts quality problems or ensured that they are caught earlier in the development cycle. This is primarily because data on the condition of parts quality in the space and missile community governmentwide historically have not been collected. And while there are new efforts to collect data on anomalies, there is no mechanism to use these data to help assess the effectiveness of improvement actions. Lastly, there are significant potential barriers to success of efforts to address parts quality problems. They include broader acquisition management problems, workforce gaps, diffuse leadership in the national security space community, the government’s decreasing influence on the overall electronic parts market, and an increase in counterfeiting of electronic parts. In the face of such challenges, it is likely that ongoing improvements will have limited success without continued assessments to determine what is working well and what more needs to be done. As noted earlier in this report, the Air Force, MDA, and NASA have all recently instituted or updated existing policies to prevent and detect parts quality problems. 
At the Air Force and MDA, all of the programs we reviewed were initiated before these recent policies aimed at preventing and detecting parts quality problems took full effect. In addition, it is too early to tell whether newer programs—such as a new Air Force GPS development effort and the MDA’s Precision Tracking Space System—are benefiting from the newer policies because these programs have not reached the design and fabrication phases where parts problems are typically discovered. However, we have reported that the Air Force is taking measures to prevent the problems experienced on the GPS IIF program from recurring on the new GPS III program. The Air Force has increased government oversight of its GPS III development and Air Force officials are spending more time at the contractor’s site to ensure quality. The Air Force is also following military standards for satellite quality for GPS III development. At the time of our review, the program had not reported a significant parts quality problem. Table 4 highlights the major differences in the framework between the GPS IIF and GPS III programs. In addition to new policies focused on quality, agencies are also becoming more focused on industrial base issues and supply chain risks. For example, MDA has developed the supplier road map database in an effort to gain greater visibility into the supply chain in order to more effectively manage supply chain risks. In addition, according to MDA officials, MDA has recently been auditing parts distributors in order to rank them for risk in terms of counterfeit parts. NASA has begun to assess industrial base risks and challenges during acquisition strategy meetings and has established an agency Supply Chain Management Team to focus attention on supply chain management issues and to coordinate with other government agencies. Agencies and industry also participate in a variety of collaborative initiatives to address quality, in particular, parts quality. 
These range from informal groups focused on identifying and sharing news about emerging problems as quickly as possible, to partnerships that conduct supplier assessments, to formal groups focused on identifying ways industry and the government can work together to prevent and mitigate problems. As shown in table 5, these groups have worked to establish guidance, criteria, and standards that focus on parts quality issues, and they have enhanced existing data collection tools and created new databases focused on assessing anomalies. One example of the collaborative efforts is the Space Industrial Base Council (SIBC)—a government-led initiative—which brings together officials from agencies involved in space and missile defense to focus on a range of issues affecting the space industrial base and has sparked numerous working groups focused specifically on parts quality and critical suppliers. These groups in turn have worked to develop information-sharing mechanisms, share lessons learned, and conduct supplier assessments, soliciting industry’s input as appropriate. For instance, the SIBC established a critical technology working group to explore supply chains and examine critical technologies to put in place a process for strategic management of critical space systems’ technologies and capabilities under the Secretary of the Air Force and the Director of the National Reconnaissance Office. The working group has developed and initiated a mitigation plan for batteries, solar cells and arrays, and traveling wave tube amplifiers. In addition, the Space Supplier Council was established under the SIBC to focus on the concerns of second-tier and lower-tier suppliers, which typically have to go through the prime contractors, and to promote more dialogue between DOD, MDA, NASA, other space entities, and these suppliers.
Another council initiative was the creation of the National Security Space Advisory Forum, a Web-based alert system developed for sharing critical space system anomaly data and problem alerts, which became operational in 2005. Agency officials also cited other informal channels used to share information regarding parts issues. For example, NASA officials stated that after verifying a parts issue, they will share their internal advisory notice with any other government space program that could potentially be affected by the issue. According to several government and contractor officials, the main reasons for delays in information sharing were either the time it took to confirm a problem or concerns with proprietary and liability issues. NASA officials stated that they received advisories from MDA and had an informal network with MDA and the Army Space and Missile Defense Command to share information about parts problems. Officials at the Space and Missile Systems Center also mentioned that they have informal channels for sharing part issues. For example, an official in the systems engineering division at the Space and Missile Systems Center stated that he has weekly meetings with a NASA official to discuss parts issues. In addition to the formal and informal collaborative efforts, the Air Force’s Space and Missile Systems Center, MDA, NASA, and the National Reconnaissance Office signed a memorandum of understanding (MOU) in February 2011 to encourage additional interagency cooperation in order to strengthen mission assurance practices. The MOU calls on the agencies to develop and share lessons learned and best practices to ensure mission success through a framework of collaborative mission assurance. 
Broad objectives of the framework are to develop core mission assurance practices and tools; to foster a mission assurance culture and world-class workforce; to develop clear and executable mission assurance plans; to manage effective program execution; and to ensure program health through independent, objective assessments. Specific objectives include developing a robust mission assurance infrastructure and guidelines for tailoring specifications and standards for parts, materials, and processes and establishing standard contractual language to ensure consistent specification of core standards and deliverables. In addition, each agency is asked to consider the health of the industrial base in space systems acquisitions and participate in mission assurance activities, such as the Space Supplier Council and mission assurance summits. In signing the MOU, DOD, MDA, NASA, and the National Reconnaissance Office acknowledged the complexity of such an undertaking, as it typically takes years to deliver a capability and involves hundreds of industry partners building, integrating, and testing hundreds of thousands of parts, all of which have to work the first time on orbit—a single undetected mishap can have, and has had, catastrophic results. Although collaborative efforts are under way, we could not determine the extent to which collaborative actions have resulted in reduced instances of parts quality problems to date or ensured that they are caught earlier in the development cycle. This is primarily because data on the condition of parts quality in the space and missile community governmentwide historically have not been collected. The Aerospace Corporation has begun to collect data on on-orbit and preflight anomalies in addition to the Web alert system established by the Space Quality Improvement Council.
In addition, there is no mechanism in place to assess the progress of improvement actions using these data or to track the condition of parts quality problems across the space and missile defense sector to determine if improvements are working or what additional actions need to be taken. Such a mechanism is needed given the varied challenges facing improvement efforts. There are significant potential barriers to the success of improvement efforts, including broader acquisition management problems, diffuse leadership in the national security space community, workforce gaps, the government’s decreasing influence on the overall electronic parts market, and an increase in counterfeiting of electronic parts. Actions are being taken to address some of these barriers, such as acquisition management and diffuse leadership, but others reflect trends affecting the aerospace industry that are unlikely to change in the near future and may limit the extent to which parts problems can be prevented. Broader acquisition management problems: Both space and missile defense programs have experienced acquisition problems—well beyond parts quality management difficulties—during the past two decades that have driven up costs by billions of dollars, stretched schedules by years, and increased technical risks. These problems have resulted in potential capability gaps in areas such as missile warning, military communications, and weather monitoring, and have required all the agencies in our review to cancel or pare back major programs. Our reports have generally found that these problems include starting efforts before requirements and technologies have been fully understood and moving them forward into more complex phases of development without sufficient knowledge about technology, design, and other issues. Reduced oversight resulting from earlier acquisition reform efforts and funding instability have also contributed to cost growth and schedule delays. 
Agencies are attempting to address these broader challenges as they are concurrently addressing parts quality problems. For space in particular, DOD is working to ensure that critical technologies are matured before large-scale acquisition programs begin, requirements are defined early in the process and are stable throughout, and system designs remain stable. In response to our designation of NASA acquisition management as a high-risk area, NASA developed a corrective action plan to improve the effectiveness of its program/project management, and it is in the process of implementing earned value management within certain programs to help projects monitor the scheduled work done by NASA contractors and employees. These and other actions have the potential to strengthen the foundation for program and quality management but they are relatively new and implementation is uneven among the agencies involved with space and missile defense. For instance, we have found that both NASA and MDA lack adequate visibility into costs of programs. Our reports also continue to find that cost and schedule estimates across all three agencies tend to be optimistic. Diffuse leadership within the national security space community: We have previously testified and reported that diffuse leadership within the national security space community has a direct impact on the space acquisition process, primarily because it makes it difficult to hold any one person or organization accountable for balancing needs against wants, for resolving conflicts among the many organizations involved with space, and for ensuring that resources are dedicated where they need to be dedicated. In 2008, a congressionally chartered commission (known as the Allard Commission) reported that responsibilities for military space and intelligence programs were scattered across the staffs of DOD organizations and the intelligence community and that it appeared that “no one is in charge” of national security space. 
The same year, the House Permanent Select Committee on Intelligence reported similar concerns, focusing specifically on difficulties in bringing together decisions that would involve both the Director of National Intelligence and the Secretary of Defense. Prior studies, including those conducted by the Defense Science Board and the Commission to Assess United States National Security Space Management and Organization (Space Commission), have identified similar problems, both for space as a whole and for specific programs. Changes have been made this past year to national space policies as well as organizational and reporting structures within the Office of the Secretary of Defense and the Air Force to address these concerns and clarify responsibilities, but it remains to be seen whether these changes will resolve problems associated with diffuse leadership. Workforce gaps: Another potential barrier to success is a decline in the number of quality assurance officials, which officials we spoke with pointed to as a significant detriment. A senior quality official at MDA stated that the quality assurance workforce was significantly reduced as a result of acquisition reform. A senior DOD official responsible for space acquisition oversight agreed, adding that the government does not have the in-house knowledge or resources to adequately conduct many quality control and quality assurance tasks. NASA officials also noted the loss of parts specialists who provide technical expertise to improve specifications and review change requests. According to NASA officials, there is now a shortage of qualified personnel with the requisite cross-disciplinary knowledge to assess parts quality and reliability. Our prior work has also shown that DOD’s Defense Contract Management Agency (DCMA), which provides quality assurance oversight for many space acquisitions, was downsized considerably during the 1990s. 
While capacity shortfalls still exist, DCMA has implemented a strategic plan to address workforce issues and improve quality assurance oversight. The shortage in the government quality assurance workforce reflects a broader decline in the numbers of scientists and engineers in the space sector. The 2008 House Permanent Select Committee on Intelligence report mentioned above found that the space workforce is facing a significant loss of talent and expertise because of pending retirements, which is causing problems in smoothly transitioning to a new space workforce. Similarly, in 2010 we reported that 30 percent of the civilian manufacturing workforce was eligible for retirement, and approximately 26 percent will become eligible for retirement over the next 4 years. Similar findings were reported by the DOD Cost Analysis Improvement Group in 2009. Industrial base consolidation: A series of mergers and consolidations that took place primarily in the 1990s added risks to parts quality—first, by shrinking the pool of suppliers available to produce specialty parts; second, by reducing specialized expertise within prime contractors; and third, by introducing cost-cutting measures that de-emphasize quality assurance. We reported in 2007 that the GPS IIF program, the Space-Based Infrared System High program, and the Wideband Global SATCOM system all encountered quality problems that could be partially attributed to industry consolidations. Specialized parts for the Wideband Global SATCOM system, for example, became difficult to obtain after smaller contractors that made these parts started to consolidate. For GPS, consolidations led to a series of moves in facilities that resulted in a loss of GPS technical expertise. In addition, during this period, the contractor took additional cost-cutting measures that reduced quality.
Senior officials responsible for DOD space acquisition oversight with whom we spoke for this review stated that prime space contractors have divested their traditional lines of expertise in favor of acting in a broader “system integrator” role. Meanwhile, smaller suppliers that attempted to fill gaps in expertise and products created by consolidations have not had the experience and knowledge needed to produce to the standards needed for government space systems. For instance, officials from one program told us that their suppliers were often unaware that their parts would be used in space applications and did not understand or follow certain requirements. Officials also mentioned that smaller suppliers attempting to enter the government space market do not have access to testing and other facilities needed to help build quality into their parts. We recently reported that small businesses typically do not own the appropriate testing facilities, such as thermal vacuum chambers, that are used for testing spacecraft or parts under a simulated space environment and instead must rely on government, university, or large contractor testing facilities, which can be costly. Government’s declining share of the overall electronic parts market: DOD and NASA officials also stated that the government’s declining share of the overall electronic parts market has made it more difficult to acquire qualified electronic parts. According to officials, the government used to be the primary consumer of microelectronics, but it now constitutes only a small percentage of the market. As such, the government cannot easily demand unique exceptions to commercial standards. An example of an exception is DOD’s standards for radiation-hardened parts, such as microelectronics, which are designed and fabricated with the specific goal of enduring the harshest space radiation environments, including nuclear events.
We reported in 2010 that to produce such parts, companies would typically need to create separate production lines and in some cases special facilities. Another example is that government space programs often demand the use of a tin alloy (tin mixed with lead) for parts rather than pure tin because of the risk of tin whisker growth. According to officials, as a result of European environmental regulations, commercial manufacturers have largely moved away from the use of lead, making it more difficult and costly to procure tin alloy parts and increasing the risk of parts being made with pure tin. Similarly, officials noted concerns with the increased use of lead-free solders in electronic parts. Moreover, officials told us that when programs do rely on commercial parts, there tends to be a higher risk of lot-to-lot variation, obsolescence, and a lack of part traceability. An increase in counterfeit electronic parts: Officials we spoke with agreed that an increase in counterfeit electronic parts has made efforts to address parts quality more difficult. “Counterfeit” generally refers to instances in which the identity or pedigree of a product is knowingly misrepresented by individuals or companies. A 2010 Department of Commerce study identified a growth in incidents of counterfeit parts across the electronics industry, from about 3,300 incidents in 2005 to over 8,000 in 2008. We reported in 2010 that DOD is limited in its ability to determine the extent to which counterfeit parts exist in its supply chain because it does not have a departmentwide definition of “counterfeit” and a consistent means to identify instances of suspected counterfeit parts. Moreover, DOD relies on existing procurement and quality control practices to ensure the quality of the parts in its supply chain. However, these practices are not designed to specifically address counterfeit parts.
Limitations in the areas of obtaining supplier visibility, investigating part deficiencies, and reporting and disposal may reduce DOD’s ability to mitigate risks posed by counterfeit parts. At the time of our review, DOD was only in the early stages of addressing counterfeiting. We recommended, and DOD concurred, that DOD leverage existing initiatives to establish anticounterfeiting guidance and disseminate this guidance to all DOD components and defense contractors. Space and missile systems must meet high standards for quality. The 2003 Defense Science Board report put it best by noting that the “primary reason is that the space environment is unforgiving. Thousands of good engineering decisions can be undone by a single engineering flaw or workmanship error, resulting in the catastrophe of major mission failure. Options for correction are scant.” The number of parts problems identified in our review is relatively small when compared to the overall number of parts used. But these problems have been shown to have wide-ranging and significant consequences. Moreover, while the government’s reliance on space and missile systems has increased dramatically, attention to and oversight of parts quality declined because of a variety of factors, including the implementation of TSPR and similar policies, workforce gaps, and industry consolidations. This condition has been recognized, and numerous efforts have been undertaken to strengthen the government’s ability to detect and prevent parts problems. But there is no mechanism in place to periodically assess the condition of parts quality problems in major space and missile defense programs and the impact and effectiveness of corrective measures. Such a mechanism could help ensure that attention and resources are focused in the right places and provide assurance that progress is being made. We are making two recommendations to the Secretary of Defense and the NASA Administrator.
We recommend that the Secretary of Defense and the Administrator of NASA direct appropriate agency executives to include in efforts to implement the new MOU for increased mission assurance a mechanism for a periodic, governmentwide assessment and reporting of the condition of parts quality problems in major space and missile defense programs. This should include the frequency with which such problems are appearing in major programs, changes in frequency from previous years, and the effectiveness of corrective measures. We further recommend that reports of the periodic assessments be made available to Congress. We provided draft copies of this report to DOD and NASA for review and comment. DOD and NASA provided written comments on a draft of this report. These comments are reprinted in appendixes III and IV, respectively. DOD and NASA also provided technical comments, which were incorporated as appropriate. DOD partially concurred with our recommendation to include in its efforts to implement the new MOU for increased mission assurance a mechanism for a periodic, governmentwide assessment and reporting of the condition of parts quality problems in major space and missile defense programs, to include the frequency with which problems are appearing, changes in frequency from previous years, and the effectiveness of corrective measures. DOD responded that it would work with NASA to determine the optimal governmentwide assessment and reporting implementation to include all quality issues, of which parts, materials, and processes would be one of the major focus areas. In addition, DOD proposed an annual reporting period to ensure planned, deliberate, and consistent assessments. We support DOD’s willingness to address all quality issues and to include parts, materials, and processes as an important focus area in an annual report.
Recent cases of higher-level quality problems that did not fall within the scope of our review include MDA’s Terminal High Altitude Area Defense missile system and the Air Force’s Advanced Extremely High Frequency communications satellite, which were mentioned earlier in our report. It is our opinion that these cases occurred for reasons similar to those we identified for parts, materials, and processes. We recognize that quality issues can include a vast and complex universe of problems. Therefore, the scope of our review and the focus of our recommendation were on parts, materials, and processes to enable consistent reporting and analysis and to help direct corrective actions. Should a broader quality focus be pursued, as DOD indicated, it is important that DOD identify ways in which this consistency can be facilitated among the agencies. In response to our second recommendation, DOD stated that it had no objection to providing a report to Congress, if Congress desired one. We believe that DOD should proactively provide its proposed annual reports to Congress on a routine basis, rather than waiting for any requests from Congress, which could be inconsistent from year to year. NASA also concurred with our recommendations. NASA stated that enhanced cross-agency communication, coordination, and sharing of parts quality information will help mitigate threats posed by defective and nonconforming parts. Furthermore, NASA plans to engage other U.S. space agencies to further develop and integrate agency mechanisms for reporting, assessing, tracking, and trending common parts quality problems, including validation of effective cross-agency solutions. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the appropriate congressional committees, the Secretary of Defense, the Administrator of the National Aeronautics and Space Administration, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are provided in appendix V. Our specific objectives were to assess (1) the extent to which parts quality problems are affecting Department of Defense (DOD) and National Aeronautics and Space Administration (NASA) space and missile defense programs; (2) the causes of these problems; and (3) initiatives to prevent, detect, and mitigate parts quality problems. To examine the extent to which parts quality problems are affecting DOD (the Air Force, the Navy, and the Missile Defense Agency (MDA)) and NASA cost, schedule, and performance of space and missile defense programs, we reviewed all 21 space and missile programs—9 at DOD, including 4 Air Force, 1 Navy, and 4 MDA systems, and 12 at NASA—that were, as of October 2009, in development and projected to be high cost, and had demonstrated through a critical design review (CDR) that the maturity of the design was appropriate to support proceeding with full-scale fabrication, assembly, integration, and test. DOD space systems selected were major defense acquisition programs—defined as those requiring an eventual total expenditure for research, development, test, and evaluation of more than $365 million or for procurement of more than $2.190 billion in fiscal year 2000 constant dollars. All four MDA systems met these same dollar thresholds. NASA programs selected had a life cycle cost exceeding $250 million.
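The selection criteria above reduce to a simple filter over candidate programs. The following is an illustrative sketch only, not GAO's actual methodology tooling; the record layout and field names are hypothetical, and only the dollar thresholds come from the text.

```python
# Illustrative sketch of the program-selection criteria described above.
# The dictionary fields are hypothetical; the thresholds come from the text:
# DOD major defense acquisition programs required more than $365 million in
# RDT&E or more than $2.190 billion in procurement (fiscal year 2000 constant
# dollars), and NASA programs a life cycle cost exceeding $250 million.

def selected_for_review(program: dict) -> bool:
    """Return True if a program meets the review's selection criteria."""
    # All selected programs were in development and past critical design review.
    if not (program["in_development"] and program["post_cdr"]):
        return False
    if program["agency"] == "NASA":
        return program["life_cycle_cost"] > 250e6
    # DOD (Air Force, Navy, MDA) systems met major-defense-acquisition thresholds.
    return program["rdte_cost"] > 365e6 or program["procurement_cost"] > 2.190e9

# Example: a hypothetical NASA project in development and past CDR.
example = {"agency": "NASA", "in_development": True, "post_cdr": True,
           "life_cycle_cost": 300e6}
print(selected_for_review(example))  # True
```

A DOD record would instead carry `rdte_cost` and `procurement_cost` fields, either of which can satisfy the threshold on its own.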
We chose these programs based on their cost, stage in the acquisition process—in development and post-CDR—and congressional interest. A quality problem was defined to be the degree to which the product attributes, such as capability, performance, or reliability, did not meet the needs of the customer or mission, as specified through the requirements definition and allocation process. For each of the 21 systems we examined program documentation, such as parts quality briefings, failure review board reports, advisory notices, and cost and schedule analysis reports, and held discussions with quality officials from the program offices, including contractor officials and Defense Contract Management Agency officials, where appropriate. We specifically asked each program, at the time we initiated our review, to provide us with the most recent list of the top 5 to 10 part, material, or process problems, as defined by that program, affecting its program’s cost, schedule, or performance. Based on additional information gathered through documentation provided by the programs and discussions with program officials, we reviewed each part problem reported by each program to determine if there was a part problem, rather than a material, process, component, or assembly-level problem. In addition, when possible we identified the impact that a part, material, or process quality problem might have had on system cost, schedule, and performance. We selected one system with known quality problems, as previously reported in GAO reports, within the Air Force (Space-Based Space Surveillance System), MDA (Ground-Based Midcourse Defense), and NASA (Glory) for further review to gain greater insight into the reporting and root causes of the parts quality problems. Our findings are limited by the approach and data collected. Therefore, we were unable to make generalizable or projectable statements about space and missile programs beyond our scope.
We also have ongoing work through our annual DOD assessments of selected weapon programs and NASA assessments of selected larger-scale projects for many of these programs, which allowed us to build upon our prior work efforts and existing DOD and NASA contacts. Programs selected are described in appendix II and are listed below.

Advanced Extremely High Frequency Satellites
Global Positioning System Block IIF
Mobile User Objective System
Space-Based Infrared System High Program
Space-Based Space Surveillance Block 10
Aegis Ballistic Missile Defense
Ground-Based Midcourse Defense
Space Tracking and Surveillance System
Targets and Countermeasures
Aquarius
Global Precipitation Measurement Mission
Glory
Gravity Recovery and Interior Laboratory
James Webb Space Telescope
Juno
Landsat Data Continuity Mission
Magnetospheric Multiscale
Mars Science Laboratory
National Polar-orbiting Operational Environmental Satellite System
Radiation Belt Storm Probes
Tracking and Data Relay Satellite Replenishment

DOD and NASA have access to one or more of the following databases used to report deficient parts: the Product Data Reporting and Evaluation Program, the Joint Deficiency Reporting System, and the Government Industry Data Exchange Program. We did not use these systems in our review because of the delay associated with obtaining current information and because it was beyond the scope of the review to assess the utility or effectiveness of these systems. To determine the causes behind the parts quality problems, we asked each program to provide an explanation of the root causes and contributing factors that may have led to each part problem reported. Based on the information we gathered, we grouped the root causes and contributing factors for each part problem. We reviewed program documentation, regulations, directives, instructions, and policies to determine how the Air Force, MDA, and NASA define and address parts quality.
We interviewed senior DOD, MDA, and NASA headquarters officials, as well as system program and contractor officials from the Air Force, MDA, and NASA, about their knowledge of parts problems on their programs. We reviewed several studies on quality and causes from the Subcommittee on Technical and Tactical Intelligence, House Permanent Select Committee on Intelligence; the Department of Commerce; and the Aerospace Corporation to gain a better understanding of quality and challenges facing the development, acquisition, and execution of space systems. We met with Aerospace Corporation officials to discuss some of their reports and findings and the status of their ongoing efforts to address parts quality. We relied on previous GAO reports for the implementation status of planned program management improvements. To identify initiatives to prevent, detect, and mitigate parts quality problems, we asked each program what actions were being taken to remedy the parts problems. Through these discussions and others held with agency officials, we were able to obtain information on working groups. We reviewed relevant materials provided to us by officials from DOD, the Air Force, MDA, NASA, and the Aerospace Corporation. We interviewed program officials at the Air Force, MDA, NASA, and the Aerospace Corporation responsible for quality initiatives to discuss those initiatives that would pertain to parts quality and discuss the implementation status of any efforts. We conducted this performance audit from October 2009 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
The Air Force’s Advanced Extremely High Frequency (AEHF) satellite system will replenish the existing Milstar system with higher-capacity, survivable, jam-resistant, worldwide, secure communication capabilities for strategic and tactical warfighters. The program includes satellites and a mission control segment. Terminals used to transmit and receive communications are acquired separately by each service. AEHF is an international program that includes Canada, the United Kingdom, and the Netherlands. The Air Force’s Global Positioning System (GPS) includes satellites, a ground control system, and user equipment. It conveys positioning, navigation, and timing information to users worldwide. In 2000, Congress began funding the modernization of Block IIR and Block IIF satellites. GPS IIF is a new generation of GPS satellites that is intended to deliver all legacy signals plus new capabilities, such as a new civil signal and better accuracy. The Navy’s Mobile User Objective System (MUOS), a satellite communication system, is expected to provide a worldwide, multiservice population of mobile and fixed-site terminal users with an increase in narrowband communications capacity and improve availability for small terminals. MUOS will replace the Ultra High Frequency Follow-On satellite system currently in operation and provide interoperability with legacy terminals. MUOS consists of a network of satellites and an integrated ground network. The Air Force’s Space-Based Infrared System (SBIRS) High satellite system is being developed to replace the Defense Support Program and perform a range of missile warning, missile defense, technical intelligence, and battlespace awareness missions. SBIRS High consists of four satellites in geosynchronous earth orbit plus two replenishment satellites, two sensors on host satellites in highly elliptical orbit plus two replenishment sensors, and fixed and mobile ground stations.
The Air Force’s Space-Based Space Surveillance (SBSS) Block 10 satellite is intended to provide a follow-on capability to the Midcourse Space Experiment / Space Based Visible sensor satellite, which ended its mission in July 2008. SBSS will consist of a single satellite and associated command, control, communications, and ground processing equipment. The SBSS satellite is expected to operate 24 hours a day, 7 days a week, to collect positional and characterization data on earth-orbiting objects of potential interest to national security. MDA’s Aegis Ballistic Missile Defense (Aegis BMD) is a sea-based missile defense system being developed in incremental, capability-based blocks to defend against ballistic missiles of all ranges. Key components include the shipboard SPY-1 radar, Standard Missile 3 (SM-3) missiles, and command and control systems. It will also be used as a forward-deployed sensor for surveillance and tracking of ballistic missiles. The SM-3 missile has multiple versions in development or production: Blocks IA, IB, and IIA. MDA’s Ground-Based Midcourse Defense (GMD) is being fielded to defend against limited long-range ballistic missile attacks during their midcourse phase. GMD consists of an interceptor with a three-stage booster and exoatmospheric kill vehicle, and a fire control system that formulates battle plans and directs components integrated with Ballistic Missile Defense System (BMDS) radars. We assessed the maturity of all GMD critical technologies, as well as the design of the Capability Enhanced II (CE-II) configuration of the Exoatmospheric Kill Vehicle (EKV), which began emplacements in fiscal year 2009. MDA’s Space Tracking and Surveillance System (STSS) is designed to acquire and track threat ballistic missiles in all stages of flight. The agency obtained the two demonstrator satellites in 2002 from the Air Force SBIRS Low program that was halted in 1999. MDA refurbished and launched the two STSS demonstration satellites on September 25, 2009.
Over the next 2 years, the two satellites will take part in a series of tests to demonstrate their functionality and interoperability with the BMDS. The Targets and Countermeasures program provides ballistic missiles to serve as targets in the MDA flight test program. The targets program involves multiple acquisitions—including a variety of existing and new missiles and countermeasures. Aquarius is a satellite mission developed by NASA and the Space Agency of Argentina (Comisión Nacional de Actividades Espaciales) to investigate the links between the global water cycle, ocean circulation, and the climate. It will measure global sea surface salinity. The Aquarius science goals are to observe and model the processes that relate salinity variations to climatic changes in the global cycling of water and to understand how these variations influence the general ocean circulation. By measuring salinity globally for 3 years, Aquarius will provide a new view of the ocean’s role in climate. The Global Precipitation Measurement (GPM) mission, a joint NASA and Japan Aerospace Exploration Agency project, seeks to improve the scientific understanding of the global water cycle and the accuracy of precipitation forecasts. GPM is composed of a core spacecraft carrying two main instruments: a dual-frequency precipitation radar and a GPM microwave imager. GPM builds on the work of the Tropical Rainfall Measuring Mission and will provide an opportunity to calibrate measurements of global precipitation. The Glory project is a low-Earth orbit satellite that will contribute to the U.S. Climate Change Science Program. The satellite has two principal science objectives: (1) collect data on the properties of aerosols and black carbon in the Earth’s atmosphere and climate systems and (2) collect data on solar irradiance. The satellite has two main instruments—the Aerosol Polarimetry Sensor (APS) and the Total Irradiance Monitor (TIM)—as well as two cloud cameras.
The TIM will allow NASA to have uninterrupted solar irradiance data by bridging the gap between NASA’s Solar Radiation and Climate Experiment and the National Polar-orbiting Operational Environmental Satellite System. The Glory satellite failed to reach orbit when it was launched on March 4, 2011. The Gravity Recovery and Interior Laboratory (GRAIL) mission will seek to determine the structure of the lunar interior from crust to core, advance our understanding of the thermal evolution of the moon, and extend our knowledge gained from the moon to other terrestrial-type planets. GRAIL will achieve its science objectives by placing twin spacecraft in a low altitude and nearly circular polar orbit. The two spacecraft will perform high-precision measurements between them. Analysis of changes in the spacecraft-to-spacecraft data caused by gravitational differences will provide direct and precise measurements of lunar gravity. GRAIL will ultimately provide a global, high-accuracy, high-resolution gravity map of the moon. The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope that is designed to find the first galaxies that formed in the early universe. Its focus will include searching for first light, assembly of galaxies, origins of stars and planetary systems, and origins of the elements necessary for life. JWST’s instruments will be designed to work primarily in the infrared range of the electromagnetic spectrum, with some capability in the visible range. JWST will have a large mirror, 6.5 meters (21.3 feet) in diameter, and a sunshield the size of a tennis court. Neither the mirror nor the sunshield will fit onto the rocket fully open, so both will fold up and then open once JWST is in outer space. JWST will reside in an orbit about 1.5 million kilometers (1 million miles) from the Earth. The Juno mission seeks to improve our understanding of the origin and evolution of Jupiter.
Juno plans to achieve its scientific objectives by using a simple, solar-powered spacecraft to make global maps of the gravity, magnetic fields, and atmospheric conditions of Jupiter from a unique elliptical orbit. The spacecraft carries precise, highly sensitive radiometers, magnetometers, and gravity science systems. Juno is slated to make 32 orbits to sample Jupiter’s full range of latitudes and longitudes. From its polar perspective, Juno is designed to combine local and remote sensing observations to explore the polar magnetosphere and determine what drives Jupiter’s auroras. The Landsat Data Continuity Mission (LDCM), a partnership between NASA and the U.S. Geological Survey, seeks to extend the ability to detect and quantitatively characterize changes on the global land surface at a scale where natural and man-made causes of change can be detected and differentiated. It is the successor mission to Landsat 7. The Landsat data series, begun in 1972, is the longest continuous record of changes in the Earth’s surface as seen from space. Landsat data are a resource for people who work in agriculture, geology, forestry, regional planning, education, mapping, and global change research. The Magnetospheric Multiscale (MMS) mission is made up of four identically instrumented spacecraft. The mission will use the Earth’s magnetosphere as a laboratory to study the microphysics of magnetic reconnection, energetic particle acceleration, and turbulence. Magnetic reconnection is the primary process by which energy is transferred from solar wind to Earth’s magnetosphere and is the physical process determining the size of a space weather storm. The spacecraft will fly in a pyramid formation, adjustable over a range of 10 to 400 kilometers, enabling them to capture the three-dimensional structure of the reconnection sites they encounter. The data from MMS will be used as a basis for predictive models of space weather in support of exploration. The Mars Science Laboratory (MSL) is part of the Mars Exploration Program (MEP).
The MEP seeks to understand whether Mars was, is, or can be a habitable world. To answer this question, the MSL project will investigate how geologic, climatic, and other processes have worked to shape Mars and its environment over time, as well as how they interact today. The MSL will continue this systematic exploration by placing a mobile science laboratory on the Mars surface to assess a local site as a potential habitat for life, past or present. The MSL is considered one of NASA’s flagship projects and will be the most advanced rover yet sent to explore the surface of Mars. The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) is a joint mission with the National Oceanic and Atmospheric Administration and the U.S. Air Force. The satellite will measure ozone, atmospheric and sea surface temperatures, land and ocean biological productivity, Earth radiation, and cloud and aerosol properties. The NPP mission has two objectives. First, NPP will provide a continuation of global weather observations following the Earth Observing System missions Terra and Aqua. Second, NPP will function as an operational satellite and will provide data until the first NPOESS satellite launches. The Radiation Belt Storm Probes (RBSP) mission will explore the sun’s influence on the Earth and near-Earth space by studying the planet’s radiation belts at various scales of space and time. This insight into the physical dynamics of the Earth’s radiation belts will provide scientists data with which to predict changes in this little-understood region of space. Understanding the radiation belt environment has practical applications in the areas of spacecraft system design, mission planning, spacecraft operations, and astronaut safety. The two spacecraft will measure the particles, magnetic and electric fields, and waves that fill geospace and provide new knowledge on the dynamics and extremes of the radiation belts.
The Tracking and Data Relay Satellite (TDRS) replenishment system consists of in-orbit communication satellites stationed at geosynchronous altitude coupled with two ground stations located in New Mexico and Guam. The satellite network and ground stations provide mission services for near-Earth user satellites and orbiting vehicles. TDRS K and L are the 11th and 12th satellites, respectively, to be built for the TDRS replenishment system and will contribute to the existing network by providing high bandwidth digital voice, video, and mission payload data, as well as health and safety data relay services to Earth-orbiting spacecraft, such as the International Space Station. In addition to the contact named above, David B. Best, Assistant Director; Maricela Cherveny; Heather L. Jensen; Angie Nichols-Friedman; William K. Roberts; Roxanna T. Sun; Robert S. Swierczek; and Alyssa B. Weir made key contributions to this report. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010. Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD’s Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008. Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD’s Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007. Best Practices: Stronger Practices Needed to Improve DOD Technology Transition Processes. GAO-06-883. Washington, D.C.: September 14, 2006. Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 30, 2005. Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003. Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.
Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000. Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000. Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999. Defense Acquisition: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999. Best Practices: Successful Application to Weapon Acquisitions Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998. Global Positioning System: Challenges in Sustaining and Upgrading Capabilities Persist. GAO-10-636. Washington, D.C.: September 15, 2010. Polar-Orbiting Environmental Satellites: Agencies Must Act Quickly to Address Risks That Jeopardize the Continuity of Weather and Climate Data. GAO-10-558. Washington, D.C.: May 27, 2010. Space Acquisitions: DOD Poised to Enhance Space Capabilities, but Persistent Challenges Remain in Developing Space Systems. GAO-10-447T. Washington, D.C.: March 10, 2010. Space Acquisitions: Government and Industry Partners Face Substantial Challenges in Developing New DOD Space Systems. GAO-09-648T. Washington, D.C.: April 30, 2009. Space Acquisitions: Uncertainties in the Evolved Expendable Launch Vehicle Program Pose Management and Oversight Challenges. GAO-08-1039. Washington, D.C.: September 26, 2008. Defense Space Activities: National Security Space Strategy Needed to Guide Future DOD Space Efforts. GAO-08-431R. Washington, D.C.: March 27, 2008. 
Space Acquisitions: Actions Needed to Expand and Sustain Use of Best Practices. GAO-07-730T. Washington, D.C.: April 19, 2007. Defense Acquisitions: Assessment of Selected Major Weapon Programs. GAO-06-391. Washington, D.C.: March 31, 2006. Space Acquisitions: DOD Needs to Take More Action to Address Unrealistic Initial Cost Estimates of Space Systems. GAO-07-96. Washington, D.C.: November 17, 2006. Defense Space Activities: Management Actions Are Needed to Better Identify, Track, and Train Air Force Space Personnel. GAO-06-908. Washington, D.C.: September 21, 2006. Space Acquisitions: Improvements Needed in Space Systems Acquisitions and Keys to Achieving Them. GAO-06-626T. Washington, D.C.: April 6, 2006. Space Acquisitions: Stronger Development Practices and Investment Planning Needed to Address Continuing Problems. GAO-05-891T. Washington, D.C.: July 12, 2005. Defense Acquisitions: Risks Posed by DOD’s New Space Systems Acquisition Policy. GAO-04-379R. Washington, D.C.: January 29, 2004. Defense Acquisitions: Improvements Needed in Space Systems Acquisition Management Policy. GAO-03-1073. Washington, D.C.: September 15, 2003. Military Space Operations: Common Problems and Their Effects on Satellite and Related Acquisitions. GAO-03-825R. Washington, D.C.: June 2, 2003. Defense Space Activities: Organizational Changes Initiated, but Further Management Actions Needed. GAO-03-379. Washington, D.C.: April 18, 2003. Missile Defense: European Phased Adaptive Approach Acquisitions Face Synchronization, Transparency, and Accountability Challenges. GAO-11-179R. Washington, D.C.: December 21, 2010. Defense Acquisitions: Missile Defense Program Instability Affects Reliability of Earned Value Management Data. GAO-10-676. Washington, D.C.: July 14, 2010. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010. 
Missile Defense: DOD Needs to More Fully Assess Requirements and Establish Operational Units before Fielding New Capabilities. GAO-09-856. Washington, D.C.: September 16, 2009. Ballistic Missile Defense: Actions Needed to Improve Planning and Information on Construction and Support Costs for Proposed European Sites. GAO-09-771. Washington, D.C.: August 6, 2009. Defense Management: Key Challenges Should be Addressed When Considering Changes to Missile Defense Agency’s Roles and Missions. GAO-09-466T. Washington, D.C.: March 26, 2009. Defense Acquisitions: Production and Fielding of Missile Defense Components Continue with Less Testing and Validation Than Planned. GAO-09-338. Washington, D.C.: March 13, 2009. Missile Defense: Actions Needed to Improve Planning and Cost Estimates for Long-Term Support of Ballistic Missile Defense. GAO-08-1068. Washington, D.C.: September 25, 2008. Ballistic Missile Defense: Actions Needed to Improve Process for Identifying and Addressing Combatant Command Priorities. GAO-08-740. Washington, D.C.: July 31, 2008. Defense Acquisitions: Progress Made in Fielding Missile Defense, but Program Is Short of Meeting Goals. GAO-08-448. Washington, D.C.: March 14, 2008. Defense Acquisitions: Missile Defense Agency’s Flexibility Reduces Transparency of Program Cost. GAO-07-799T. Washington, D.C.: April 30, 2007. | Quality is key to success in U.S. space and missile defense programs, but quality problems exist that have endangered entire missions along with less-visible problems leading to unnecessary repair, scrap, rework, and stoppage; long delays; and millions in cost growth. For space and missile defense acquisitions, GAO was asked to examine quality problems related to parts and manufacturing processes and materials across DOD and NASA. 
GAO assessed (1) the extent to which parts quality problems affect those agencies' space and missile defense programs; (2) causes of any problems; and (3) initiatives to prevent, detect, and mitigate parts quality problems. To accomplish this, GAO reviewed all 21 systems with mature designs and projected high costs: 5 DOD satellite systems, 4 DOD missile defense systems, and 12 NASA systems. GAO reviewed existing and planned efforts for preventing, detecting, and mitigating parts quality problems. Further, GAO reviewed regulations, directives, instructions, policies, and several studies, and interviewed senior headquarters and contractor officials. Parts quality problems affected all 21 programs GAO reviewed at the Department of Defense (DOD) and National Aeronautics and Space Administration (NASA). In some cases they contributed to significant cost overruns and schedule delays. In most cases, problems were associated with electronic parts rather than mechanical parts or materials. In several cases, parts problems discovered late in the development cycle had more significant cost and schedule consequences. For example, one problem cost a program at least $250 million and caused about a 2-year launch delay. The causes of parts quality problems GAO identified were poor workmanship; undocumented and untested manufacturing processes; poor control of those processes and materials and failure to prevent contamination; poor part design; design complexity; and inattention to manufacturing risks. Ineffective supplier management also resulted in concerns about whether subcontractors and contractors met program requirements. Most programs GAO reviewed began before the agencies adopted new policies related to parts quality problems, and newer post-policy programs were not mature enough for parts problems to be apparent.
Agencies and industry are now collecting and sharing information about potential problems, and developing guidance and criteria for testing parts, managing subcontractors, and mitigating problems, but it is too early to determine how much such collaborations have reduced parts quality problems since such data have not been historically collected. New efforts are collecting data on anomalies, but no mechanism exists to use those data to assess improvements. Significant barriers hinder efforts to address parts quality problems, such as broader acquisition management problems, workforce gaps, diffuse leadership in the national security space community, the government's decreasing influence on the electronic parts market, and an increase in counterfeiting of electronic parts. Given this, success will likely be limited without continued assessments of what works well and what remains to be done. DOD and NASA should implement a mechanism for periodic assessment of the condition of parts quality problems in major space and missile defense programs, with periodic reporting to Congress. DOD partially agreed with the recommendation and NASA agreed. DOD agreed to annually address all quality issues, to include parts quality. |
Influenza is more severe than some viral respiratory infections, such as the common cold. During an annual influenza season, most people who contract influenza recover completely in 1 to 2 weeks, but some develop serious and potentially life-threatening medical complications, such as pneumonia. People aged 65 years and older, people of any age with chronic medical conditions, children younger than 2 years, and pregnant women are generally more likely than others to develop severe complications from influenza. In an average year in the United States, more than 36,000 individuals die and more than 200,000 are hospitalized from influenza and related complications. Pandemic influenza differs from annual influenza in several ways. According to the World Health Organization, pandemic influenza spreads to all parts of the world very quickly, usually in less than a year, and can sicken more than a quarter of the global population, including young, healthy individuals. Although health experts cannot predict with certainty which strain of influenza virus will be involved in the next pandemic, they warn that the avian influenza virus identified in the human cases in Asia, known as H5N1, could lead to a pandemic if it acquires the genetic ability, so far absent, to spread quickly from person to person. Vaccination is the primary method for preventing influenza and its complications. Produced in a complex process that involves growing viruses in millions of fertilized chicken eggs, influenza vaccine is administered each year to protect against particular influenza strains expected to be prevalent that year. Experience has shown that vaccine production generally takes 6 or more months after a virus strain has been identified; vaccines for certain influenza strains have been difficult to mass-produce. After vaccination for the annual influenza season, it takes about 2 weeks for the body to produce the antibodies that protect against infection. 
According to CDC recommendations, the optimal time for annual vaccination is October through November. Because the annual influenza season typically does not peak until January or February, however, in most years vaccination in December or later can still be beneficial. At present, two vaccine types are recommended for protection against influenza in the United States: an inactivated virus vaccine injected into muscle and a live virus vaccine administered as a nasal spray. The injectable vaccine—which represents the large majority of influenza vaccine administered in this country—can be used to immunize both healthy individuals and individuals at highest risk for severe complications, including those with chronic illness and those aged 65 years and older. The nasal spray vaccine, in contrast, is currently approved for use only among healthy individuals aged 5 to 49 years who are not pregnant. For the 2003–04 influenza season, two manufacturers—one with production facilities in the United States (sanofi pasteur) and one with production facilities in the United Kingdom (Chiron)—produced about 83 million doses of injectable vaccine, which represented about 96 percent of the U.S. vaccine supply. A third U.S. manufacturer (MedImmune) produced the nasal spray vaccine. For the 2004–05 influenza season, CDC and its Advisory Committee on Immunization Practices (ACIP) initially recommended vaccination for about 188 million people in designated priority groups, including roughly 85 million people at high risk for severe complications. On October 5, 2004, however, Chiron announced that it could not provide its expected production of 46–48 million doses—about half the expected U.S. influenza vaccine supply. Although vaccination is the primary strategy for protecting individuals who are at greatest risk of severe complications and death from influenza, antiviral drugs can also help to treat infection. 
If taken within 2 days of a person’s becoming ill, these drugs can ease symptoms and reduce contagion. In the event of a pandemic, such drugs could lower the number of deaths until a pandemic influenza vaccine became available. Four antiviral drugs have been approved by the Food and Drug Administration (FDA) for treatment of influenza: amantadine, rimantadine, oseltamivir, and zanamivir. HHS has primary responsibility for coordinating the nation’s response to public health emergencies. Within HHS, CDC is one of the agencies that protect the nation’s health and safety. CDC’s activities include efforts to prevent and control diseases and to respond to public health emergencies. CDC and ACIP recommend which population groups should be targeted for vaccination each year and, when vaccine supply allows, recommend that any person who wishes to decrease his or her risk of influenza be vaccinated. In addition, the National Vaccine Program Office is responsible for coordinating and ensuring collaboration among the many federal agencies involved in vaccine and immunization activities; the office also issued a draft national pandemic influenza preparedness plan in August 2004. Preparing for and responding to an influenza pandemic differ in several respects from preparing for and responding to an annual influenza season. For example, past influenza pandemics have affected healthy young adults who are not typically at high risk for severe influenza-related complications, so the groups given priority for early vaccination may differ from those given priority in an annual influenza season. In addition, according to CDC, a vaccine probably would not be available in the early stages of a pandemic. Shortages of vaccine would therefore be likely during a pandemic, potentially creating a situation more challenging than a shortage of vaccine for an annual influenza season. 
One lesson learned from the 2004–05 season that is relevant to a future vaccine shortage in either an annual influenza season or a pandemic is the importance of planning before a shortage occurs. At the time the influenza vaccine shortage became apparent, the nation lacked a contingency plan specifically designed to respond to a severe vaccine shortage. The absence of such a plan led to delays and uncertainty on the part of many state and local entities on how best to ensure access to vaccine during the shortage by individuals at high risk of severe complications and others in priority groups. Faced with the unanticipated shortfall, CDC redefined the priority groups it had recommended for vaccination and asked sanofi pasteur, the remaining manufacturer of injectable vaccine, to suspend distribution until the agency completed its assessment of the shortage’s extent and developed a plan to distribute the manufacturer’s remaining vaccine to providers serving individuals in the priority groups. Developing and implementing this distribution plan took time and led to delays in response and some confusion at state and local levels. Our work showed that several areas of planning are particularly important for enhancing preparedness before a similar situation occurs in the future, including defining the responsibilities of federal, state, and local officials; using emergency preparedness plans and emergency health directives; and facilitating the distribution and administration of vaccine. Clearly defining responsibilities of federal, state, and local officials can minimize confusion. During the 2004–05 vaccine shortage, even though CDC worked with states and localities to coordinate roles and responsibilities, problems occurred. For example, CDC worked with national professional associations to survey long-term-care providers throughout the country to determine if seniors had adequate access to vaccine. 
Maine and other states, however, also surveyed their long-term-care providers to make the same determination. This duplication of effort expended additional resources, burdened some long-term-care providers in the states, and created confusion. Emergency preparedness plans help coordinate local response. State and local health officials in several locations we visited reported that using existing emergency plans or incident command centers (the organizational systems set up specifically to handle the response to emergency situations) helped coordinate effective local responses to the vaccine shortage. For example, public health officials from Seattle–King County said that using the county’s incident command system played a vital role in coordinating an effective and timely local response and in communicating a clear message to the public and providers. In addition, according to public health officials, emergency public health directives helped ensure access to vaccine by supporting providers in enforcing the CDC recommendations and in helping to prevent price gouging in certain states. Partnerships between the public and private sectors can facilitate distribution and administration of vaccine. In San Diego County, California, for example, local health officials worked with a coalition of partners in public health, private businesses, and nonprofit groups throughout the county. Other mechanisms facilitated administering the limited supply of influenza vaccine to those in high-risk or other priority groups. In Stearns County, Minnesota, for example, public health officials worked with private providers to implement a system of vaccination by appointment. Rather than standing in long lines for vaccination, individuals with appointments went to a clinic during a given time slot. 
Although an influenza pandemic may differ in some ways from an annual influenza season, experience during the 2004–05 shortage illustrated the importance of having contingency plans in place ahead of time to prevent delays when timing is critical. Some health officials indicated that, as a result of the experience with the influenza vaccine shortage, they were revising state and local preparedness plans or modifying command center protocols to prepare for future emergencies. For example, experiences during the 2004–05 influenza season led Maine state officials to recognize the need to speed completion of their pandemic influenza preparedness plan. Over the past 5 years, we have reported on the importance of planning to address critical issues such as how vaccine will be purchased and distributed; how population groups will be given priority for vaccination; and how federal resources should be deployed before the nation faces a pandemic. We have also urged HHS to complete its pandemic preparedness and response plan, which the department released in draft form in August 2004. This draft plan described options for vaccine purchase and distribution and provided planning guidance to state and local health departments. As we testified earlier, however, the draft plan lacked clear guidance on potential priority groups for vaccination in a pandemic, and key questions remained about the federal role in purchasing and distributing vaccine. The experience in 2004–05 also highlighted the importance of finalizing such planning details. On November 2, 2005, HHS released its pandemic influenza plan. We did not, however, have an opportunity to review the plan before issuing this statement to determine whether the plan addresses these critical issues. 
A second lesson from the experience of the 2004–05 vaccine shortage that is relevant to future vaccine shortages in either an annual influenza season or a pandemic is the importance of streamlined mechanisms to make vaccine available in an expedited manner. For example, HHS began efforts to purchase foreign vaccine that was licensed for use in other countries but not the United States shortly after learning in October 2004 that Chiron would not supply any vaccine. The purchase, however, took several months to complete, and so vaccine was not available to meet the fall 2004 demand; by the end of the season, this vaccine had not been used. In addition, recipients of this foreign vaccine could have been required to sign a consent form and follow up with a health care worker after vaccination—steps that, according to health officials we interviewed in several states, would be too cumbersome to administer. Some states’ experience during the 2004–05 vaccine shortage also highlighted the importance of mechanisms to transfer available vaccine quickly and easily from one state to another; the lack of mechanisms to do so delayed redistribution to some states. During the 2004–05 shortage, some state health officials reported problems with their ability to purchase vaccine, both in paying for vaccine and in administering the transfer process. Minnesota, for example, tried to sell its available vaccine to other states seeking additional vaccine for their priority populations. According to federal and state health officials, however, certain states lacked the funding or flexibility under state law to purchase the vaccine when Minnesota offered it. As we have previously testified, establishing the funding sources, authority, or processes for quick public-sector purchases may be needed as part of pandemic preparedness. 
Recognizing the need for mechanisms to make vaccine available in a timely manner in the event of a pandemic, HHS has taken some action to address the fragility of the current influenza vaccine market. In its budget request for fiscal year 2006, CDC requested $30 million to enter into guaranteed-purchase contracts with vaccine manufacturers to help ensure vaccine supply. According to the agency, maintaining an abundant supply of annual influenza vaccine is critically important for improving the nation’s preparedness for an influenza pandemic. HHS is also taking steps toward developing a supply of vaccine to protect against avian influenza strains that could be involved in a pandemic. Experience during the 2004–05 shortage also illustrated the critical role communication plays when demand for vaccine exceeds supply and information about future vaccine availability is uncertain, as could happen in a future annual influenza season or a pandemic. During the 2004–05 shortage, CDC communicated regularly through a variety of media as the situation evolved. State and local officials, however, identified several communication lessons for future seasons or if an influenza pandemic occurred: Consistency among federal, state, and local communications is critical for averting confusion. State health officials reported several cases where inconsistent messages created confusion. Health officials in California, for example, reported that local radio stations in the state were running two public service announcements simultaneously—one from CDC advising those aged 65 years and older to be vaccinated, and one from the state advising those aged 50 years and older to be vaccinated. Disseminating clear, updated information is especially important when responding to changing circumstances. 
Beginning in October 2004, CDC asked individuals who were not in a high-risk group or another priority group to forgo or defer vaccination; this message, however, did not include instructions to check back with their providers later in the season, when more vaccine had become available. According to CDC, an estimated 17.5 million individuals specifically deferred vaccination to save vaccine for those in priority groups; local health officials said that many did not return when vaccine became available. Using diverse media helps reach diverse audiences. During the 2004–05 influenza season, public health officials emphasized the value of a variety of communication methods—such as telephone hotlines, Web sites, and bilingual radio advertisements—to reach as many individuals as possible and to increase the effectiveness of local efforts to raise vaccination rates. In Seattle–King County, Washington, for example, health department officials reported that a telephone hotline was important because some seniors did not have Internet access. Public health officials in Miami-Dade County, Florida, said that bilingual radio advertisements promoting influenza vaccine for those in priority groups helped increase the effectiveness of local efforts to raise vaccination rates. Education can alert providers and the public to prevention alternatives. In the 2004–05 shortage, some of the nasal spray vaccine for healthy individuals went unused, in part because of fears that the vaccine was too new and untested or that the live virus in the nasal spray could be transmitted to others. Further, public health officials we interviewed said that education about all available forms of prevention, including the use of antiviral medications and good hygiene practices, can help reduce the spread of influenza. 
Experience during the 2004–05 influenza vaccine shortage highlights the need to prepare the nation for handling future shortages in either an annual influenza season or an influenza pandemic. In particular, that season’s shortage emphasized the vital need for early planning, mechanisms to make vaccine available, and effective communication to ensure available vaccine is targeted to those who need it most. As our work over the past 5 years has noted, it is important for federal, state, and local governments to develop and communicate plans regarding critical issues—such as how vaccine will be purchased and distributed, which population groups are likely to have priority for vaccination, and what communication strategies are most effective—before we face another shortage of annual influenza vaccine or, worse, an influenza pandemic. For further information about this statement, please contact Marcia Crosse at (202) 512-7119 or [email protected]. Kim Yamane, Assistant Director; George Bogart; Ellen W. Chu; Nicholas Larson; Jennifer Major; and Terry Saiki made key contributions to this statement. Influenza Vaccine: Shortages in 2004–05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005. Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005. Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005. Flu Vaccine: Recent Supply Shortages Underscore Ongoing Challenges. GAO-05-177T. Washington, D.C.: November 18, 2004. Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreaks. GAO-04-1100T. Washington, D.C.: September 28, 2004. Public Health Preparedness: Response Capacity Improving, but Much Remains to Be Accomplished. GAO-04-458T. Washington, D.C.: February 12, 2004. Flu Vaccine: Steps Are Needed to Better Prepare for Possible Future Shortages. GAO-01-786T. Washington, D.C.: May 30, 2001. 
Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001. Influenza Pandemic: Plan Needed for Federal and State Response. GAO-01-4. Washington, D.C.: October 27, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Concern has been rising about the nation's preparedness to respond to vaccine shortages that could occur in future annual influenza seasons or during an influenza pandemic--a global influenza outbreak. Although the timing or extent of a future influenza pandemic cannot be predicted, studies suggest that its effect in the United States could be severe, and shortages of vaccine could occur. For the 2004-05 annual influenza season, the nation lost about half its expected influenza vaccine supply when one of two major manufacturers announced in October 2004 that it would not release any vaccine. GAO examined federal, state, and local actions taken in response to the shortage, including lessons learned. The nation's experience during the unexpected 2004-05 vaccine shortfall offers insights into some of the challenges that government entities will face in a pandemic. GAO was asked to provide a statement on lessons learned from the 2004-05 vaccine shortage and their relevance to planning and preparing for similar situations in the future, including an influenza pandemic. This statement is based on a GAO report, Influenza Vaccine: Shortages in 2004-05 Season Underscore Need for Better Preparation (GAO-05-984), and on previous GAO reports and testimonies about influenza vaccine supply and pandemic preparedness. 
A number of lessons emerged from federal, state, and local responses to the 2004-05 influenza vaccine shortage that carry implications for handling future vaccine shortages in either an annual influenza season or an influenza pandemic. First, limited contingency planning slows response. At the start of the 2004-05 influenza season, when the supply shortfall became apparent, the nation lacked a contingency plan specifically to address severe shortages. The absence of such a plan led to delays and uncertainties on the part of state and local public health entities on how best to ensure access to vaccine by individuals at high risk of severe influenza-related complications. Second, streamlined mechanisms to expedite vaccine availability are key to an effective response. During the 2004-05 shortage, for example, federal purchases of vaccine licensed for use in other countries but not the United States were not completed in time to meet peak demand. Some states' experience also highlighted the importance of mechanisms to transfer available vaccine quickly and easily from one state to another. Third, effective response requires clear and consistent communication. Consistency among federal, state, and local communications is critical for averting confusion. State and local health officials also emphasized the value of updated information when responding to changing circumstances, using diverse media to reach diverse audiences, and educating providers and the public about prevention alternatives. Over the past 5 years, GAO has urged the Department of Health and Human Services (HHS) to complete its plan to prepare for and respond to an influenza pandemic. GAO has reported on the importance of planning to address critical issues such as how vaccine will be purchased and distributed; how population groups will be given priority for vaccination; and how federal resources should be deployed before the nation faces a pandemic. 
On November 2, 2005, HHS released its pandemic influenza plan. GAO did not have the opportunity to review the plan before issuing this statement to determine the extent to which the plan addresses these critical issues. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Police Corps program was established to provide federal financial assistance to (1) prospective police officers who participate in the program (i.e., in the form of college scholarships for baccalaureate or graduate studies); (2) the entity selected and approved to provide basic training to the state’s Police Corps participants, either prior to or following completion of a bachelor’s degree; (3) the state and local law enforcement agencies that ultimately hire these individuals (i.e., they receive $10,000 per year during each of a participant’s first 4 years on the force); and (4) the dependent children of fallen officers. As of September 30, 1999, Police Corps programs were approved for 24 states and the Virgin Islands. Congress first appropriated funding of $10 million for the Police Corps program in fiscal year 1996. Police Corps funding increased to $20 million in fiscal year 1997 and to $30 million each in fiscal years 1998 and 1999. For fiscal year 2000, the appropriation directed that $30 million of available unobligated balances from COPS program funds were to be used for the Police Corps. As currently operated under OJP, the Office of the Police Corps provides funds to participating states, who in turn provide the funds to individual program participants, colleges, approved law enforcement training providers, and law enforcement agencies. In states that wish to participate, the governors must designate a lead agency that will submit a state plan to the Office of the Police Corps and administer the program in the state. Each year the Police Corps invites submission of state Police Corps program plans through a letter to the governor of each state and the appropriate official in the other eligible jurisdictions. States already approved for the program are to submit plans that describe their status, progress, and need for additional participants. Other states apply to participate by submitting a comprehensive state plan. 
The state plan must provide that the designated state lead agency will work in cooperation with local law enforcement liaisons, representatives of police labor and management organizations, and other appropriate agencies to develop and implement interagency agreements. The state also must agree to advertise the availability of Police Corps funds and make special efforts to seek applicants among members of all racial, ethnic, and gender groups but may not deviate from competitive standards for selection. DOJ originally placed the Office of the Police Corps under COPS, which DOJ established in 1994 pursuant to statute with the goal of funding 100,000 new community police officers by the end of the year 2000. However, because the COPS program is legislatively scheduled to end at the close of fiscal year 2000, DOJ asked for and received approval in the Conference report accompanying the Fiscal Year 1999 Omnibus Consolidated and Emergency Appropriation Bill to transfer the Office of the Police Corps to OJP. This transfer took place on December 10, 1998. To determine the extent of, and causes for, delays in Police Corps implementation, we (1) assessed COPS’ and OJP’s respective financial and management practices, (2) reviewed COPS’ and OJP’s respective legal interpretations of Police Corps’ statutory authority, (3) analyzed COPS and OJP reimbursement payment data, (4) reviewed program files at COPS and OJP, and (5) interviewed current and former Police Corps program officials as well as DOJ officials responsible for oversight. To obtain certain states’ perspective on implementation delays, we visited four states— Florida, Maryland, Oregon, and Texas. We selected Maryland and Oregon because they started their programs during the first year that the Police Corps program was funded and received the most funding. We selected Florida because a state university had been delegated state lead agency responsibility. 
We selected Texas because it experienced difficulty becoming fully operational due to issues concerning training program requirements. In each state we interviewed program officials representing the lead agency and the training program; in Maryland and Oregon, we interviewed representatives of law enforcement agencies that had employed Police Corps graduates. To broaden our understanding of the implementation of the Police Corps program, we also conducted structured telephone interviews with Police Corps lead agency representatives of the other 19 states participating in the program at that time (see app. III for the questions we asked). We asked officials to rate possible program problem areas on a four-point scale ranging from “not a reason” to a “very major reason.” Additionally, we conducted telephone interviews with cognizant officials in the governors’ offices of 12 nonparticipating states (see app. IV for the questions we asked). We used the same four-point scale that was used with the participating states to determine whether the possible problems affected program participation. We included an open-ended question that gave respondents the opportunity to identify problem areas not included among those we listed. To obtain information on the provision of Police Corps basic law enforcement training, determine how much assistance was being provided to law enforcement agencies and what it was being used for, and determine how many scholarships had been awarded to dependent children of fallen officers, we reviewed files and interviewed officials at COPS and OJP. In addition, we reviewed Police Corps program legislation, program guidance, correspondence files, participating states’ files, and available studies of the Police Corps program. We also interviewed current and former COPS officials and current officials at OJP. We performed our work between March 1999 and January 2000 in accordance with generally accepted government auditing standards. 
During its first 4 years of operation, the Police Corps program failed to fill most of the available participant slots. As shown in table 1, as of September 30, 1999, 430 (or approximately 43 percent) of the approved 1,007 participant positions had been filled. According to federal and state officials, two of the factors that contributed to this slow start were that (1) COPS dedicated insufficient staff to implement the program, which resulted in delays in providing program guidance and backlogs in processing program applications and reimbursements and (2) the Police Corps statute did not provide funding for states’ administrative or recruiting costs, which slowed program growth in some states and led several states to decline to participate in the program. In addition, statutory language led COPS to operate the Police Corps as a direct reimbursement program, which in turn made it difficult for Congress to determine the status of program funds. The Police Corps statute was enacted in 1994, and funds were specifically appropriated for the program in fiscal year 1996, when Congress provided $10 million. COPS hired a program director for the Police Corps in September 1996. In January 1997, COPS hired a program specialist to (1) receive and process student applications and service agreements; (2) develop standardized forms for student participant applications and requests for reimbursement from participants and institutions; (3) receive, record, and review requests for reimbursements; and (4) respond to inquiries from states and the general public. State officials said that the lack of COPS office staff led to delays in providing formal program guidance. According to state officials, COPS did not provide program guidance for recruiting and selecting participants until May 1997. Several state officials said that their attempts to get directions from COPS in writing or by telephone had failed. 
Similarly, state officials complained about backlogs in reviewing funding applications, conducting state budget reviews, and processing requests for reimbursable payments. For example, officials in all four states that we visited said that their programs experienced significant delays in receiving reimbursement from COPS for training expenditures. In an effort to secure more staffing for the program, in March 1998, COPS notified the House Committee on Appropriations of a proposed reprogramming action that would allow for an increase in staffing for the Office of the Police Corps. In April 1998, the Committee approved this proposed action. As a result COPS dedicated three full-time positions to the Police Corps to supplement the two COPS staff who were already performing Police Corps duties on a full-time basis. COPS officials said that the reason they did not devote more staff to the Police Corps program is that they interpreted their legal authority as not authorizing the payment of federal program administration costs with Police Corps funds. The Department of Justice has not provided us with the legal analysis underlying this position. As a result of this interpretation, COPS determined that it had to pay such costs from COPS operating funds. COPS officials said that, while they made an effort to provide staffing to the Police Corps program, their options were limited because the entire COPS Office was understaffed. COPS officials acknowledged that Police Corps program delays resulted in part from this understaffing. The Police Corps statute states, “There is established in the Department of Justice, under the general authority of the Attorney General, an Office of the Police Corps and Law Enforcement Education,” and the statute lays out the responsibilities of the Office. Although the Police Corps statute is silent regarding the payment of federal administrative costs, we believe that options were available to the COPS office for the payment of these costs. 
In our view, the COPS office could have charged the Police Corps line-item appropriations for fiscal years 1996 through 1998 to pay for these costs. A primary statute dealing with the use of appropriated funds, 31 U.S.C. 1301(a), provides that “Appropriations shall be applied only to the objects for which the appropriations were made except as otherwise provided by law.” However, it does not require, nor would it be reasonably possible, that every item of expenditure be specified in an appropriation act. The spending agency has reasonable discretion in determining how to carry out the objects of the appropriation. This concept is known as the “necessary expense” doctrine. For an expenditure to be justified under the necessary expense doctrine, three tests must be met: (1) the expenditure must bear a logical relationship to the appropriation to be charged; (2) the expenditure must not be prohibited by law; and (3) the expenditure cannot be authorized if it is otherwise provided for under a more specific appropriation or statutory funding mechanism. Under the first test, the key determination is the extent to which the proposed expenditure will contribute to accomplishing the purposes of the appropriation the agency wishes to charge. Clearly, any administrative costs incurred by COPS in implementing the Police Corps program should contribute to accomplishing the purposes of that program. Concerning the second and third tests, the payment of federal administrative costs is not prohibited by law, nor were federal administrative costs otherwise provided for under a more specific appropriation. Thus, the COPS office could have paid these administrative costs from the Police Corps’ line item appropriations. According to COPS officials, the Police Corps statute did not allow for federal reimbursement of states’ administrative or recruiting costs. State officials told us that this lack of reimbursement was the primary reason for slow progress in their programs. 
Under the Police Corps, a state’s designated lead agency is responsible for administering the Police Corps program in that state. The lead agency is obligated to provide overall program management, which includes developing and monitoring the state plan as well as the outreach, selection, and placement of the participants. COPS and state officials said that the lack of administrative and recruiting funds made it difficult for the state lead agencies to meet all of the statutory and policy requirements of the program. Officials in a few states said they discussed withdrawing from the Police Corps program for this reason; however, they did not do so. Officials in the four states that we visited told us that the lack of administrative and recruiting funds slowed the progress of their programs. For example, officials in both Maryland and Oregon indicated that the most serious problem they faced was lack of money for recruitment. Officials in 15 of the 19 participating states in our telephone survey said that the lack of administrative cost reimbursement was a major or very major reason for slow progress in their programs. Also, officials in 8 of the 12 nonparticipating states we contacted said that the lack of administrative cost reimbursement was a primary reason for their decision not to participate in the program. COPS officials said that they were concerned about this shortcoming of the program and made attempts to address it. In each of its three annual reports to the President, the Attorney General, and Congress, the Office of the Police Corps pointed out the need for state recruiting funds for the Police Corps program. In its April 1998 annual report, for example, the Office of the Police Corps at COPS noted that many participating states were working with limited resources and that some states were hesitant to apply to the Police Corps program because of the lack of reimbursement for expenses associated with outreach and selection. 
Similarly, in its April 1999 annual report, the Office of the Police Corps at OJP noted that it would be helpful if states could submit budgets and receive payment for expenses directly associated with recruitment and selection. Under COPS, the Police Corps program was operated as a direct reimbursement program. That is, program payments were made directly to an educational institution, in-service Police Corps officer, approved training provider, or participating law enforcement agency, rather than first being obligated to a state agency for subsequent disbursement. According to DOJ's Associate Attorney General, COPS based its decision to operate the Police Corps program as a direct reimbursement program on the language in the provisions of the statute itself. For example, the statute required the Director to "make scholarship payments . . . directly to the institution of higher education that the student is attending." According to COPS officials, this resulted in large amounts of unobligated funds being carried over from one fiscal year to the next in each of the first 3 years of the program. As of March 1998, when the appropriations hearings for COPS' fiscal year 1999 budget request were held, $57.8 million of the $60 million appropriated for the first 3 years remained unobligated. Under direct reimbursement, funds were not considered obligated when state plans were approved. Instead, COPS considered funds obligated only when an individual check had been sent to a participating college or university, in-service Police Corps officer, approved training provider, or police department. While COPS had committed $57.4 million of the $60 million appropriated, these funds were not obligated and thus still appeared as available funds during the annual appropriations process. This caused concern during the appropriation hearings on COPS' budget for the Police Corps.
Upon assuming responsibility for the Police Corps program in December 1998, OJP increased the Police Corps staff from five to seven positions with the intention of allowing faster processing of applications and response to participants’ questions. In addition, OJP used its authority under 42 U.S.C. 3788(b) to begin establishing interagency agreements with the lead agencies in participating states. These agreements have enabled OJP to (1) obligate Police Corps’ funds at a much faster rate than COPS and (2) begin to make a formula-based payment that may be used to, among other things, help defray states’ administrative and recruiting costs. While these agreements should help, OJP continues to hold to the view, expressed in its 1999 annual report to Congress, that it would be helpful if states could submit budgets and receive payment for expenses directly associated with recruitment and selection. Once a state plan was approved by OJP, the state was to submit a budget to cover estimated payments to participants, colleges or universities, approved training providers, and police departments during the upcoming fiscal year. The interagency agreement contractually allowed for transfer of these funds, along with the formula-based payment, from OJP to the state lead agency once the budget had been approved. Funds were to be obligated at the time an agreement was signed. The interagency agreements obligated money that was committed but unobligated in the previous years under COPS, as well as money from the 1998 and 1999 appropriations. As of September 30, 1999, OJP had signed interagency agreements with 16 states. As shown in table 2, COPS obligated $7.6 million of the $90 million appropriated for the Police Corps program in fiscal years 1996 through 1999. OJP was reimbursed for the remaining $82.4 million in unobligated funds beginning in December 1998. 
As of September 30, 1999, OJP had obligated $51.3 million of these available funds, which left $31.1 million still unobligated. As a part of its interagency agreements with state lead agencies, OJP has begun to make formula-based payments to state lead agencies that can be used to help defray their administrative and recruiting costs. OJP is doing this under the authority of 42 U.S.C. 3788(b), which allows it to enter into interagency agreements with states on a reimbursable basis. Because 42 U.S.C. 3788(b) did not apply to the COPS office, this method of making reimbursements was not available to COPS. Under these interagency agreements, the state lead agencies are to assume primary responsibility for approving and paying Police Corps program expenditures. Under COPS, implementation of the Police Corps program got off to a slower than expected start, and the majority of participant slots remained unfilled. This state of affairs was due to a variety of causes, some of which stemmed from COPS' failure to provide federal administrative funds and adequate staffing for the program, and others—such as the fact that the Police Corps statute did not provide funding for states' administrative and recruiting costs—that were out of its control. COPS transferred the Office of the Police Corps to OJP in December 1998. While OJP has made significant progress in obligating funds and establishing interagency agreements with the participating states, it is too soon to tell whether OJP will succeed in increasing the number of participant slots filled and in continuing to provide program guidance. We provided a draft of this report to the Attorney General for comment. DOJ responded that it had no official comment. However, we met with representatives of the COPS Office and OJP, who provided technical comments on the draft. We incorporated their technical comments where appropriate.
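The obligation figures discussed above can be cross-checked with simple arithmetic. The sketch below is only an illustration of that cross-check (amounts in millions of dollars, taken from the report's own figures):

```python
# Police Corps appropriations and obligations, fiscal years 1996-1999
# (amounts in millions of dollars, as stated in the report)
appropriated = 90.0          # total appropriated for the program
obligated_by_cops = 7.6      # obligated by COPS before the December 1998 transfer

# OJP was reimbursed for the unobligated remainder
transferred_to_ojp = appropriated - obligated_by_cops
assert round(transferred_to_ojp, 1) == 82.4   # matches the $82.4 million cited

obligated_by_ojp = 51.3      # obligated by OJP as of September 30, 1999
still_unobligated = transferred_to_ojp - obligated_by_ojp
assert round(still_unobligated, 1) == 31.1    # matches the $31.1 million cited
```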
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the date of this report. At that time we will send copies of this report to the Honorable Ernest F. Hollings, Ranking Minority Member, Senate Subcommittee on Commerce, Justice, State, the Judiciary, and Related Agencies; and the Honorable Strom Thurmond, Chairman, and the Honorable Charles Schumer, Ranking Minority Member, Senate Judiciary Subcommittee on Criminal Justice Oversight. We will also send copies to the Honorable Harold Rogers, Chairman, and the Honorable Jose E. Serrano, Ranking Minority Member, House Appropriations Subcommittee on Commerce, Justice, State, the Judiciary, and Related Agencies; the Honorable Bill McCollum, Chairman, and the Honorable Robert C. Scott, Ranking Minority Member, House Judiciary Subcommittee on Crime; and the Honorable Janet Reno, Attorney General. We will make copies available to others upon request. If you or your staff have any questions concerning this report, please contact me or Weldon McPhail on (202) 512-8777. Major contributors to this report are acknowledged in appendix V. The Police Corps Act provides funding for basic law enforcement training that is to go well beyond the “minimum standards” training available to police officers in many states. The philosophy of Police Corps training is that to serve effectively on the beat in some of America’s most challenged communities, Police Corps officers must have a solid background in traditional law enforcement, strong analytical abilities, highly developed judgment, and skill in working effectively with citizens of all backgrounds. Police Corps training is to emphasize ethics, community and peer leadership, honesty, self-discipline, physical strength and agility, and weaponless tactics—tactics to protect both officer and citizen in the event of confrontation. 
This philosophy is reinforced through a statutory requirement that Police Corps participants receive a minimum of 16 weeks of basic law enforcement training either prior to or following college graduation. This was being carried out or planned in all of the participating states. In 1998, the Police Corps Act was amended to give states the option of providing an additional 8 weeks of federally funded Police Corps training. While not specifically required by statute, the Guidelines for Training issued by the Office of the Police Corps require participating states to provide law enforcement training in a residential, live-in facility. All of the participating states required or planned to require such training. However, officials in 6 of the 19 states we surveyed indicated that the requirement that training be conducted on a live-in basis, rather than in an 8-hours-per-day nonresidential facility, was a major reason for the slow progress of their Police Corps programs, as they did not have facilities readily available for this purpose. Nine of the 19 participating states in our telephone survey indicated that their Police Corps training preference would be nonresidential or a combination of both residential and nonresidential. The Office of the Police Corps provides financial assistance to state and local law enforcement agencies as an incentive to employ Police Corps participants. Law enforcement agencies that employ Police Corps officers are to receive $10,000 per participant for each year of required service, or $40,000 for each participant who fulfills the 4-year service obligation. As of September 30, 1999, 163 Police Corps participants had completed their degrees and training and were serving in police agencies in 7 states—Kentucky, Maryland, Mississippi, Missouri, North Carolina, Oregon, and South Carolina. As of this same date, state and local police departments with Police Corps officers on the beat had received $960,000 in assistance.
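The assistance amounts follow directly from the statutory rate of $10,000 per officer per year of required service. The helper below is a hypothetical illustration of how the payments scale (it is not part of any actual program accounting system):

```python
def assistance_payment(officers: int, years_served: int) -> int:
    """Assistance owed to a law enforcement agency: $10,000 per Police
    Corps officer for each year of required service, capped at the
    4-year service obligation ($40,000 per officer)."""
    return officers * min(years_served, 4) * 10_000

# One officer completing the full 4-year obligation earns the agency $40,000
assert assistance_payment(1, 4) == 40_000
# The $960,000 paid out as of September 30, 1999, corresponds to
# 96 officer-years of completed service at $10,000 each
assert assistance_payment(96, 1) == 960_000
```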
The Police Corps statute did not place any restrictions on how police departments could use this provided assistance. As a result, the police departments we contacted were using these funds for various purposes. Officials in one police department, for example, said they used the assistance money to cover the expenses of recruiting and selecting officers. Another police department used the funds to employ 10 additional police officers. Officials in one state said they placed assistance money in the general funds to pay police officers' salaries. Table 3 shows Police Corps law enforcement payments to the states that had received payment at the time of our review and how these states used the provided funds. The Police Corps program offers college scholarships to dependent children of police officers killed in the line of duty after the date a participating state joins the program. An eligible dependent may receive up to $30,000 for undergraduate study at any accredited institution of higher education in the United States. Dependent children in this category incur no service or repayment obligation. The application process is noncompetitive. For fiscal years 1996 and 1997, the Office of the Police Corps budgeted sufficient funds to provide 68 scholarships. As of September 30, 1999, 26 of these scholarship positions remained unfilled. According to Police Corps officials, the program was making a strong effort to identify and inform qualified persons about the availability of these scholarships.

Maryland

Background: Has participated in the Police Corps program. Lead agency: The Governor's Office on Crime Control and Prevention. Other participants include the Baltimore Police Department (BPD) and the University of Maryland's Shriver Center, which manages program training. The Police Corps program is seen as a vehicle for broad-based improvements in Maryland policing.

Program funding and accomplishments: As of September 30, 1999, the Maryland Police Corps program had been approved for $10.2 million in funding and 140 participant positions. Seventy-eight of these positions had been filled as of that date. The fiscal year 2000 OJP Interagency Agreement with Maryland authorizes 30 additional participant positions and approximately $4.3 million for costs associated with the 170 participant positions approved to date. The BPD had received $280,000 in assistance payments, which it used to pay the salaries of the 28 Police Corps graduates it had hired. An additional 24 officers had not served long enough for BPD to be eligible for assistance payments. As of September 30, 1999, six dependent children of officers killed in the line of duty had received $84,584 in scholarships.

Program limitations: According to Maryland officials, the lack of reimbursement for administrative and recruitment costs limited the program's ability to fill participant positions. Operation of the program on a reimbursable basis required detailed voucher support, which increased both the state's unfunded administrative burden and the administrative burden at the COPS office, which was understaffed. The resulting delays in reimbursement caused a loss of interest income by the state for the up-front funding of training expenditures. At the beginning of the program, Maryland assumed the task of developing a Police Corps model-training program. The contractor, Science Applications International Corporation, failed to produce a curriculum acceptable to the Office of the Police Corps at COPS. This resulted in COPS' deferral of approval of Maryland's 1997 request for 240 additional participant positions and postponement of its scheduled training.

Oregon

Background: Has participated in the Police Corps program since 1996. Lead agency: The Oregon State Police Criminal Justice Services Division. Other participants include the Oregon Board on Public Safety Standards and Training and the Portland Police Bureau. The Police Corps program is seen as a way to reduce juvenile gang violence through community policing.

Program funding and accomplishments: As of September 30, 1999, Oregon's Police Corps program had been approved for $5.1 million in funding and 80 participant positions. Sixty-nine positions had been filled as of September 30, 1999. The fiscal year 2000 OJP Interagency Agreement with Oregon authorizes 100 additional positions and approximately $2.8 million for costs associated with the 180 participant positions approved to date. The Portland Police Bureau had received $380,000 for employing 38 Police Corps graduates as of that date. Financial support from the Oregon Department of State Police ($50,000) and the Portland Police Bureau ($385,000) enabled Oregon's Police Corps program to overcome the lack of reimbursement for administrative and recruitment costs. As of September 30, 1999, Oregon had provided two dependent children of officers killed in the line of duty with $41,086 in scholarships.

Program limitations: Oregon officials attributed slow program progress to the lack of a formal contractual agreement between COPS and the state, the lack of reimbursement for administrative and recruitment costs, and delays in reimbursement of training-related expenses.

Florida

Background: First participated in the program in 1998. (The Florida Department of Law Enforcement, which initially considered the program, declined to participate in 1996 and 1997 due to the lack of reimbursement of administrative costs, the limiting of the police service requirement to 4 years, and the limited number of training slots, among other reasons.) Lead agency: Florida State University's (FSU) School of Criminology and Criminal Justice. Other participants include the Duval and Hillsborough County Sheriffs Departments and the Tampa and Tallahassee Police Departments. The objectives of the Florida Police Corps program are to (1) recruit college graduates of exceptional promise into the Police Corps, (2) provide an exemplary program of training, and (3) broaden the state's commitment to community policing.

Program funding and accomplishments: As of September 30, 1999, Florida's Police Corps program had been approved for $2.1 million in funding and 30 participant positions. The fiscal year 2000 OJP Interagency Agreement with Florida authorizes 30 additional participant positions and approximately $3.0 million for costs associated with the 60 positions approved to date. In its 1998 plan, Florida indicated its first 30 recruits would start community patrol in May/June 1999. However, various problems (see limitations below) have pushed back Florida's Police Corps program, and as of December 1999, a program official indicated that 15 to 20 college graduates were expected to attend Florida's first training session, scheduled for March 2000. As of September 30, 1999, Florida had not awarded any scholarships to children of officers killed in the line of duty.

Program limitations: According to Florida program officials, the lack of agreement between Florida and COPS on reimbursement of administrative and recruitment costs resulted in many of the 30 participant positions authorized in the 1998 plan remaining unfilled and in postponement of planned training sessions. The FSU Contracts and Grants Department did not believe COPS' approval of its plans was sufficiently authoritative to establish a funded cost account for the Police Corps program. To overcome the lack of administrative and recruitment cost reimbursement, FSU was able to obtain $50,000 from the Florida Department of Law Enforcement to establish a Police Corps account in the FSU Contracts and Grants Department and start recruitment and curriculum development.

Texas

Background: Texas has participated in the Police Corps program since 1997. Lead agency: Texas Commission on Law Enforcement Officer Standards and Education. The state has responsibility for curriculum and training in 105 licensed academies, and the commission is also responsible for Police Corps program administration. The Police Corps program is seen as a way to address the state legislature's concerns about the need for more and better trained officers in small, rural, geographically remote law enforcement agencies.

Program funding and accomplishments: As of September 30, 1999, the Texas Police Corps program had been approved for $3.3 million in funding and 60 participant positions, 44 of which had been filled. Six participants had received their degrees but had yet to be trained. As of September 30, 1999, two dependent children of officers killed in the line of duty had received $34,569 in scholarships.

Program limitations: According to Texas officials, state Police Corps program limitations included lack of administrative funding, inadequate procedures for handling student vouchers, lack of a standardized training curriculum, and inexperienced staff. As of December 1999, Texas had yet to conduct any training due to the lack of a standard Police Corps training curriculum and the Police Corps residential training requirement. One graduate is slated to attend training in Mississippi while Texas is in the process of establishing its own training academy. As of December 1999, several participants had withdrawn from the program because of training delays.

Following is an example of the questionnaire for participating states. Interviews were conducted by telephone. Hello. My name is __________ and I'm with the U.S. General Accounting Office (GAO), the investigative agency of the U.S. Congress. I'm calling to speak with ______________________, whose name was provided by the Department of Justice as a point of contact for your state's Police Corps Program.
Initial Point of Contact: Provide the following information about the initial point of contact. Lead Agency: School of Criminology and Criminal Justice, FSU. Police Corps Web site: _. Provide the following information about the alternate point of contact. When you have the right person on the phone, proceed with:

Hello. My name is ___________, and I'm with the U.S. General Accounting Office (GAO), the investigative agency of the U.S. Congress. We are conducting a study of the Police Corps Program, which was part of the Violent Crime Control and Law Enforcement Act of 1994. Senator Judd Gregg, Chairman of the Subcommittee on Commerce, Justice, State, the Judiciary and Related Agencies, requested this study. The Chairman is most interested in knowing how the Department of Justice (DOJ) has managed program funds. Specifically, the subcommittee is concerned about how funds were obligated during the first 3 years of the program. We were also asked to review the program areas of training, assistance to law enforcement agencies, scholarships to dependent children, and student education. Are you the person I should interview? (If not, obtain alternate interviewee information and provide above.)

A. I'd like to conduct a structured interview with you that should take about 20 minutes. Do you have time to speak with me now? Yes ( ) No ( )

B. When would be a good time for me to call you back? Date and time: ___________________________________

1. In what year did your state first apply for participation in the Police Corps Program?

2. When was your state plan first approved? Date (mo. and yr.)

3. Did your state conduct a feasibility study or any other analysis for participating in the Police Corps Program? Don't know: 3

4. Request that a copy of the feasibility study (and/or other supporting data that is available) be sent to: U.S. General Accounting Office, Suite 1010, World Trade Center, 350 South Figueroa Street, Los Angeles, CA 90071.

5. Was your first plan approved in full or was approval conditional? Full approval: 6 (32 percent); conditional approval: 13 (68 percent)

6. In what areas did DOJ impose conditions?

7. Did the changes required of your plan by DOJ delay the start of your program? If yes, how long in months?

8. I am going to read to you a list of reasons why states may not have made faster progress in the start-up of their Police Corps program. For each reason I read, please indicate whether it was a very major reason, a major reason, a minor reason, or not a reason at all. (Comments provided below.)

9. Did your state Police Corps program experience delay by DOJ in any of the following areas? If yes to any area, please provide comment(s) and also send any available supporting documentation to Marco Gomez (see question 4 above).

10. Also, if "yes," did any of the delays cause adverse impact to your state's Police Corps program? If yes, please explain.

11. Is your state's Police Corps training residential, nonresidential, or a combination of both? Nonresidential: ( ); combination of residential and nonresidential: 2

12. Does DOJ require residential training? Yes: continue with question 13. No: 1, skip to question 15. Don't know: 2, skip to question 15.

13. If "yes," does your state agree with the emphasis on residential training?

14. What is your state's training preference, residential or nonresidential? Combination of residential and nonresidential: 7; don't know: 1

15. Does Police Corps training cover your state's POST requirements? Don't know: ( )

16. If not, is additional training required for your state's Police Corps graduates? Not applicable: 16

17. In which of the following ways does your state promote the Police Corps program? (Read options, and check all that apply.) Job fairs: 11; campus recruitment: 8; other(s): 7. List other(s): Recruitment is continuous and ongoing.

18. Does your state conduct outreach to children of officers killed in the line of duty? Yes: 16, continue with question 19. No: 2, skip to question 20. Don't know: 0, skip to question 20. Not applicable: 1

19. Does your state do outreach to dependent children through: (Read options) General statewide publicity: 0

Please explain how your state meets the requirement to recruit minorities and women. Do you have any other comment about the program you care to share with us? Thank you very much for your help, good-bye.

Following is an example of the questionnaire for nonparticipating states. Interviews were conducted by telephone.

Hello. My name is ______________, and I'm with the U.S. General Accounting Office, the investigative agency of the U.S. Congress. At the request of Congress, we are conducting a study of the Department of Justice Police Corps Program that was included as part of the Violent Crime Control and Law Enforcement Act of 1994. I would like to speak with a representative of [name state] who could answer questions about the Department of Justice's outreach to [name state] and the reasons [name state] is not participating in the program. Are you the right person to speak with? (If not, determine who is.)

A. I'd like to conduct a structured interview with you that should take about 10 minutes. Do you have time to speak with me now? Yes ( ) Go to question 1. No ( )

B. When would be a good time for me to call back? Enter the following information about the interviewee.

1. I am going to read to you a list of reasons why states may not participate in the Police Corps program. For each reason I read, please indicate whether it was a very major reason, a major reason, a minor reason, or not a reason at all for why your state decided not to participate in the program.

2. Did [name state] prepare a feasibility study for participating in the Police Corps Program? Yes: 4; no: 7; don't know: 1

3. (If "yes" in question 2.) Are there data available, other than the feasibility study, in support of the reasons cited above? Yes: 0; no: 12. If yes, request that a copy of the feasibility study (and/or other supporting data that is available) be sent to: Marco F. Gomez, USGAO, Suite 1010, World Trade Center, 350 Figueroa St., Los Angeles, Calif. 90071, or faxed to 213-830-1180.

Ask if there are any other comments about the Police Corps program you care to share with us: _________________________________________________________________

Thank you very much for your help.

In addition to those named above, James Moses, Marco Gomez, Jan Montgomery, Nancy Finley, and Michael Little made key contributions to this report.

Ordering Copies of GAO Reports

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Viewing GAO Reports on the Internet

For information on how to access GAO reports on the Internet, send an e-mail message with "info" in the body to: or visit GAO's World Wide Web Home Page at:

Reporting Fraud, Waste, and Abuse in Federal Programs

To contact GAO FraudNET, use: Web site: http://www.gao.gov/fraudnet/fraudnet.htm; E-mail: [email protected]; Telephone: 1-800-424-5454 (automated answering system) | Pursuant to a congressional request, GAO reviewed the Department of Justice's (DOJ) implementation of the Police Corps program under the Community Oriented Policing Services (COPS) office and, more recently, the Office of Justice Programs. GAO noted that: (1) the Police Corps program got off to a slower than expected start resulting in the majority of participant slots remaining unfilled; (2) as of September 30, 1999, 433 of the 1,007 participant positions funded for fiscal years 1996 through 1998 had been filled; (3) according to federal and state officials, two of the factors that contributed to this slow start were as follows: (a) COPS dedicated insufficient staff to the Police Corps program, which led to delays in providing program guidance, processing program applications and payments, and answering participants' questions about the program; and (b) the Police Corps statute did not provide funding to pay states' costs for program administration or for recruitment and selection of program participants; (4) COPS operation of the Police Corps as a direct reimbursement program made determining program status difficult, as it slowed the rate at which funds were obligated; (5) according to a DOJ official, COPS based its decision to operate the Police Corps program as a direct reimbursement program on the language of the statute; (6) under direct reimbursement, funds were not considered obligated when state plans were approved; (7) instead, COPS considered funds obligated only when an individual check had been sent to a college or university, in-service Police Corps
officer, approved law enforcement training provider, or participating police department; (8) on December 10, 1998, responsibility for the Police Corps program was transferred from COPS to OJP; (9) OJP devoted seven full-time staff positions to process program applications and payments and respond to participant queries faster; (10) under the authority granted OJP under 42 U.S.C. 3788(b), which allowed OJP to enter into interagency agreements with states on a reimbursable basis, OJP opted, through the use of such agreements, to make a formula payment that can be used to help defray states' recruiting and administrative costs; (11) this authority was not available to COPS; (12) while these interagency agreements only recently went into effect, they should make money more readily available to states trying to implement their Police Corps programs; (13) as of September 30, 1999, OJP had obligated $51.3 million of the $82.4 million available to the program; and (14) it is too early to determine the effects of the transfer of the Police Corps program from COPS to OJP on the factors contributing to the slow start. |
Nanotechnology is generally defined as the ability to understand and control matter at the nanoscale (between 1 and 100 nanometers), in order to create materials, devices, and systems with fundamentally new properties and functions specific to that scale. For example, opaque materials, such as copper, become transparent at the nanoscale and inert materials, such as platinum and gold, become chemical catalysts. With the capacity to control and manipulate matter at this scale, nanotechnology promises advances in areas such as new drug delivery systems, more resilient materials and fabrics, stronger materials at a fraction of the weight, more efficient energy conversion, and dramatically faster computer chips. To guide federal development of this technology, the National Nanotechnology Initiative (NNI) was established in fiscal year 2001 to support long-term research and development aimed at accelerating the discovery, development, and deployment of nanoscale science, engineering, and technology. The NNI is a multiagency program involving nanotechnology-related activities of the 25 federal agencies currently participating, including the National Science Foundation (NSF), the Department of Defense, the Department of Energy, the National Institutes of Health (NIH), and the National Institute of Standards and Technology (NIST). See table 1 for a complete listing of federal agencies participating in the NNI as of December 2007. Federal support for nanotechnology research totaled about $1.3 billion in fiscal year 2006. Cumulatively through fiscal year 2006, federal agencies have devoted over $5 billion to nanotechnology research since the NNI’s inception. While not all of the NNI’s participating agencies conduct or sponsor research, in fiscal year 2006, 13 agencies had budgets dedicated to nanotechnology research and development. 
Eight of these 13 agencies devoted some of their research resources to studying the environmental, health, and safety (EHS) risks of nanotechnology. Of these eight agencies, five—EPA, NIH, NIOSH, NIST, and NSF—accounted for almost 96 percent of the research focused on EHS risks in fiscal year 2006. NSF alone accounted for about 56 percent of all federal EHS risk research in fiscal year 2006. See figure 1 for a breakdown of research funds by agency. A number of research and regulatory agencies support research to advance knowledge and information about the potential EHS risks of nanotechnology: The National Institute for Occupational Safety and Health (NIOSH) is a research agency within the Department of Health and Human Services (HHS) that concentrates its research on topics related to human health. NIOSH’s research results in recommendations for preventing work-related injuries, illnesses, and death. It therefore focuses on studies that will improve scientists’ ability to identify potential adverse occupational health effects of nanomaterials. At NIH, another HHS research agency that concentrates on human health, nanotechnology research is generally focused on the development of medical applications and the protection of public health, including research to examine the interaction of nanomaterials with biological systems. Consistent with its mission to advance measurement science, standards, and technology to enhance economic security and improve our quality of life, the National Institute of Standards and Technology (NIST), an agency in the Department of Commerce, develops the measurement techniques required to better characterize potential impacts of nanotechnology. The National Science Foundation (NSF) has the broadest research portfolio relative to nanotechnology and supports research to help meet its mission to promote the progress of science and engineering.
With regard to EHS risks, NSF sponsors research to develop new methods to characterize nanoparticles and investigate the environmental implications and toxicity of nanomaterials. In addition, NSF sponsors a network of research centers that focus on a range of EHS issues including occupational safety during nanomanufacturing and the interaction of nanomaterials and cells. In addition to these research agencies, a number of regulatory agencies also have an interest in developing information about the potential EHS risks of nanotechnology: The Environmental Protection Agency (EPA), which is both a research and regulatory agency, is tasked with protecting human health and the environment. As a result, EPA determined that it needed to develop a better understanding of the potential human health and environmental risks from exposure to nanoscale materials and is therefore focusing its research efforts in this area, among others. The Food and Drug Administration (FDA), another HHS agency, is generally responsible for overseeing the safety and effectiveness of drugs and devices for humans and animals, and of biological products for humans. The agency also is generally responsible for overseeing the safety of color additives, cosmetics, and foods, including food additives and dietary supplements. As a result, FDA is interested in understanding the potential risks posed by nanomaterials used in products under its jurisdiction. The Occupational Safety and Health Administration (OSHA) is a Department of Labor agency whose mission is, in part, to ensure the safety and health of workers by setting and enforcing standards and encouraging continual improvement in workplace safety and health. OSHA is interested in information that would aid in the application of existing health standards—including hazard communication, respiratory protection programs, and laboratory standards—to nanotechnology operations and help determine the need for new standards or guidance products.
The mission of the U.S. Consumer Product Safety Commission (CPSC) is to protect the public from unreasonable risks of serious injury or death from more than 15,000 types of consumer products, including some that may be manufactured with nanomaterials. The NNI is managed within the framework of the National Science and Technology Council’s (NSTC) Committee on Technology. The NSTC is an organization through which the President coordinates science and technology policies across the federal government. The NSTC is managed by the Director of the Office of Science and Technology Policy (OSTP), who also serves as the Science Advisor to the President. The NSTC’s Committee on Technology established the Nanoscale Science, Engineering, and Technology (NSET) subcommittee to coordinate communication between the federal government’s multiagency nanoscale research and development programs. The NSET subcommittee is composed of representatives from any agencies that choose to participate in the NNI (as of January 2008, 25 agencies are involved) and serves as the primary interagency coordination mechanism for nanotechnology-related research. Supporting the NSET subcommittee, the National Nanotechnology Coordinating Office (NNCO) provides day-to-day technical guidance and administrative assistance to prepare multiagency planning, budget, and assessment documents. In addition, the NSET subcommittee has established a number of working groups to help better focus interagency attention and activity on specific issues, such as the Nanotechnology Environmental and Health Implications (NEHI) working group. This group was designed to provide for exchange of information among participating agencies; facilitate the identification, prioritization, and implementation of research; and promote communication to other federal and nonfederal entities. The NEHI working group also coordinates U.S. 
participation in international activities, including the programs of the Organisation for Economic Co-operation and Development. Currently, NEHI membership consists of 16 research and regulatory agencies. See figure 2 for the NNI’s structure. Under the NNI, each agency funds research and development projects that support its own mission as well as the NNI’s goals. While agencies share information on their nanotechnology-related research goals with the NSET subcommittee and NEHI working group, each agency retains control over its decisions on the specific projects to fund. While the NNI was designed to facilitate intergovernmental cooperation and identify goals and priorities for nanotechnology research, it is not a research program. It has no funding or authority to dictate the nanotechnology research agenda for participating agencies. The NNI used its fiscal year 2000 strategic plan and its subsequent updates to delineate a strategy to support long-term nanoscale research and development, among other things. A key component of the 2000 plan was the identification of nine specific research and development areas— known as “grand challenges”—that highlighted federal research on applications of nanotechnology with the potential to realize significant economic, governmental, and societal benefits. Examples of potential breakthroughs cited in this strategic plan included developing materials that are 10 times stronger, but significantly lighter, than steel to make vehicles lighter and more fuel efficient; improving the speed and efficiency of computer transistors and memory chips by factors of millions; and developing methods to detect cancerous tumors that are only a few cells in size using nanoengineered contrast agents. In 2004, the NNI updated its strategic plan and described its goals as well as the investment strategy by which those goals were to be achieved. 
Consistent with the 21st Century Nanotechnology Research and Development Act, the NNI established major subject categories of research and development investment, called program component areas (PCA), that cut across the interests and needs of the participating agencies. These seven areas replaced the nine grand challenges and other nanotechnology investment areas that the agencies had previously used to categorize their nanotechnology research. Six of the seven areas are focused on the discovery, development, and deployment of nanotechnology. The seventh, societal dimensions, consists of two subareas—environmental, health, and safety research; and education and research on ethical, legal, and other societal aspects of nanotechnology. The EHS portion of the societal dimensions PCA accounted for over $37 million in fiscal year 2006. See figure 3 for a breakdown of research funds by PCA. PCAs are intended to provide a means by which the NSET subcommittee, OSTP, the Office of Management and Budget (OMB), Congress, and others may be informed of the relative federal investment in these key areas. PCAs also provide a structure by which the agencies that fund research and development can better direct and coordinate their activities. In response to increased concerns about the potential EHS risks of nanotechnology, in fiscal year 2005, the NSET subcommittee and the agencies agreed to separately report their research funding for each of the two components of the societal dimensions PCA.
The December 2007 update of the NNI’s strategic plan reaffirmed the program’s goals, identified steps to accomplish those goals, and formally divided the societal dimensions PCA into two separate PCAs—”environment, health, and safety” and “education and societal dimensions.” Beginning with the development of the fiscal year 2005 federal budget, agencies have worked with OMB to identify funding for nanoscale research that would be reflected in the NNI’s annual Supplement to the President’s Budget. Specifically, OMB issued guidance that consisted of a definition of nanoscale research and a notice that OMB would work with agencies to identify data for each of the PCAs. OMB analysts reviewed aggregated, rather than project-level, data on research funding for each PCA to help ensure consistent reporting across the agencies. Agencies also relied on definitions of the specific PCAs developed by the NSET subcommittee to determine the appropriate area in which to report research funding. Neither NSET nor OMB provided guidance on whether or how to apportion funding for a single research project to more than one PCA, if appropriate. However, representatives from both NSET and OMB stressed that the agencies were not to report each research dollar more than once. Although the NNI reported that federal agencies in fiscal year 2006 devoted $37.7 million—or about 3 percent of the total of all nanotechnology research funding—to research that primarily focused on studying the EHS risks of nanotechnology, we found that about 18 percent of the EHS research reported by the NNI cannot actually be attributed to this purpose. This was largely due to a reporting structure that did not lend itself to categorizing particular types of projects and limited guidance provided to the agencies by the NNI on how to consistently report EHS research. 
In addition to research reported as being primarily focused on the EHS risks of nanotechnology, some agencies conduct research that is not reflected in the EHS totals provided by the NNI either because they are not considered federal research agencies or because the primary purpose of the research was not to study EHS risks. Overall, 3 percent—or $37.7 million—of the approximately $1.3 billion dedicated to nanotechnology research in fiscal year 2006 was reported as being devoted to studying the EHS risks of nanotechnology. Our review of data on agency funding for 119 projects that were underway in fiscal year 2006 largely confirmed the figures reported by the NNI. Specifically, all but one of the five individual agencies reported the same or greater funding to us than what the NNI reported for fiscal year 2006. EPA reported slightly less to us than it did to the NNI. These discrepancies largely resulted from timing differences between the date the NNI needed the data and the date agency officials finalized their review of fiscal year spending. For example, NIOSH reported $470,000 more to us because it had not included funding for a few projects in its report to the NNI, according to agency officials. Other differences resulted from rounding. As would be expected, our review of the descriptive information on EHS projects found that those agencies with missions directly related to protecting the environment or human health and safety devoted a greater percentage of their nanotechnology research budgets to studying EHS risks. For example, NIOSH reported devoting 100 percent of its fiscal year 2006 nanotechnology research funds to support 23 projects to study EHS risks. These projects focused primarily on worker safety and exposure, such as gathering data on workplace exposure to nanomaterials and evaluating the extent to which particle size affects the toxicity of inhaled nanomaterials.
Similarly, EPA reported devoting 82 percent of its nanotechnology research budget to study EHS risks. This research included human health-focused projects to examine the toxicity of manufactured nanomaterials at the molecular and cellular level, as well as environmentally focused projects to evaluate how nanomaterials disperse and change under different environmental conditions and the extent to which nanomaterials accumulate in the bodies of various animal species. In contrast, we found that agencies with broader missions devoted a smaller portion of their nanotechnology research funds to study EHS issues. For example, NIST, an agency oriented toward measurement science and standards, dedicated 3 percent of its nanotechnology research budget to EHS risks in fiscal year 2006. The majority of its research funding focused on such PCAs as fundamental phenomena and processes; nanoscale devices and systems; and instrumentation research, metrology, and standards. Similarly, NSF dedicated 6 percent of its fiscal year 2006 nanotechnology research funds to research related to EHS risks as compared with 41 percent focused on fundamental phenomena and processes. In fiscal year 2008, funding for both EHS-related research and nanoscale research in general is projected to grow. Overall nanotechnology research is projected to increase in fiscal year 2008 to about $1.4 billion, or an increase of 20 percent over fiscal year 2005 figures. Funding for EHS-related research is expected to increase to approximately $59 million, an increase of 68 percent over fiscal year 2005 levels. As a result, EHS research would grow to about 4 percent of projected nanotechnology research in fiscal year 2008. About 18 percent of the total research dollars reported by the agencies as being primarily focused on the study of nanotechnology-related EHS risks in fiscal year 2006 cannot actually be attributed to this purpose.
Specifically, our analysis found that 22 of the 119 projects funded by five federal agencies were not primarily related to studying EHS risks. These 22 projects accounted for about $7 million of the total that the NNI reported as supporting research primarily focused on EHS risks. Almost all of these projects—20 out of 22—were funded by NSF, with the two additional projects funded by NIOSH. See table 2 for our analysis of the nanotechnology research projects reported as being primarily focused on EHS risks. We found that the primary purpose of many of these 22 projects was to explore ways to use nanotechnology to remediate environmental damage or to identify environmental, chemical, or biological hazards. For example, a number of NSF projects explored the use of nanotechnology to improve water or gaseous filtration systems. In other cases, NSF-funded research was targeted toward developing nanotechnology-based applications to remediate soil or water contamination. In addition, many of the projects NSF reported as having a primary purpose to study EHS risks were part of its efforts to build a national research infrastructure capable of supporting a wide range of nanotechnology-related research. Specifically, NSF sponsors 16 Nanoscale Science and Engineering Centers, many of which devote a portion of their research efforts to EHS risk-related projects. In these cases, NSF apportioned a segment of the Center funding to the EHS category to account for this research. At NIOSH, both projects that we identified as not being primarily focused on studying EHS risks were focused on using nanotechnology to mitigate workplace risks, such as developing advanced sensors that incorporate nanotechnology to detect the presence of toxic gases in the workplace. 
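As a rough check, the funding shares quoted above can be reproduced from the report's own rounded dollar figures. A minimal sketch, assuming the inputs below (in $ millions) are the reported amounts; because those amounts are rounded, the comparisons use loose tolerances rather than exact equality:

```python
# Sanity check of the funding shares quoted in the surrounding text.
# All dollar figures ($ millions) are the report's own rounded numbers.

total_fy06 = 1_300.0   # ~$1.3 billion in total nanotech research, FY2006
ehs_fy06 = 37.7        # research reported as primarily EHS-focused, FY2006
misattributed = 7.0    # ~$7 million (22 projects) not primarily EHS-related
total_fy08 = 1_400.0   # ~$1.4 billion projected total, FY2008
ehs_fy08 = 59.0        # ~$59 million projected EHS research, FY2008

# "about 3 percent of the total of all nanotechnology research funding"
assert round(ehs_fy06 / total_fy06 * 100) == 3

# "about 18 percent of the EHS research ... cannot actually be attributed"
misattributed_share = misattributed / ehs_fy06 * 100   # ~18.6 percent
assert 17 <= misattributed_share <= 20

# "EHS research would grow to about 4 percent" of projected FY2008 funding
assert round(ehs_fy08 / total_fy08 * 100) == 4

# "$59 million, an increase of 68 percent over fiscal year 2005 levels"
fy05_ehs_implied = ehs_fy08 / 1.68                     # ~$35 million in FY2005
assert 34 <= fy05_ehs_implied <= 36
```

The checks pass with the report's rounded inputs, which is consistent with the "about 18 percent" figure resting on roughly $7 million of the $37.7 million reported EHS total.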
We found that the miscategorization of these 22 projects resulted largely from a reporting structure for nanotechnology research that does not easily allow agencies to recognize projects that use nanotechnology to improve the environment or enhance the detection of environmental contaminants, and from the limited guidance available to the agencies on how to consistently report EHS research. From fiscal years 2001 to 2004, the NSET subcommittee categorized federal research and development activities into nine categories, known as “grand challenges,” that included one focused on “nanoscale processes for environmental improvement.” Agencies funded and researchers initiated work on many of these 22 projects under the grand challenges categorization scheme. Starting in fiscal year 2005, NSET adopted a new categorization scheme for agencies to report their nanotechnology research. The new scheme, which was based on PCAs, eliminated the environmental improvement applications research category. Instead, agencies were asked to fund and report research designed to address or understand the risks associated with nanotechnology, as part of the societal dimensions PCA. In essence, the new scheme shifted the focus from applications-oriented research to research focused on the EHS implications of nanotechnology. However, under the new scheme, agencies no longer had a way to categorize environmentally focused research that had already been initiated. As a result, NSF and NIOSH characterized these projects as EHS focused for lack of a more closely related category to place them in, according to program managers. Furthermore, neither NSET nor OMB provided agencies guidance on how to apportion the dollars for a single project to more than one program component area, when appropriate.
This is especially significant for broad, multiphase research projects, such as NSF’s support to develop networks of research facilities with the capability to address a range of nanotechnology-related topics. Of the five agencies we reviewed, only NSF apportioned funds for a single project to more than one PCA. In addition to research reported to the NNI as being primarily focused on the EHS risks of nanotechnology, some agencies conduct research that is not reflected in the EHS totals provided by the NNI either because they are not considered federal research agencies or because the primary purpose of the research was not to study EHS risks. For example, FDA, which does not have a specific research budget and does not generally track nanotechnology research spending, used a portion of its operating funds in fiscal years 2004 through 2007 to undertake 15 research projects to evaluate the potential health risks of nanomaterials in the products that it regulates. One such project focused on sunscreens that contain nanosized particles of titanium dioxide to better understand their potential to be absorbed into the body through the skin. Another project is designed to study the toxicological and immunological responses to nanoparticles that may be used in therapeutic drugs. A fundamental understanding of potential risks will help FDA develop guidance and make future regulatory decisions regarding the manufacture and use of FDA-regulated products using these materials, according to program managers. In addition, as noted in the NNI’s annual Supplement to the President’s Budget, some agencies conduct research that results in information highly relevant to EHS risks but that was not primarily directed at understanding or addressing those risks and therefore is not captured in the EHS total. For example, NIH has research underway to develop drug delivery mechanisms that use nanotechnology. 
While the primary purpose of such research is to develop medical applications using nanotechnology, the research also provides information on how toxic the nanomaterials are, whether they accumulate in body tissues, and how they interact with the body at the cellular and molecular level. Agencies report funding data for such research in other PCAs, such as nanoscale devices and systems, rather than the EHS area. In addition, NIST conducts an array of nanotechnology research to accurately quantify the properties of nanomaterials and determine their size, shape, and chemical composition. This type of information is needed to understand and measure nanomaterials to ensure safe handling and protection against potential health or environmental hazards. However, NIST reports the funding data for such research under other PCAs such as instrumentation research, metrology, and standards. Ongoing agency and NEHI working group efforts to identify and prioritize needed research related to the potential EHS risks of nanotechnology appear reasonable but have not as yet resulted in a comprehensive research strategy to guide EHS research across agencies. We found that the EHS risk research undertaken in fiscal year 2006 addressed a range of EHS topics, was generally consistent with both agency- and NEHI- identified research priorities, and focused on the priority needs within each category to varying degrees. We determined that each agency’s nanotechnology research priorities generally reflect its mission. For example, the priorities identified by FDA and CPSC are largely focused on the detection and safety of nanoparticles in the commercial products they regulate. On the other hand, EHS research priorities identified by NSF reflect its broader mission to advance science in general, and include a more diverse range of priorities, such as the safety and transport of nanomaterials in the environment, and the safety of nanomaterials in the workplace. 
All eight agencies in our review have processes in place to identify and prioritize the research they need related to the potential EHS risks of nanotechnology. Most agencies have developed task forces or designated individuals to specifically consider nanotechnology issues and identify priorities, although the scope and exact purpose of these activities differ by agency. EPA, for example, formed a Nanomaterial Research Strategy Team to craft a long-term, focused plan to guide all of the agency’s nanotechnology research. The strategy, which identifies EPA’s research priorities around four key themes and seven scientific questions, is based in part on the agency’s 2007 “Nanotechnology White Paper” that described scientific issues the agency should consider to help ensure safe development of nanotechnology and to understand the potential risks. At other agencies, particularly those that have little or no funding for nanotechnology research, specific individuals throughout the agency have been tasked to identify and prioritize EHS research needs. For example, CPSC has assigned individual staff responsible for different aspects related to consumer product safety, such as health scientists, to monitor trends in the use of nanomaterials in such products, which helps inform the agency’s nanotechnology research priorities. Once identified, agencies communicate their EHS research priorities to the public and to the research community in a variety of ways, including publication in agency documents that specifically address nanotechnology issues, agency strategic plans or budget documents, agency Web sites, and presentations at public conferences or workshops. In addition to the efforts of individual agencies, the NSET subcommittee has engaged in an iterative prioritization process through its NEHI working group, although this process is not yet complete. 
First, in 2006, NEHI identified but did not prioritize five broad research categories and 75 more specific subcategories of needs where additional information was considered necessary to further evaluate the potential EHS risks of nanotechnology. The report identified these five general research categories as (1) Instrumentation, Metrology, and Analytical Methods; (2) Nanomaterials and Human Health; (3) Nanomaterials and the Environment; (4) Health and Environmental Exposure Assessment; and (5) Risk Management Methods. Second, following efforts to obtain public input on its 2006 report, NEHI released another report in August 2007, in which it distilled the previous list of 75 unprioritized specific research needs into a set of five prioritized needs for each of the five general research categories. The NEHI working group has used these initial steps to identify the gaps between the needs and priorities it has identified and the research that agencies have underway. According to agency and NNI officials, once this gap analysis is complete, NEHI will formulate a long-term, overarching EHS research strategy. According to the August 2007 report, the proposed strategy will list NEHI’s final research priorities, describe current federal EHS research, document the unmet needs, identify opportunities for interagency collaboration, and establish a process for periodic review. As envisioned, the EHS research strategy will serve as guidance for individual agencies as they develop their own research agendas and make funding decisions. NEHI plans to complete this overarching research strategy and issue a report in early 2008, according to NNI officials. Despite the fact that a comprehensive research strategy for EHS research has yet to be finalized, the prioritization processes taking place within individual agencies and the NNI appear so far to be reasonable. 
Numerous agency officials said their agency’s EHS research priorities were generally reflected in both the NEHI working group’s 2006 research needs report and its 2007 research prioritization report. Our comparison of agency nanotechnology priorities to the NNI’s priorities corroborated their statements. Specifically, we found that all but one of the research priorities identified by individual agencies could be linked to one or more of the five general research categories. For example, OSHA’s need for toxicity data and information related to exposure is reflected in the two general research categories of Health and Environmental Exposure Assessment and Nanomaterials and Human Health. According to agency officials, the alignment of agency priorities with the general research categories is particularly beneficial to the regulatory agencies, such as CPSC and OSHA, which do not conduct their own research, but rely instead on research agencies for data to inform their regulatory decisions. In addition, we found that the primary purposes of agency projects underway in fiscal year 2006 were generally consistent with both agency priorities and the NEHI working group’s research categories. Of the 97 projects primarily focused on EHS risks, 43 were focused on Nanomaterials and Human Health, including all 18 of the projects funded by NIH. In addition, EPA, NIOSH, and NSF each undertook research for this general research category. EPA and NSF funded all 25 projects related to Nanomaterials and the Environment. These two general research categories accounted for 70 percent of all projects focused on EHS risks. Reflective of its relatively large EHS research budget and broad mission, NSF sponsored projects in each of the five general research categories. In contrast, all the research projects NIST sponsored were related to Instrumentation, Metrology, and Analytical Methods.
Agency research addressed each of the five general research categories and focused on the priority needs within each category to varying degrees. With the exception of the Human Health category, for which all specific needs were considered a top priority, 43 percent of projects addressed the two highest-priority needs in each category and 37 percent addressed the two lowest-priority needs. For example, 8 of the 11 projects in the Instrumentation, Metrology, and Analytical Methods category focused on the highest-priority need to “develop methods to detect nanomaterials in biological matrices, the environment, and the workplace.” In contrast, of the 25 projects related to Nanomaterials and the Environment, 3 addressed the highest-priority need in the category—”understand the effects of engineered nanomaterials in individuals of a species and the applicability of testing schemes to measure effects”—and 11 addressed the fourth-ranked priority—”determine factors affecting the environmental transport of nanomaterials.” Moreover, although the NEHI working group considered the five specific research priorities related to human health equally important, 19 of the 43 projects focused on a single priority—”research to determine the mechanisms of interaction between nanomaterials and the body at the molecular, cellular, and tissular levels.” See table 3 for a summary of projects by agency and specific NEHI research priority. Despite the fact that the NEHI working group’s priorities reflect individual agency priorities, some environmental and industry groups have called for a more top-down and directed approach to the NNI’s prioritization efforts. In various congressional testimonies and in written comments on the NEHI working group’s draft reports, some groups have suggested that the NNI adopt a stronger, more autonomous role in setting the federal EHS research agenda.
Some of these groups suggest that the NNI should have the authority to direct participating agencies to undertake research in specific EHS areas, its own budget authority, and the ability to shift EHS research dollars among the agencies. Proponents believe that this more centralized approach would help ensure that a cohesive EHS research strategy is implemented in a timely manner and that sufficient resources are dedicated to the highest-priority research. However, such a strategy may not be consistent with historical approaches used to set federal research priorities and would be difficult to implement given how federal research currently is funded. Federal expenditures for research and development are regular budget items and are contained, along with other types of expenditures, within the budgets of more than 20 federal agencies. For some of these agencies, research is a major activity, and for others, it is a smaller part of a much larger set of programs. Centralizing nanotechnology research expenditures in a single existing agency or new agency would be difficult to achieve. In addition, agency officials we spoke with were generally satisfied with the current bottom- up, consensus-based approach. Moreover, they said the process has benefited from the in-depth expertise each agency has developed. For example, NIH played a large role in shaping the priorities for Nanomaterials and Human Health; NIST was heavily involved with Instrumentation, Metrology, and Analytical Methods; and NIOSH was a major contributor to the development of priorities for Health and Environmental Exposure Assessment. Some officials acknowledged that while the current approach has limitations, it benefits from the input of a broader range of stakeholders. According to one official, information bubbles up through the NNI structure and is utilized to inform and create a top-down vision, which then serves to guide agency funding decisions. 
Agency and NNI processes to coordinate research and other activities related to the potential EHS risks of nanotechnology have been generally effective, and have resulted in numerous interagency collaborations. In fact, all eight agencies in this review have collaborated on multiple occasions with other NEHI-member agencies on activities related to the EHS risks of nanotechnology. These EHS-related activities are consistent with the expressed goals of the larger NNI—to promote the integration of federal efforts through communication, coordination, and collaboration. The NEHI working group is at the center of this effort. Regular NEHI working group meetings, augmented by informal discussions, have provided a venue for agencies to exchange information on a variety of topics associated with EHS risks, including their respective research needs and opportunities for collaborations. Interagency collaboration has taken many forms, including joint sponsorship of EHS-related research and workshops, the detailing of staff to other NEHI working group agencies, and various other general collaborations or memoranda of understanding. For example, FDA, NIST, and NIH’s Nanotechnology Characterization Laboratory have initiated formal agreements to collaborate on research to characterize the physical and biological properties of nanomaterials used in cancer diagnosis and treatment. An FDA official said that this arrangement was developed primarily through discussions that occurred as a result of the agencies’ participation in NEHI. 
Participation in NEHI has helped facilitate other types of interagency collaboration, including a 2007 memorandum of understanding between EPA and NSF to create and fund research at a virtual Center for the Environmental Implications of Nanotechnology, the detailing of a CPSC toxicologist to a research laboratory office at EPA, and international conferences on nanotechnology and occupational health sponsored by all NNI agencies, led by NIOSH, in 2005, 2006, and 2007. See table 4 for more examples of interagency collaboration. Furthermore, the NEHI working group has adopted a number of practices GAO has previously identified as essential to helping enhance and sustain collaboration among federal agencies. For example, NEHI’s 2005 “Terms of Reference” clearly defined its purpose and objectives and delineated roles and responsibilities for group members. In addition, collaboration through multiagency grant announcements and jointly sponsored workshops has served as a mechanism to leverage limited resources to achieve increased knowledge about potential EHS risks. Despite the general effectiveness of its collaboration efforts, the NEHI working group has not yet completed an overarching strategy to help align the agencies’ EHS research efforts. A completed strategy, combined with the results of the research needs prioritization process, also will serve as a means to monitor, evaluate, and report on the progress of meeting EHS research needs. In the meantime, the NNI’s annual Supplements to the President’s Budget have described the agencies’ activities related to EHS issues, among other things, and provided a mechanism to reinforce agency accountability and performance. Finally, all agency officials we spoke with expressed satisfaction with their agency’s participation in the NEHI working group, specifically, the coordination and collaboration on EHS risk research and other activities that have occurred as a result of their participation. 
Many officials described NEHI as unique among interagency efforts in terms of its effectiveness. Given limited resources, the development of ongoing relationships between agencies with different missions, but compatible nanotechnology research goals, is particularly important. NIH officials commented that their agency’s collaboration with NIST to develop standard reference materials for nanoparticles may not have occurred as readily had it not been for regular NEHI meetings and workshops. In addition, NEHI has effectively brought together research and regulatory agencies, which has enhanced planning and coordination. Many officials noted that participation in NEHI has frequently given regulators the opportunity to become aware of and involved with research projects at a very early point in their development, which has resulted in research that better suits the needs of regulatory agencies. Participation in NEHI is particularly important for agencies like CPSC, FDA, and OSHA that do not have dedicated budgets for nanotechnology research. Many officials also cited the dedication of individual NEHI working group representatives, who participate in the working group in addition to their regular agency duties, as critical to the group’s overall effectiveness. A number of the members have served on the body for several years, providing stability and continuity that contributes to a collegial and productive working atmosphere. In addition, because nanotechnology is relatively new with many unknowns, these officials said the agencies are excited about advancing knowledge about nanomaterials and contributing to the informational needs of both regulatory and research agencies. Furthermore, according to some officials, there is a shared sense among NEHI representatives of the need to apply lessons learned from the development of past technologies, such as genetically modified organisms, to help ensure the safe development and application of nanotechnology. 
Nanotechnology is likely to affect many aspects of our daily lives in the future as novel drug delivery systems, improved energy storage capabilities, and stronger, lightweight materials are developed and made available to the public. However, for a technology that may become ubiquitous, it is essential to consider the potential risks of using nanotechnology in concert with its potential benefits. The first steps are to identify what is not known about the properties of nanomaterials and what must be known about how these materials interact with our bodies and our environment. The NNI, through its NEHI working group, has begun a process to identify and prioritize both the research needed to better understand potential EHS risks and the gaps between what research is underway and the highest-priority needs. Essential to this process is consistent, accurate, and complete information on the amount of agency research designed to address and understand EHS risks. However, this information is not currently available because the totals reported by the NNI include research that is more closely related to uses of nanotechnology, rather than the risks nanotechnology may pose. Furthermore, agencies currently have limited guidance on how to report projects with more than one research focus across program component areas, when appropriate. As a result, the inventory of projects designed to address these risks is inaccurate and cannot ensure that agencies direct their future research investments appropriately. We recommend that the Director, OSTP, in consultation with the Director, NNCO, and the Director, OMB, provide better guidance to agencies regarding how to report research that has a primary focus to understand or address environmental, health, and safety risks of nanotechnology. We provided CPSC, FDA, EPA, NIH, NIOSH, NIST, NSF, OSHA, and OSTP with a copy of this report for review and comment. 
OSTP generally concurred with the report’s findings and agreed to review the manner in which agencies respond to the current guidance at future NSET meetings. In addition, the Department of Health and Human Services, on behalf of FDA, NIH, and NIOSH, said that the report clearly addressed the three charges that GAO was given and they provided technical comments which we incorporated as appropriate. In its comments, NIST said the report was fair and balanced. EPA, CPSC, NSF, and OSHA neither agreed nor disagreed with our report, and EPA and CPSC provided technical comments that we incorporated as appropriate. See appendices I, II, and III for agency comment letters from OSTP, HHS, and NIST, respectively. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and Members of Congress, the Secretary of Commerce, Secretary of Health and Human Services, the CPSC Commissioner, the EPA Administrator, the FDA Commissioner, the NIH Director, the NIOSH Director, the NIST Director, the NSF Director, the OSHA Administrator, and the OSTP Director. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In addition to the contact person named above, Cheryl Williams (Assistant Director), Nancy Crothers, Elizabeth Erdmann, David Lutter, and Rebecca Shea made key contributions to this report. 
| The National Nanotechnology Initiative (NNI), administered by the Office of Science and Technology Policy (OSTP), is a multiagency effort intended to coordinate the nanotechnology-related activities of 25 federal agencies that fund nanoscale research or have a stake in the results. Nanotechnology is the ability to control matter at the scale of a nanometer--one billionth of a meter. A key research area funded by some federal agencies relates to potential environmental, health, and safety (EHS) risks that may result from exposure to nanoscale materials. Because of concerns about federal efforts to fund and prioritize EHS research, GAO was asked to determine (1) the extent to which selected agencies conducted such research in fiscal year 2006; (2) the reasonableness of the agencies' and the NNI's processes to identify and prioritize such federal research; and (3) the effectiveness of the agencies' and the NNI's process to coordinate this research. GAO reviewed quantitative and qualitative data from five federal agencies that provided 96 percent of fiscal year 2006 funding for EHS research. The NNI reported that in fiscal year 2006, federal agencies devoted $37.7 million--or 3 percent of the $1.3 billion total nanotechnology research funding--to research that was primarily focused on the EHS risks of nanotechnology. However, about 20 percent of this total cannot actually be attributed to this purpose; GAO found that 22 of the 119 projects identified as EHS-related by five federal agencies in fiscal year 2006 were not focused on determining the extent to which nanotechnology poses an EHS risk. Instead, the focus of many of these projects was to explore how nanotechnology could be used to remediate environmental damage or to detect a variety of hazards. 
GAO determined that this mischaracterization is rooted in the current reporting structure, which does not allow these types of projects to be easily categorized, and in the lack of guidance for agencies on how to apportion funding across multiple topics. In addition to the EHS funding totals reported by the NNI, federal agencies conduct other research that is not captured in the totals. This research was not captured by the NNI because either the research was funded by an agency not generally considered to be a research agency or because the primary purpose of the research was not to study EHS risks. Federal agencies and the NNI are currently in the process of identifying and prioritizing EHS risk research needs; the process they are using appears reasonable overall. For example, identification and prioritization of EHS research needs is being done by the agencies and the NNI. The NNI also is engaged in an iterative prioritization effort through its Nanotechnology Environmental and Health Implications (NEHI) working group. NEHI has identified five specific research priorities for five general research categories, but it has not yet completed the final steps of this process, which will identify EHS research gaps, determine specific research needed to fill those gaps, and outline a long-term, overarching EHS research strategy. GAO found that the focus of most EHS research projects underway in fiscal year 2006 was generally consistent with agency priorities and NEHI research categories and that the projects focused on the priority needs within each category to varying degrees. The anticipated EHS research strategy is expected to provide a framework to help ensure that the highest priority needs are met. Agency and NNI processes to coordinate activities related to potential EHS risks of nanotechnology have been generally effective. 
The NEHI working group has convened frequent meetings that have helped agencies identify opportunities to collaborate on EHS risk issues, such as joint sponsorship of research and workshops to advance knowledge and facilitate information-sharing among the agencies. In addition, NEHI has incorporated several practices that are key to enhancing and sustaining interagency collaboration, such as leveraging resources. Finally, agency officials GAO spoke with expressed satisfaction with the coordination and collaboration on EHS risk research that has occurred through NEHI. They cited several factors they believe contribute to the group's effectiveness, including the stability of the working group membership and the expertise and dedication of its members. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Every 4 years, as part of the Quadrennial Defense Review, DOD conducts a comprehensive examination of the national defense strategy, force structure, force modernization plans, infrastructure, budget plan, and other elements of the defense program, and establishes a defense program for the next 20 years. This process helps ensure that DOD can effectively support the broader national security strategy of the United States. The 2001 Quadrennial Defense Review Report was issued shortly after the September 11, 2001, terrorist attacks and outlines a new defense strategy to (1) assure allies and friends that the United States can fulfill its commitments, (2) dissuade adversaries from undertaking activities that threaten U.S. or allied interests, (3) deter aggression and coercion, and (4) decisively defeat any adversary, if deterrence fails. Operation Noble Eagle was an immediate response to the September 11, 2001, terrorist attacks; is intended to directly defend the homeland; and is ongoing. Operation Noble Eagle missions include combat air patrols over major American cities and enhanced security at federal installations. A combat air patrol is an airborne air defense activity involving fighter aircraft patrolling a given area. To support fighter coverage, other military activities have included aerial refueling and airborne early warning; comprehensive radio and radar coverage of the patrolled area; and command and control centers to direct fighter pilots when a threatening aircraft is detected. Concerns about terrorist threats to federal installations increased following the 9-11 attacks; therefore, DOD enhanced installation security to harden facilities against attacks and deter future attacks through the deployment of additional personnel (such as military police). In April 2002, the President approved a revision to DOD’s Unified Command Plan, creating the new U.S. Northern Command. U.S. 
Northern Command was activated on October 1, 2002, and is scheduled to be fully operational on October 1, 2003. Its area of responsibility includes the continental United States, Alaska, Canada, Mexico, and the surrounding waters out to approximately 500 nautical miles, which includes Cuba, the Bahamas, British Virgin Islands, and Turks and Caicos. Figure 1 displays U.S. Northern Command’s area of responsibility as indicated by the darkened boundary line. U.S. Northern Command is responsible for the air, land, and maritime defense of the continental United States. Its mission is to conduct operations to deter, prevent, and defeat threats and aggression aimed at the United States, its territories and interests within assigned areas of responsibility, and as directed by the President or Secretary of Defense, provide military assistance to U.S. civil authorities, including consequence management operations. In June 2002, the President proposed creation of the Department of Homeland Security and in November 2002, Congress approved legislation consolidating 22 federal agencies within the new department. In July 2002, the administration published the National Strategy for Homeland Security, which defines homeland security as a “concerted national effort to prevent terrorist attacks within the United States, reduce America’s vulnerability to terrorism, and minimize the damage and recover from attacks that do occur.” The National Strategy for Homeland Security broadly defines DOD’s contributions to national homeland security efforts to include the prosecution of military missions abroad that reduce the terrorist threat to the United States; military missions conducted within the United States that DOD conducts under extraordinary circumstances with support, as needed, by other agencies; and support to U.S. 
civil authorities under emergency circumstances, where DOD is asked to act quickly and provide capabilities that other agencies do not have or for limited scope missions where other agencies have the lead. In August 2002, DOD proposed the creation of a new Office of the Assistant Secretary of Defense for Homeland Defense. Congress approved it with passage of the Bob Stump National Defense Authorization Act for Fiscal Year 2003. The new office establishes a senior civilian officer within the Office of the Secretary of Defense with a principal focus on the supervision of the homeland defense activities of DOD (i.e., the assistant secretary supervises the execution of domestic military missions and military support to U.S. civil authorities and develops policies, conducts analyses, provides advice, and makes recommendations for these activities as well as emergency preparedness and domestic crisis management matters to the Under Secretary for Policy and the Secretary of Defense). The assistant secretary also supports the development of policy direction to the Commander of U.S. Northern Command and guides the development and execution of U.S. Northern Command plans and activities. The Assistant Secretary of Defense for Homeland Defense is also responsible for representing DOD when interacting with federal, state, and local government entities. In September 2002, the President released The National Security Strategy of the United States of America. The strategy identifies U.S. interests, goals, and objectives vital to U.S. national security; and explains how the United States uses its political, economic, military, and other elements of national power to protect or promote the interests and achieve the goals and objectives identified above. Military and nonmilitary missions differ in terms of roles, duration, acceptance, and capabilities normally employed. 
Generally, military missions are those primary warfighting functions that DOD performs in defense of the nation and at the direction of the President functioning as the Commander-in-Chief. Conversely, in nonmilitary missions, DOD provides military capabilities in support of U.S. civil authorities as directed by the President or Secretary of Defense. Table 1 provides more details on the key differences. Military missions involve warfighting functions, such as campaigns, engagements, or strikes, by one or more of the services’ combat forces. Operations Desert Storm in 1991 and Iraqi Freedom in 2003 are examples of overseas military missions, and Operation Noble Eagle is a domestic military mission started on September 11, 2001, and ongoing today. In the latter mission, the President directed the Commander, North American Aerospace Defense Command (NORAD), to order combat air patrols to identify and intercept suspect aircraft operating in the United States. Because this is a military mission, DOD is the lead federal agency and is prepared to apply its combat power, if needed. Requests for nonmilitary missions generally seek DOD support to help after the impact of natural or man-made disasters, or assist indirectly with law enforcement. These requests are evaluated against criteria contained in DOD’s Directive, Military Assistance to Civil Authorities. DOD’s directive specifies that requests for nonmilitary support be evaluated against the following criteria: legality (compliance with laws), lethality (potential use of lethal force by or against DOD forces), risk (safety of DOD forces), cost (who pays, impact on the DOD budget), appropriateness (whether it is in the interest of DOD to conduct the requested mission), and readiness (impact on DOD’s ability to perform its primary mission). 
According to DOD, in fiscal years 2001 and 2002, it supported over 230 nonmilitary missions, in a variety of settings, such as assisting in fighting wildfires, recovering from tropical storms, providing support for national security special events (such as the presidential inauguration and 2002 Olympic Games), and for other purposes. According to DOD, during this same period, it rejected several missions based on the above criteria. For example, in November 2001, DOD declined a request from the U.S. Capitol Police to provide military medical personnel; however, DOD did not indicate which criteria were used to reach this decision. Since September 11, 2001, the threat of another catastrophic terrorist event has altered some military operations. Before September 11, 2001, DOD generally emphasized deterring and defeating adversaries through overseas power projection, and still does. Since then, DOD has deployed U.S. forces overseas to prosecute the war on terrorism in Afghanistan and elsewhere. Moreover, The National Security Strategy of the United States of America, published after September 11, 2001, emphasizes preventing terrorist attacks against the United States. The strategy states that the immediate focus of the United States will be those terrorist groups having a global reach and any terrorist or nation that sponsors terrorism which attempts to gain or use weapons of mass destruction. Such threats may now be subject to a preemptive strike by U.S. military forces if necessary, to prevent these threats from materializing or reaching the United States. Some operations associated with domestic military missions have also changed to proactively respond to terrorist threats. Prior to September 11, 2001, DOD’s strategy defended air, land, and sea approaches to U.S. territory from military adversaries presumed to originate outside the United States. If necessary, DOD had planned to deploy U.S. military forces within the United States to counter the military threats. 
DOD still plans to do so should these threats emerge in the future. However, the current defense strategy, published in the 2001 Quadrennial Defense Review Report, states that the highest priority of the U.S. military is to defend the homeland from attack by any enemy, which includes terrorists. An example of how domestic military operations have changed to meet terrorist threats can be seen in NORAD operations. Before September 11, 2001, NORAD primarily focused its attention on aircraft approaching U.S. airspace and acted to prevent a hostile aircraft from entering U.S. airspace. Since then, NORAD has expanded its focus so that it now also monitors aircraft operating within the United States as well as aircraft approaching U.S. airspace. Also, before September 11, 2001, NORAD had planned to order Air Force units to intercept military adversaries’ bombers. NORAD still plans to do so if these threats emerge in the future. However, since September 11, 2001, NORAD has also ordered combat air patrols over U.S. cities to prevent terrorist attacks. In another example, before the attacks of 9-11, many federal installations operated at a normal force protection condition or routine security posture that allowed for open access to the installations, in many cases. However, since then, DOD has used additional military personnel to enhance security by verifying identification of all personnel and vehicles entering the installation and conducting patrols of critical infrastructure on the installation. Also, in April 2002, the President approved a revision to DOD’s Unified Command Plan, creating the new U.S. Northern Command, which has responsibility to militarily defend the continental United States and other nearby areas. Moreover, DOD continues to support U.S. civil authorities for nonmilitary missions as it did prior to September 11, 2001. 
The 1878 Posse Comitatus Act prohibits the use of the Army and Air Force “to execute the laws” of the United States except where authorized by the Constitution or acts of Congress. Federal courts have interpreted “to execute the laws” to mean the Posse Comitatus Act prohibits the use of federal military troops in an active role of direct civilian law enforcement. Direct involvement in law enforcement includes search, seizure, and arrest. The act does not apply to military operations at home or abroad, and it does not apply to National Guard personnel when under the direct command of states’ governors. Congress has authorized DOD to use its personnel and equipment in a number of circumstances, for example, to: assist with drug interdiction and other law enforcement functions (10 U.S.C. §124 and 10 U.S.C. §§371-378 (excluding 375)); protect civil rights or property, or suppress insurrection (the Insurrection Statutes; 10 U.S.C. §§331-334); assist the U.S. Secret Service (18 U.S.C. §3056 Notes); protect nuclear materials and assist with solving crimes involving nuclear materials (18 U.S.C. §831); assist with some terrorist incidents involving weapons of mass destruction (10 U.S.C. §382); and assist with the execution of quarantine and certain health laws (42 U.S.C. §§97-98). The President identified as a major homeland security initiative a review of the legal authority for military assistance in domestic security, which would include a review of the Posse Comitatus Act. The President maintained that the “threat of catastrophic terrorism requires a thorough review of the laws permitting the military to act within the United States in order to determine whether domestic preparedness and response efforts would benefit from greater involvement of military personnel and, if so, how.” In addition to this review, Congress directed DOD to review and report on the legal implications of members of the armed forces operating on U.S. 
territory and the potential legal impediments affecting DOD’s role in supporting homeland security. In March 2003, the Commander of U.S. Northern Command stated, “We believe the Act, as amended, provides the authority we need to do our job, and no modification is needed at this time.” According to DOD, on May 29, 2003, DOD informed Congress of the results of its legal review, which concluded that the President has sufficient authority to order the military to provide military support to civilian law enforcement authorities, when necessary. DOD does not believe that the Posse Comitatus Act would in any way impede the nature or timeliness of its response. In response to adjustments in its strategic focus, DOD has created new organizations and is implementing a campaign plan for domestic military missions, but it has not evaluated or adjusted its force structure. The terrorist attacks of September 11, 2001, required that the nation, including DOD, take extraordinary actions on that day. In the new security environment, DOD continues to defend the United States at home against terrorists, which are nontraditional adversaries. We could not assess the adequacy of the organizational changes and the plan at the time of our review because the organizations were not yet fully operational, and the campaign plan was only recently completed. However, DOD has not evaluated its force structure for domestic operations and these forces remain organized, trained, and equipped to fight overseas military adversaries. Domestic military missions provide less opportunity to practice varied skills required for combat and consequently offer limited training value; thus, some forces have not been tailored to perform their domestic military missions. In addition, servicemembers are experiencing high personnel tempo. These factors indicate that the current mission approach may not be sustainable and risks eroding readiness. 
Two new organizations—the Office of the Assistant Secretary of Defense for Homeland Defense and U.S. Northern Command—together provide long-term policy direction, planning, and execution capability, but were not yet fully operational at the time of our review, because they had only recently been established and were not fully staffed. First, the Senate confirmed the President’s nominee to be Assistant Secretary of Defense for Homeland Defense in February 2003. The assistant secretary is to provide overall supervision for domestic military missions and military support to U.S. civil authorities. This office was not fully operational at the time our review was completed, with approximately two-thirds of the staff positions vacant. Second, U.S. Northern Command was activated only in October 2002 and was not planned to be fully operational before October 2003. As of mid-April 2003, only 46 percent of U.S. Northern Command’s staff positions had been filled. According to a U.S. Northern Command official, the command was grappling with the need to conduct its ongoing missions while staffing the command’s remaining positions. The activation of U.S. Northern Command provides unity of command for military activities within the continental United States. Prior to U.S. Northern Command’s activation, U.S. Joint Forces Command provided military forces to defend U.S. territory from land- and sea-based threats while NORAD defended the United States from airborne threats (and still does). The Commander of U.S. Northern Command is also the Commander of NORAD, thereby providing unity of command for air, land, and sea missions. DOD’s planning process requires DOD and the services to staff, train, and equip forces for their military missions as outlined in campaign plans and deliberate plans developed by the combatant commanders, including the Commander of U.S. Northern Command. U.S. Northern Command’s campaign plan was completed in October 2002 and is classified. 
Since the plan was only recently completed, the services have had little time to determine if training and equipment adjustments were needed to support the plan. DOD has not evaluated or adjusted its force structure, which generally remains organized, trained, and equipped to fight military adversaries overseas. However, some forces are not well tailored to perform domestic military missions. When performing domestic military missions, combat units are unable to maintain proficiency in combat skills through practice in normal training. Domestic missions to date generally have required only basic military skills and thus offered limited training value—which can have an adverse effect on unit readiness. In our review, we found that four Army military police combat units guarding federal installations in the United States could not train for battlefield conditions, as the Army requires. Similarly, Air Force fighter units performing domestic combat air patrols were inhibited from executing the full range of difficult, tactical maneuvers with the frequency that the Air Force requires. Moreover, from September 2001 through December 2002, the number of personnel exceeding the established personnel tempo thresholds increased substantially, an indicator that the present force structure may not be sufficient to address the increase in domestic and overseas military missions. To prevent significant near-term attrition from the force, a key concern during periods of high personnel tempo, DOD has used its stop loss authority to prohibit servicemembers affected by the order from leaving the service. Under high personnel tempo, U.S. forces could experience an unsustainable pace that may lead to an erosion of unit readiness for combat if servicemembers leave the service. While on domestic military missions, some servicemembers cannot practice their primary combat training to maintain proficiency. 
During Operation Noble Eagle, DOD provided enhanced domestic installation security and combat air patrols, both of which generally require only basic military skills but offer little opportunity to practice the varied combat skills needed for wartime proficiency. As a result, military readiness may erode. According to Army and Air Force officials, because combat skills for these units are perishable, to maintain or regain proficiency, a resumption of normal combat training may be required before subsequent overseas deployment. Army training focuses on combat mission performance that replicates battlefield conditions. To acquire the skills necessary for combat, each unit commander establishes a mission essential task list consisting of critical tasks that the unit needs to be proficient on to perform its overseas wartime mission. However, the four military police units that we reviewed were often unable to train and, thus, they were unable to maintain proficiency for their required mission essential tasks due to the long Operation Noble Eagle deployments. For example, one unit could not practice for two of its mission essential tasks—to establish and sustain an internment and resettlement facility, and process and account for internees—that it performs in combat. In another example, two military police units could not practice their combat skills, which include providing battlefield control of roads and logistical pipelines. Instead, the four Army military police units from the active, reserve, and National Guard we reviewed were generally guarding gates, checking identification, inspecting vehicles, and conducting security patrols of critical installation infrastructure, such as command and control centers, and housing, shopping, and recreation areas. Moreover, we found that some Army servicemembers on Operation Noble Eagle deployments used skills unrelated to their normal missions. Consequently, their units’ combat proficiency may be at risk. 
Specifically, the Army provided over 8,100 Army National Guard personnel from about 100 units to provide installation security at domestic Air Force bases. However, only one unit, a military police unit, had primary skills relevant to the mission; the remaining units were composed of field artillery, engineer, and infantry personnel that have specialized combat skills, such as providing fire support to tactical combat units; rehabilitating the combat zone to enhance lines of supply and communication; and destroying or capturing the enemy or repelling enemy assaults by fire. None of these units needed its combat skills on its Operation Noble Eagle missions. Similarly, the domestic combat air patrol mission represents another instance where servicemembers cannot always practice their primary combat training for proficiency. To maintain their warfighting skills, fighter pilots perform training sorties when not deployed abroad. Training sorties involve the employment of tactical maneuvers and the use of weapons or weapons simulators against other aircraft or ground targets. For example, an offensive counterair training sortie is designed to train for destroying, disrupting, or degrading enemy air and missile threats located in enemy territory. When on a domestic combat air patrol, a pilot may gain some training benefit by performing certain activities, such as an aerial refueling or a night landing. However, according to several Air Force officials, domestic combat air patrols do not constitute adequate training for overseas combat missions. For example, one Air Force official said that combat air patrols involve little more than flying in a circle making left turns, in contrast to the difficult, tactical, defensive, and offensive maneuvers performed while on a training sortie or possibly on a combat mission.
Air Force fighter units performing domestic combat air patrols are inhibited from executing the full range of difficult, tactical maneuvers with the frequency that the Air Force requires to maintain proficiency for their combat missions. For example, in one of the seven most heavily tasked Air National Guard fighter wings, the average pilot was unable to meet training requirements in 9 out of 13 months between September 2001 and September 2002. Another wing reported that Operation Noble Eagle had resulted in a 5-month period when no training was performed. Even a short-term tasking can inhibit training needed to maintain combat proficiency. According to Air Force officials, three training sorties are generally lost for every short-notice, 4-hour domestic combat air patrol performed. To mitigate the impact on pilot readiness, the Air Force rotates the units tasked to perform domestic combat air patrols when a continuous airborne alert posture is required. In doing so, the Air Force has sought to ensure that all fighter units are able to train sufficiently for overseas combat missions, thereby preserving flexibility in the use of these units for both domestic combat air patrols and for combat missions overseas. However, it is unclear whether managing the force structure in this way fully mitigates the impact on pilot training, particularly during periods of frequently performed domestic combat air patrol missions. According to one Air Force official, under the current force structure, domestic combat air patrols operating at levels experienced in the months after September 11, 2001, would not be sustainable for more than a few weeks before the units began suffering severe training effects and thus an erosion in military readiness. DOD is undertaking planned changes to the Defense Readiness Reporting System, which are designed to assess the impact of homeland defense and civil support missions on the readiness of forces to execute their warfighting mission. 
In March 2003, we reported that as of January 2003, DOD had not developed an implementation plan for the Defense Readiness Reporting System that contained measurable performance goals, identified resources, suggested performance indicators, or included an evaluation plan to assess progress in developing this system. Even though the new system may have the potential to improve readiness reporting, without an implementation plan there is little assurance that the new system will actually improve readiness assessments by the time of its expected full capability, in 2007. Without such a plan, it will also remain difficult to gauge progress toward meeting the 2007 target date. DOD did not agree with the recommendations from our March 2003 report that it (1) develop an implementation plan with, among other things, performance goals that are objective, quantifiable, and measurable, and (2) provide annual updates to Congress on the new readiness reporting system’s development. However, as stated in the March 2003 report, we retained those two recommendations because we continue to believe that it is important for DOD to develop an implementation plan to gauge progress in developing and implementing the new readiness reporting system and to provide annual updates to Congress. Personnel tempo data indicate that the current mission approach is significantly stressing U.S. forces. Between September 2001 and December 2002, personnel tempo increased dramatically for Army and Air Force personnel due to ongoing missions or commitments around the world and increasing support for Operations Noble Eagle and Enduring Freedom. DOD believes that if servicemembers spend too much time away from home, a risk exists that they will leave the service and that military readiness may ultimately suffer. 
Personnel tempo is the amount of time during which a member of the armed forces is engaged in official duties at a location that makes it infeasible to spend off duty time at the member's home, home port (for Navy servicemembers), or civilian residence (for reserve components' personnel). The National Defense Authorization Act for Fiscal Year 2000 requires that DOD formally track and manage the number of days that each member of the armed forces is deployed, and it established two thresholds—servicemembers deployed more than 182 or more than 220 days away from home out of the preceding 365 days. The National Defense Authorization Act for Fiscal Year 2001 established a third threshold, which requires that servicemembers who are deployed for 401 or more days out of the preceding 730-day (2-year) period receive a $100 high deployment per diem allowance. DOD data indicate that tempo is high and increasing for active, reserve, and National Guard personnel. For example, in September 2001, over 6,600 Army personnel had exceeded the first threshold, spending 182 to 219 days away from home during the previous 365 days. By December 2002, that number had risen to over 13,000 (of which Army Reserve and Army National Guard personnel represented about 20 percent). During the same period, the number exceeding the second threshold and spending 220 to 365 days away had risen from about 800 to over 18,000 (about 75 percent of whom were Army Reserve and Army National Guard personnel), as shown in figure 2. The number of Army personnel exceeding the third threshold of 401 or more days away from home in the preceding 730 days increased slightly, starting at about 650 in September 2002 and rising to about 990 (of which about 35 percent were Army Reserve and Army National Guard personnel) in December 2002. The Air Force reported similar trends.
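The statutory thresholds just described reduce to a simple classification rule. The sketch below is ours, not DOD's: the function name and labels are illustrative, and the bins mirror the report's groupings (182 to 219 days and 220 to 365 days out of the preceding 365, and 401 or more days out of the preceding 730):

```python
# Illustrative sketch of the personnel tempo thresholds described above.
# The function name and labels are ours, not DOD terminology.

def thresholds_exceeded(days_away_365, days_away_730):
    """Classify a servicemember's time away against the statutory thresholds.

    days_away_365 -- days deployed out of the preceding 365 days
    days_away_730 -- days deployed out of the preceding 730 days
    """
    exceeded = []
    if 182 <= days_away_365 <= 219:
        exceeded.append("first threshold (182-219 days of preceding 365)")
    elif days_away_365 >= 220:
        exceeded.append("second threshold (220-365 days of preceding 365)")
    if days_away_730 >= 401:
        # The fiscal year 2001 act ties the $100 high deployment
        # per diem allowance to this threshold.
        exceeded.append("third threshold (401+ days of preceding 730)")
    return exceeded

print(thresholds_exceeded(200, 350))   # first threshold only
print(thresholds_exceeded(240, 410))   # second and third thresholds
```

Note that the first two thresholds are mutually exclusive bins over the same 365-day window, while the third is computed over a separate 730-day window and can be exceeded together with either of the others.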
In September 2001, about 2,100 Air Force servicemembers were away from home for 182 to 219 days, but that number had risen to about 8,300 (about 75 percent of whom were Air Force Reserve and Air National Guard personnel) by December 2002. Also, as with the Army, the number of Air Force servicemembers away 220 to 365 days had risen from about 1,600 to over 22,100 (of which Air Force Reserve and Air National Guard personnel represented about 70 percent), as shown in figure 3. The number of Air Force personnel exceeding the third personnel tempo threshold of 401 or more days away from home in the preceding 730-day period also increased during the latter period of 2002, starting at about 3,700 in September 2002 and rising to more than 8,100 (of which Air Force Reserve and Air National Guard personnel represented about 65 percent) in December 2002. DOD believes that the potential exists for retention problems stemming from high personnel tempo. To prevent servicemembers with key skills from leaving the services, and to prevent the erosion in combat capabilities that may stem from attrition, DOD has issued 23 orders since September 11, 2001, under what is known as stop loss authority. These orders affected personnel with designated individual job skills or, in some cases, all of the individuals in specific types of units that were critical for overseas combat and domestic military missions. However, many of the stop loss orders have since been terminated. For example, the Navy's individual stop loss order went into effect on April 27, 2003, and the Navy terminated it in mid-May 2003. Table 2 shows the estimated number of personnel affected by the stop loss orders in effect as of April 30, 2003. Officials from the four services who manage the implementation of these orders cautioned that they are short-term tools designed to maintain unit-level military readiness for overseas combat and domestic military missions.
Moreover, the officials added that the orders are not to be used as a long-term solution to address mismatches or shortfalls in capabilities and requirements, or as a substitute for the routine recruiting, induction, and training of new servicemembers.

DOD must balance domestic and overseas missions with a renewed emphasis on homeland defense. Moreover, current operations both at home and abroad are stressing the forces, as shown in personnel tempo data. Complicating the situation is the fact that some units are not well structured for their domestic missions, cannot practice the varied skills needed to maintain combat proficiency while performing domestic missions, and receive little training value from their assigned domestic duties. Therefore, military force readiness may erode and future personnel retention problems may develop if action is not taken to address these problems.

We recommend that the Secretary of Defense assess domestic military mission requirements and determine if steps should be taken to structure U.S. forces to better accomplish domestic military missions while maintaining proficiency for overseas combat missions.

In written comments on a draft of this report, DOD generally concurred with the need to do an assessment that is expressed in our recommendation. DOD stated that our draft report provides an accurate assessment of DOD's need to balance its domestic and overseas missions with a renewed emphasis on homeland defense. DOD added that our draft report describes the stress that high operational tempo could have on personnel. However, in its comments, DOD stated that it does not believe that an independent force structure assessment is required to better match force structure to perceived new domestic support requirements; rather, DOD stated that force structure changes should be determined through the ongoing force management processes that will culminate with the fiscal year 2005 Quadrennial Defense Review.
If DOD can incorporate a force structure assessment as part of its ongoing force management processes, then it would generally fulfill the intent of our recommendation. However, we believe that DOD should examine the merits of taking actions to alleviate stress on the forces in the near term rather than wait until the fiscal year 2005 Quadrennial Defense Review, because the missions causing the stress are continuing. Based on our analysis of personnel tempo trends through December 2002 and on discussions with officials conducting domestic military missions, we believe that U.S. military force readiness may erode because of the poor match between the types of forces needed for the domestic military missions we reviewed, the forces available, and the limited training value derived from the missions. Moreover, future personnel retention problems may develop in the meantime due to the pace of operations, which consequently may become unsustainable. Additionally, current operations in Iraq, which were not considered in our analysis of military personnel tempo data, can be expected to affect a significant portion of the military force structure for the foreseeable future. Lastly, homeland defense missions are another factor in military personnel tempo because these missions are ongoing. Therefore, we believe our recommendation is valid as originally drafted. DOD's comments are reprinted in appendix II, along with our evaluation of them. In addition, DOD provided technical comments, which we incorporated as appropriate. We conducted our review from July 2002 through April 2003 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to other appropriate congressional committees and the Secretary of Defense. We will also make copies available to other interested parties upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-6020 or e-mail me at [email protected]. The GAO contact and key contributors are listed in appendix III.

To determine how the Department of Defense's (DOD) military and nonmilitary missions differ and how they have changed since September 11, 2001, we conducted in-depth interviews with officials from the Office of the Secretary of Defense, including but not limited to the Office of the Executive Secretary, the Office of the Special Assistant for Homeland Security, the Office of the Assistant Secretary of Defense for Homeland Defense, the Office of the Assistant Secretary of Defense for Reserve Affairs, and the General Counsel; the Joint Staff's J-3 Directorate for Operations and J-5 Directorate for Strategic Plans and Policy; U.S. Joint Forces Command's Joint Force Headquarters for Homeland Security; the Director of Military Support; the U.S. Army Reserve Command; the National Guard Bureau Homeland Defense Office; and the Army and Air National Guard. We visited and met with officials from U.S. Northern Command, who also provided detailed responses to our written questions, which we analyzed and used to continue a dialogue with the officials. We also analyzed documents prepared by U.S. Northern Command and the Joint Force Headquarters for Homeland Security. We reviewed DOD directives that govern civil support missions, including DOD Directive 3025.1, Military Support to Civil Authorities, issued January 15, 1993, and DOD Directive 3025.15, Military Assistance to Civil Authorities, issued February 18, 1997. Also, we analyzed Director of Military Support data for fiscal years 2001 and 2002 to learn about the types of nonmilitary support that DOD provided to federal agencies.
To better understand DOD’s missions, we reviewed key documents such as the Secretary of Defense’s Annual Report to the President and the Congress for 2002, the National Strategy for Homeland Security, The National Security Strategy of the United States, the 2001 Quadrennial Defense Review Report, and the defense strategy issued as part of the 2001 Quadrennial Defense Review Report. To more fully understand the legal context of DOD’s civil support missions in the United States, we reviewed laws and defense directives relevant to DOD’s civilian support activities. We also examined the 1878 Posse Comitatus Act and its restrictions on direct DOD assistance to civilian law enforcement. We identified and examined a series of statutory exceptions to the Posse Comitatus Act. In addition, we reviewed DOD’s directives governing civil support missions and assistance to law enforcement to identify DOD’s criteria for accepting or rejecting requests for such assistance. To assess whether DOD’s organizations, plans, and force structure are adequate to address domestic military missions, we identified DOD’s new organizations and responsibilities with DOD officials and visited the U.S. Northern Command, reviewed plans, and compared the types of domestic missions performed by the forces with their primary missions. Specifically for DOD’s organizations, we reviewed appropriate documents, including the U.S. Northern Command Campaign Plan and the April 2002 revision to the Unified Command Plan, and we discussed organizational changes with knowledgeable officials throughout DOD. We also attended several congressional hearings that addressed the establishment of new organizations and their roles and responsibilities. With respect to understanding how plans address DOD’s domestic missions, we reviewed our prior audit work related to the review of the 2001 Quadrennial Defense Review Report and risk management. 
Also, we discussed DOD’s planning process with an official at the Office of the Secretary of Defense and at U.S. Northern Command and we discussed the development of the campaign plan with U.S. Northern Command officials. To obtain an understanding of whether forces performing domestic military missions are tailored to perform these missions, we selected two Operation Noble Eagle missions performed in the continental United States by DOD forces since September 11, 2001. Specifically, we reviewed installation security provided by Army military police units and combat air patrols flown by Air Force fighter units. We selected these specific missions because: (1) Joint Force Headquarters for Homeland Security officials indicated that Army military police combat units were deploying at high rates due to the events of September 11, 2001, and (2) the combat air patrol mission was the first domestic military mission performed under Operation Noble Eagle. To understand installation security missions, we interviewed officials at U.S. Forces Command; the U.S. Army Reserve Command; and the U.S. Army Training and Doctrine Command. We also visited and interviewed officials at military police combat units that deployed for these missions, including an Army active duty combat support company, an Army Reserve internment and resettlement battalion, and an Army National Guard guard company. We also conducted a 2-day videoconference with command officials from an Army National Guard combat support company. We analyzed documentation such as briefings, mission orders, and training documents from the four units. We selected these military police units judgmentally based on the deployment data received from U.S. 
Forces Command, taking into consideration the number of days the units had performed installation security; the number of personnel deployed on the missions; the type of military police unit involved; whether the unit was from the active Army, Army Reserve, or Army National Guard; and whether the unit completed its mission or would do so prior to the conclusion of our review. To better understand whether the skills required for installation security were well matched to the unit’s primary wartime missions, we compared the required combat training for these units to the types of duties they routinely performed for enhanced installation security. Further, we reviewed Army training regulations and manuals. We also analyzed data pertaining to the Army National Guard deployments to Air Force installations in the continental United States. We determined the types of units that deployed on these missions, including those most frequently deployed, and we examined the primary combat training requirements these units must perform to maintain combat proficiency in their particular specialties. To gain first-hand information about the combat air patrols, we interviewed officials at active duty Air Force and Air National Guard units that performed combat air patrol missions, and analyzed extensive operational, training, and maintenance data. To gain an understanding about operational requirements and command and control issues for combat air patrol missions, we interviewed officials at the Department of the Air Force; the Air National Guard; the Air Force Reserve Command; the Air Combat Command; the Continental United States Region, North American Aerospace Defense Command; and North American Aerospace Defense Command. We selected units to visit based on their participation in combat air patrols since September 11, 2001. 
We obtained and analyzed flying hours and sortie data for fiscal years 2001 and 2002 for fighter (F15 and F16) wings from Air Combat Command, the Air National Guard, and the Air Force Reserve Command. We also obtained and reviewed Air Force training instructions and unit training performance reports. To determine if military personnel experienced increases in time away from home while performing official military duties, we reviewed data for personnel tempo for each of the military services and their respective reserve components for the period October 1, 2000, through December 31, 2002 (the latest data available). The services report their data to the Defense Manpower Data Center under the direction of the Under Secretary of Defense for Personnel and Readiness. We obtained the Army’s data directly from the Army Personnel Command because at the time of our review, the Defense Manpower Data Center did not have the Army’s recent data in its information management system. To gain further insight into the personnel tempo data, we conducted in-depth interviews with officials from the Office of the Secretary of Defense for Personnel and Readiness, the Defense Manpower Data Center, and the Departments of the Army and the Air Force. We also reviewed DOD’s use of stop loss authority by obtaining the stop loss orders and estimates of affected personnel from officials in the Deputy Under Secretary of Defense for Military Personnel Policy, and each of the military services. We discussed the estimates with the officials to determine the most appropriate way to demonstrate the impacts of stop loss orders. We reviewed the data provided by the Army, Army Reserve, Army National Guard, Air National Guard, Air Force, Defense Manpower Data Center, and Army Personnel Command for completeness and reliability. For the analysis of flying hours and military police deployments, we found and corrected some errors in the data. 
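The record-level corrections described in this appendix (eliminating duplicate deployment records and, where an end date was missing, setting it to a date after the analytic period so the servicemember counts as still away from home) can be sketched in a few lines. The field names and sample records below are illustrative stand-ins, not the actual layout of the DOD or Defense Manpower Data Center files:

```python
# Illustrative sketch (not the actual DOD file layout) of two corrections
# applied to the deployment records: dropping duplicate records and
# imputing a missing end date with a date beyond the analytic period,
# so an open-ended record counts as "still away from home."
from datetime import date

ANALYTIC_PERIOD_END = date(2002, 12, 31)
SENTINEL_END = date(2003, 1, 1)  # past the analytic period

def clean_deployments(records):
    """Drop duplicate (member, start) records and fill missing end dates."""
    seen = set()
    cleaned = []
    for record in records:
        key = (record["member_id"], record["start"])
        if key in seen:                 # duplicate deployment record: drop it
            continue
        seen.add(key)
        if record["end"] is None:       # missing end date: assume still away
            record = {**record, "end": SENTINEL_END}
        cleaned.append(record)
    return cleaned

raw = [
    {"member_id": 1, "start": date(2002, 6, 1), "end": date(2002, 9, 1)},
    {"member_id": 1, "start": date(2002, 6, 1), "end": date(2002, 9, 1)},
    {"member_id": 2, "start": date(2002, 11, 15), "end": None},
]
print(clean_deployments(raw))
```

As the appendix notes, this imputation overstates the number of personnel and days away from home whenever a completed deployment simply lacked a recorded end date.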
Specifically, we found errors in the Air Force's flying hour records and corrected the data by incorporating data provided by the affected unit. For military police deployments, we found duplicate deployments in some cases and eliminated the duplicate records. For the analysis of Air Force, Marine Corps, Army, and Navy personnel tempo data, we found and corrected some errors where possible, and did not use the data or specific fields where the data were unreliable or we could not correct the problems. Specifically, for the Air Force data, we eliminated duplicate records and deleted all records of personnel who had overlapping duty dates. For all services, where the personnel tempo end date was missing, we assumed the personnel were still away from home and set the end date to a date after our analytic period. To the extent that the missing date represents completed duties where the end date had not been entered, we are overstating the number of personnel and the extent of days away from home. Through corroborating evidence from comparisons with other DOD data files and our corrections, we confirmed that the data we used present a reliable depiction of the active Army, Army Reserve, Army National Guard, active Air Force, and Air National Guard units involved in Operation Noble Eagle activities, and of Army, Air Force, Navy, and Marine Corps personnel deployments from October 1, 2000, to December 31, 2002.

The following are GAO's comments on the Department of Defense's letter dated June 30, 2003.

1. DOD stated that it is now studying and implementing significant changes in the force structure to better support civil authorities during domestic events. First, during our audit we were not presented with evidence of such studies as they relate to either civil support or homeland defense missions.
Second, in our follow-up conversation with a DOD official concerning this statement, the DOD official did not provide specific information about the scope, content, or completion dates of the studies. Finally, DOD stated that it has adjusted its strategic and operational focus to encompass traditional military threats from hostile states, asymmetric threats posed by terrorists, and asymmetric threats posed by hostile states. Our draft report acknowledged the shifts for traditional military threats and the asymmetric threats posed by terrorists. Based on DOD's comment, we added asymmetric threats posed by hostile states.

2. DOD stated that it is important for the report to note that DOD military forces are not first responders. Rather, DOD provides support as directed by the President or Secretary of Defense using defense capabilities to assist other federal, state, and local authorities in response to their requests. Additionally, DOD stated that our report fails to emphasize that DOD is not the long-term solution to the nation's domestic prevention, response, and recovery requirements. Our report clearly states that DOD assesses requests from civil authorities based upon its own criteria from DOD Directive 3025.15, Military Assistance to Civil Authorities, and that DOD has some discretion to accept or reject these requests. Moreover, DOD suggested that we use this opportunity to recommend a solution involving the fostering of a more robust state and local response structure. We disagree. We did not comment on such a solution in our draft report because this type of assessment was outside the scope of our review. Ultimately, the President and Congress will determine the future role of DOD, if any, in domestic response missions.

3. DOD commented that our draft report does not mention the planned changes to the Defense Readiness Reporting System.
According to DOD, the system’s changes are designed to assess the impact of homeland defense and civil support missions on the readiness of forces to execute their warfighting mission. At DOD’s request, we have incorporated information about this system on page 17. However, in March 2003, we reported that as of January 2003, DOD had not developed an implementation plan for the Defense Readiness Reporting System that contained measurable performance goals, identified resources, suggested performance indicators, or included an evaluation plan to assess progress in developing this system. 4. DOD commented that our draft report used non-standard terminology, referring to military missions (what DOD calls homeland defense) and nonmilitary missions (support to civil authorities). We added language on page 1 (see footnote 1) to establish the meaning of the terms used in our report. 5. DOD stated that it believes it is not clear that homeland defense and support to civil authorities missions are key factors in high personnel tempo. On the contrary, our draft report acknowledges that overseas missions as well as domestic missions contribute to high personnel tempo. Indeed, current personnel tempo could be even higher than is depicted in our draft report because the data displaying high personnel tempo stemming from participation in homeland defense missions or other deployments after December 2002, or from Operation Iraqi Freedom, were not yet fully available at the time of our review. In addition, the personnel tempo data we received from DOD did not record a servicemember’s assigned operation—for example, Operation Noble Eagle. However, we added a statement to footnote 28 in our report that acknowledges this limitation in the personnel tempo data we received. DOD also commented that since 9/11/01, increased requirements have been driven more significantly by overseas operations in Afghanistan, Iraq, and elsewhere in the war on terrorism. 
While DOD may be correct, our report discussed personnel tempo, not requirements. Personnel tempo refers to the amount of time during which a member of the armed forces is engaged in official duties at a location that makes it infeasible to spend off duty time at the servicemember's home, homeport (for Navy servicemembers), or civilian residence (for reserve components' personnel). Therefore, we stand by our finding that high personnel tempo is an indicator that present force structure may not be sufficient to address the increase in domestic and overseas military missions and could lead to an erosion of unit readiness. Lastly, because the assessment of rotating units to maintain combat readiness was outside the scope of our review, we could not evaluate DOD's statements.

6. DOD commented that activities such as mobilization and preparation for war would almost certainly have an impact on the resources available to respond to homeland defense and support to civil authorities missions. DOD added that our draft report leaves the inaccurate impression that this situation is the norm. However, DOD did not specifically point out where the report suggested such an interpretation. We disagree that our report leaves an inaccurate impression, because it does not have statements implying this cause and effect. However, because servicemembers cannot be in both domestic and overseas locations at the same time, we believe that mobilization and preparation for any one mission, even including war, will necessarily make them unavailable for other missions. DOD also commented that it is important to note that, even during Operation Iraqi Freedom, over 200,000 soldiers and airmen were still available after the mobilization. We agree that a significant number of personnel have not been mobilized even during Operation Iraqi Freedom, but it is unclear what DOD's figure means.
DOD did not provide evidence to support this figure, and we believe that, in any case, it is tangential to our point—that, in general, some forces are not optimally suited to perform domestic military missions. We found that some forces' skills are mismatched with the needs of domestic military missions and that these forces lose critical training opportunities. Thus, DOD's statement that 200,000 servicemembers were available does not necessarily signify that these members are well suited for the missions at hand. Lastly, we did not discuss overseas missions at length in this report, because the report reviewed DOD's domestic military missions.

7. DOD commented that when identifying Title 10 statutes that allow federal forces to perform domestic law enforcement missions, the report does not make clear that these missions are based on worst case scenarios and are not the norm. We agree that the use of federal forces to perform law enforcement missions is not the norm. As suggested by each of the authorized uses of federal forces in domestic law enforcement roles that we identified, such uses are in fact the exception rather than the rule. DOD is correct when it states that it undertakes missions to support civil authorities at the direction of the President or the Secretary of Defense, and, as DOD has pointed out, these missions may be undertaken upon requests for assistance from civil authorities.

8. DOD disagreed with our statement on page 14 that domestic military missions to date have offered limited training value because these missions generally have required only basic military skills. DOD stated that basic military skills require practice, just as do the more sophisticated skills. We agree that basic skills also need practice, and our report made clear that, while performing Operation Noble Eagle missions (such as domestic installation security and combat air patrols), forces are able to employ basic military skills.
However, our discussions with service officials revealed that servicemembers were inhibited from executing the full range of difficult tactical maneuvers or from replicating battlefield conditions while deployed on Operation Noble Eagle missions. Moreover, we reviewed DOD training requirements for all the military skills of these forces, both basic and advanced, as well as the DOD requirements for their frequency of practice in order to ensure proficiency. Also, DOD asserts that there will be ample opportunity to increase readiness prior to operational employment. However, DOD did not explain how it could predict the amount of time available to prepare for a future contingency. In any case, based on DOD’s requirements, we have concluded that overall combat readiness may erode. In addition, based on the length or frequency for Operation Noble Eagle deployments that we reviewed, we concluded that although basic military skills have been frequently practiced, combat skills have not generally been practiced. As a result, the combat proficiency of many servicemembers could be jeopardized. Moreover, because DOD did not provide specific criteria for what constitutes the limited scope and duration of domestic missions, we cannot address these comments. Finally, Operation Noble Eagle began on 9/11/01, is continuing, and has no known end in sight, which raises questions about whether this is a “limited duration” mission. Therefore, we stand by our report as originally drafted. 9. In its comments, DOD pointed out that we concluded (now on p. 23) that some units are not well structured for their domestic missions, cannot practice the varied skills needed to maintain combat proficiency while performing domestic missions, and receive little training value from their assigned domestic missions. DOD then asserts that a temporary reduction in a unit’s effectiveness for its primary mission due to homeland security or peacekeeping missions is not necessarily a bad thing. 
A key DOD official explained to us that effectiveness refers to the extent to which a unit was successful in completing a mission to which it was assigned. However, we did not evaluate the extent to which any military units were successful in completing assigned missions; thus, DOD’s comment missed our point. We believe that a unit’s readiness may erode in the future from performing a mission for which it was not designed. DOD also asserted that the ability of units to prepare for and execute a variety of missions with inherent capability adds flexibility. While DOD is apparently asserting that the missions we reviewed are adding flexibility and enhancing responsiveness, DOD did not explain how practicing the basic skills of flying aircraft and standing guard adds flexibility. Consequently, we stand by our conclusion. 10. DOD commented that the report confused the interpretation and application of the Posse Comitatus Act with regard to the use of the military to enforce the laws of the United States. We disagree. Our report identified and summarized laws associated with the 1878 Posse Comitatus Act. We explained the laws’ impact on requests for DOD assistance in domestic law enforcement operations. We also reported that DOD does not believe the act impedes the nature or timeliness of its response. 11. DOD commented that our report indicated that DOD did not complete a congressionally directed legal review on the use of military forces in the United States and any legal impediments affecting DOD’s role in supporting homeland security. We have updated our report to reflect information that DOD has recently provided to us, although DOD did not provide the legal review itself to us. In addition to the person named above, Deborah Colantonio, Richard K. Geiger, Kevin L. O’Neill, William J. Rigazio, Susan K. Woodward, Michael C. Zola, Rebecca Shea, and Arthur L. James Jr. also made key contributions to this report.
Homeland Defense: Preliminary Observations on How Overseas and Domestic Missions Impact DOD Forces. GAO-03-677T. Washington, D.C.: April 29, 2003.
Combating Terrorism: Observations on National Strategies Related to Terrorism. GAO-03-519T. Washington, D.C.: March 3, 2003.
Major Management Challenges and Program Risks: Department of Homeland Security. GAO-03-102. Washington, D.C.: January 2003.
Homeland Security: Management Challenges Facing Federal Leadership. GAO-03-260. Washington, D.C.: December 20, 2002.
Homeland Security: Effective Intergovernmental Coordination Is Key to Success. GAO-02-1013T. Washington, D.C.: August 23, 2002.
Reserve Forces: DOD Actions Needed to Better Manage Relations between Reservists and Their Employers. GAO-02-608. Washington, D.C.: June 13, 2002.
Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001.
Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001.
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001.
Military Personnel: Full Extent of Support to Civil Authorities Unknown but Unlikely to Adversely Impact Retention. GAO-01-9. Washington, D.C.: January 26, 2001.
Combating Terrorism: Federal Response Teams Provide Varied Capabilities: Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000.
Combating Terrorism: Linking Threats to Strategies and Resources. GAO/T-NSIAD-00-218. Washington, D.C.: July 26, 2000.
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999.
Combating Terrorism: Issues to Be Resolved to Improve Counterterrorism Operations. GAO/NSIAD-99-135. Washington, D.C.: May 13, 1999.
Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999.
Combating Terrorism: Observations on Crosscutting Issues. GAO/T-NSIAD-98-164. Washington, D.C.: April 23, 1998.
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998.
Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997.
Combating Terrorism: Federal Agencies’ Efforts to Implement National Policy and Strategy. GAO/NSIAD-97-254. Washington, D.C.: September 26, 1997.

The way in which the federal government views the defense of the United States has dramatically changed since September 11, 2001. Consequently, the Department of Defense (DOD) has adjusted its strategic and operational focus to encompass not only traditional military concerns posed by hostile states overseas but also the asymmetric threats directed at our homeland by both terrorists and hostile states. GAO was asked to review DOD's domestic missions, including (1) how DOD's military and nonmilitary missions differ; (2) how DOD's military and nonmilitary missions have changed since September 11, 2001; (3) how the 1878 Posse Comitatus Act affects DOD's nonmilitary missions; and (4) the extent to which DOD's organizations, plans, and forces are adequate for domestic military missions and the consequent sustainability of the current mission approach.
DOD's military and nonmilitary missions differ in terms of roles, duration, acceptance, and capabilities normally employed. The threat of terrorism has altered some military operations. For example, as of September 11, 2001, the North American Aerospace Defense Command orders combat air patrols over U.S. cities to prevent terrorist attacks. The 1878 Posse Comitatus Act prohibits the direct use of federal military troops in domestic civilian law enforcement, except where authorized by the Constitution or acts of Congress. Congress has expressly authorized the use of the military in certain situations such as to assist with terrorist incidents involving weapons of mass destruction. DOD has established new organizations (such as U.S. Northern Command) and implemented a campaign plan for domestic military missions, but it has not evaluated or adjusted its force structure. GAO did not assess the adequacy of the new organizations or the campaign plan because the organizations were not yet fully operational, and the campaign plan was only recently completed. DOD's force structure is not well tailored to perform domestic military missions and may not be able to sustain the high pace of operations that preceded and followed the attacks on September 11, 2001. While on domestic military missions, combat units are unable to maintain proficiency because these missions provide less opportunity to practice the varied skills required for combat and consequently offer little training value. In addition, from September 2001 through December 2002, the number of servicemembers exceeding the established personnel tempo thresholds increased substantially, indicating that the present force structure may not be sufficient to address the increase in domestic and overseas military missions. As a result, U.S. forces could experience an unsustainable pace that could significantly erode their readiness to perform combat missions and impact future personnel retention.
Based on its experiences with the launching of short-range theater missiles by Iraq during the 1991 Persian Gulf War, DOD concluded that expanded theater missile warning capabilities were needed and it began planning for an improved infrared satellite sensor capability that would support both long-range strategic and short-range theater ballistic missile warning and defense operations. In 1994, DOD studied consolidating various infrared space requirements, such as for ballistic missile warning and defense, technical intelligence, and battlespace characterization, and it selected SBIRS to replace and enhance the capabilities provided by the Defense Support Program. The Defense Support Program is a strategic surveillance and early warning satellite system with an infrared capability to detect long-range ballistic missile launches that has been operational for about 30 years. DOD has previously attempted to replace the Defense Support Program with the Advanced Warning System in the early 1980s; the Boost Surveillance and Tracking System in the late 1980s; the Follow-on Early Warning System in the early 1990s; and the Alert, Locate, and Report Missiles System in the mid-1990s. These attempts failed due to immature technology, high cost, and affordability issues. SBIRS is to use more sophisticated infrared technologies than the Defense Support Program to enhance the detection of strategic and theater ballistic missile launches and the performance of the missile-tracking function. The SBIRS development effort consists of two programs—SBIRS-high and SBIRS-low. SBIRS-high is to consist of four satellites operating in geosynchronous earth orbit and sensors on two host satellites operating in a highly elliptical orbit. SBIRS-high will replace Defense Support Program satellites and is primarily to provide enhanced strategic and theater ballistic missile warning capabilities. 
The SBIRS-high program includes the consolidation of the three existing Defense Support Program ground facilities—two overseas and one in the United States—at a single U.S. ground station to reduce operations and maintenance costs. The program is in the engineering and manufacturing development phase, with a scheduled launch of the first SBIRS-high satellite in fiscal year 2005. The SBIRS-low program is currently in the program definition and risk reduction acquisition phase and is expected to consist of about 24 satellites in low earth orbit, but it could consist of more or fewer satellites, depending on the results of contractor cost and performance studies. The primary purpose of SBIRS-low is to support both national and theater missile defense by tracking ballistic missiles and discriminating between the warheads and other objects, such as decoys, that separate from the missile bodies throughout the middle portion of their flights. Its deployment schedule is tied to fiscal year 2010, the date when these capabilities are needed by the National Missile Defense System. According to DOD, the first SBIRS-low satellites need to be launched in fiscal year 2006 if full deployment is to be accomplished by fiscal year 2010. Due to the importance Congress has placed on the deployment of a National Missile Defense System, Congress has maintained a high level of interest in the SBIRS-low program and has included legislative provisions specifying dates by which the first satellites are to be launched and initial operational capability is to occur. The National Defense Authorization Act for Fiscal Year 2000 is the latest expression of such interest. It defines, in section 231, the SBIRS-low baseline schedule as a program schedule that includes a first launch of a SBIRS-low satellite to be made during fiscal year 2006.
This provision also requires that before the Secretary of the Air Force makes any changes to the SBIRS-low baseline schedule, he must obtain the approval of the Director of the Ballistic Missile Defense Organization. The Air Force’s current SBIRS-low acquisition schedule is at high risk of not delivering the system on time or at cost or with expected performance. Specifically, satellite development and production are scheduled to occur concurrently, and the results of a 1-year flight test that is to test and finalize the design of the satellites will not be available until more than 5 years after the program enters production. The software required for SBIRS-low to perform all its missions is to be developed concurrently with the deployment of the satellites and is not to be completed until more than 3 years after the first SBIRS-low satellites are to be launched. Under the Air Force’s previous schedules for SBIRS-low, the results of an on-orbit flight demonstration of crucial satellite functions and capabilities were to be available and used to support the decision to enter satellite production; however, the current schedule does not provide such test results in time to support the production decision. In February 1999, the Air Force established the current acquisition schedule (see fig. 1) for the SBIRS-low program, which includes a program definition and risk reduction phase, a concurrent development and production phase, and a 1-year on-orbit test with the first six SBIRS-low satellites produced (in two launches of three satellites each). The decision to enter the engineering and manufacturing development and production phases is to be made in the third quarter of fiscal year 2002. The 1-year on-orbit test, which is intended to test and finalize the design of the satellites, will not be completed until January 2008, more than 5 years after development and production are to start. In contrast, under previous schedules (see app.
I), the Air Force had stressed the importance of on-orbit tests, stating that they were critical to support the decision to enter production. According to the Air Force, its decision to enter the engineering and manufacturing development and production phases will now be based on information obtained from the ground demonstrations performed under the program definition and risk reduction contracts and from other completed on-orbit demonstration programs such as the Midcourse Space Experiment and the Miniature Sensor Technology Integration Program. These program results, however, may be of limited utility to SBIRS-low. For example, according to Air Force officials, they plan to use information on midcourse discrimination collected by the Midcourse Space Experiment in their decision concerning SBIRS-low development and production. However, according to DOD’s Director of Operational Test and Evaluation, the Midcourse Space Experiment did not collect discrimination data on objects representative of those that SBIRS-low must be able to discriminate. According to the Air Force, launches are not to be resumed until after the 1-year on-orbit test period has been completed, test results have been reviewed, and modifications, if required, have been made to the remaining satellites. However, the production of satellites will not stop during the 1-year on-orbit test. As a result, by the time the test is to be completed in fiscal year 2008, 9 satellites will have been produced (including the first 6 used for flight-testing), an additional 21 satellites will be in various stages of production, and at least $1.9 billion of the $2.4 billion (then-year) cost for these 30 satellites will have been expended or committed. Because the on-orbit test results for crucial functions and capabilities are not to be available until more than 5 years after the start of production, there is a risk that design changes will be required for satellites in production.
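The scale of the concurrency exposure described above can be illustrated with a simple arithmetic sketch. The figures (9 satellites produced, 21 more in production, and $1.9 billion of the $2.4 billion then-year cost expended or committed by the end of the on-orbit test in fiscal year 2008) are taken from this report; the calculation itself is illustrative only and not part of the report's analysis:

```python
# Illustrative concurrency-exposure calculation using the figures cited in
# this report; the function name and output structure are hypothetical.

def concurrency_exposure(produced, in_production, total_planned,
                         committed_cost, total_cost):
    """Share of satellites and funds committed before test results arrive."""
    exposed = produced + in_production
    return {
        "satellites_exposed": exposed,
        "satellite_share": exposed / total_planned,
        "cost_share": committed_cost / total_cost,
    }

# 9 produced + 21 in production, out of 30 planned; costs in then-year billions.
exposure = concurrency_exposure(produced=9, in_production=21, total_planned=30,
                                committed_cost=1.9, total_cost=2.4)
print(exposure)
```

By the time the test results are available, all 30 planned satellites and roughly 79 percent of their cost would thus already be committed, which is the essence of the design-change risk described above.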
For example, if parts that have already been purchased for the SBIRS-low operational satellites become obsolete because their acquisition was based on the initial system design, new parts may be required, program costs will increase, and the schedule will slip. Changes to the satellite configuration could also affect more than long lead items; modifications may be required to satellite components already produced. In a July 1999 memorandum to the Under Secretary of Defense for Acquisition and Technology, DOD’s Director of Operational Test and Evaluation expressed concern that the new (current) schedule eliminated critical on-orbit experiments that were to be conducted under the flight demonstration. The Director stated that while the restructured program schedule includes ground demonstrations that were previously lacking from the SBIRS-low program, considering the many technical challenges and high risk in the program, DOD must seek every opportunity to obtain early on-orbit experience. According to the Director, many of the functions and capabilities that must be demonstrated (and would have been demonstrated under the flight demonstration) before SBIRS-low exits the program definition and risk reduction phase and enters the engineering and manufacturing development phase are impossible to demonstrate with only ground tests. For example, the Director stated that DOD has no flight experience where two or more satellites in low earth orbit have communicated with each other. He stated that this was challenging because of the dynamically changing positions of orbiting satellites relative to each other and the high data rates needed to transmit data between satellites thousands of kilometers apart. Another area cited by the Director in which DOD has no flight experience is coordinating the operation of acquisition and tracking infrared sensors, both of which are to be mounted on each SBIRS-low satellite.
Specifically, when the acquisition sensor detects the heat from a missile’s booster motor, it must determine and relay highly accurate information on the missile’s position to the tracking sensor. The tracking sensor must then point to the proper location in space, find the missile, and begin tracking the missile. All of these activities must occur within short time frames (seconds) to support missile defense. We have reported on numerous occasions about the risks associated with program concurrency and of initiating production without adequate testing. In a 1990 testimony, we cited the Navy’s F/A-18 aircraft, the Air Force’s B-1B Bomber, and the Navy’s AEGIS Destroyer as examples where a rush to production without adequate testing resulted in increased costs, lower than expected performance, or both. In 1994 and 1995, we reported that programs are often permitted to begin production with little or no scrutiny and that the consequences have included procurement of substantial inventories of unsatisfactory weapons requiring costly modifications to achieve satisfactory performance, and in some cases, deployment of substandard systems to combat forces. In 2000, we reported that programs were allowed to begin production before the contractors and the government had conducted enough testing to know whether the systems’ design would meet requirements. In December 1999, the SBIRS-low program office concluded that development of software to perform all SBIRS-low missions, as originally scheduled, could not be completed 1 year before the scheduled first launch of SBIRS-low satellites in fiscal year 2006. According to the Air Force, this conclusion was based on lessons learned from other programs under which software development efforts were underestimated. As a result, to maintain the fiscal year 2006 first launch, the program office plans to use an evolutionary software development approach under which software is to be developed in increments. 
The software needed to support all SBIRS-low missions will not be completed (ready for use for satellite operations) until March 2010, over 3 years after the first satellites are launched. According to Air Force officials, DOD traditionally completes software required to support satellite systems 1 year before the scheduled first launch of a new satellite system. DOD established this practice to reduce risk by ensuring that all system problems have been identified and resolved, and that the personnel operating the systems have been adequately trained. This was the original plan for the SBIRS-low program. Under the evolutionary approach, software will be developed to support satellite launches, early on-orbit testing, ballistic missile defense, and integration with SBIRS-high, followed by the software needed to support ancillary missions, such as technical intelligence and battlespace characterization. Figure 2 shows the schedule for the incremental development and completion of the software relative to the launch and testing schedule for the SBIRS-low satellites. As figure 2 shows, by the time the on-orbit test period for the first six SBIRS-low satellites is to begin in fiscal year 2007, the first two increments of software are to be completed. According to program office officials, these two increments of software will provide all of the capabilities the ground control system and the satellites need to support and perform the on-orbit test. The third increment, the ground control and space related software required to operate the full satellite constellation in support of ballistic missile defense, is not to be completed until fiscal year 2008. The fourth software increment, which is to be completed in mid-fiscal year 2009, is to integrate SBIRS-low with SBIRS-high. 
The fifth increment, which is to be completed in mid-fiscal year 2010, is to add the software required for SBIRS-low to perform ancillary missions such as technical intelligence, battlespace characterization, and space surveillance. Thus, the software required to support all SBIRS-low missions is not to be completed until over 3 years after the first satellites are launched. While this evolutionary approach reduces schedule pressure for completing the ground control and space software before the first launch in fiscal year 2006, it increases the risk that software may not be available when needed or may not perform as required. Under the traditional approach, all software would have been completed in fiscal year 2005, 1 year before the launch of the first satellites. The SBIRS-low program has high technical risks because some critical satellite technologies have been judged to be immature for the current stage of the program. Specifically, the SBIRS-low program office rated the maturity of five of six critical technologies at levels that constitute high risk that the technologies will not be available when needed. In developing a complex system, an assessment of the maturity levels of critical technologies can provide information on the risks those maturity levels pose if the technologies are to be included in the development. For example, in a previous report, we discuss a tool, referred to as Technology Readiness Levels, that the National Aeronautics and Space Administration and the Air Force Research Laboratory use to determine the readiness of technologies to be incorporated into a weapon system. The readiness levels are measured along a scale of one to nine, starting with paper studies of the basic concept and ending with a technology that has proven itself in actual usage on the intended product.
The Air Force Research Laboratory considers a readiness level of six to be an acceptable risk for a program entering the Program Definition and Risk Reduction phase—the Laboratory considers lower readiness levels at this stage to translate to high program cost, schedule, and performance risks. Reaching a readiness level of six denotes a significant transition point for technology development in which the technology moves from component testing in a laboratory environment to demonstrating a model or prototype in a relevant environment. At our request, the SBIRS-low program office rated the maturity, as of the start of the Program Definition and Risk Reduction phase, of six technologies critical to the success of the SBIRS-low program. The program office rated five of the six technologies at levels that, according to criteria used by the Air Force Research Laboratory, constitute high risk in the ability of the program to meet its objectives. A detailed description of the Technology Readiness Levels is provided in appendix II. As shown in figure 3, SBIRS-low entered the Program Definition and Risk Reduction phase with a number of critical subsystem technologies with maturities below a readiness level of six. 
Specifically, the program office rated the maturity of the (1) scanning infrared sensor, which is to acquire ballistic missiles in the early stages of flight, at a readiness level of four; (2) tracking infrared sensor, which is to track missiles, warheads, and other objects such as debris and decoys during the middle and later stages of flight, at a readiness level of four; (3) fore optics cryocooler and (4) tracking infrared sensor cryocooler, which are needed to cool the tracking sensor optics and other sensor components to enable the sensor to detect missile objects in space, at readiness levels of four; (5) satellite communications crosslinks, which enable satellites to communicate with each other, at a readiness level of five; and (6) on-board computer processors, critical for performing complex and autonomous satellite operations for providing missile warning and location information within short time frames, at a readiness level of six. So critical is each of these subsystem technologies that if one is not available when needed, SBIRS-low would be unable to perform its mission. In sum, five of the six critical technologies are at low maturity levels, causing high program risk. Current DOD acquisition policy and procedures require that assessments be made of the cost and mission effectiveness of space systems relative to alternative terrestrial—land, sea, and air—systems. Despite this requirement, DOD has not adequately analyzed or identified cost-effective alternatives to SBIRS-low, such as a Navy ship-based radar capability, that could satisfy critical missile defense requirements. Compliance with this requirement would seem especially important, given the high risks identified with the SBIRS-low program. Terrestrial alternatives to SBIRS-low are not being considered.
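The readiness ratings reported above can be summarized against the Air Force Research Laboratory's level-six criterion. The sketch below is illustrative only: the technology names and ratings are those reported by the program office in this section, and the threshold of six is the Laboratory's criterion for entering the Program Definition and Risk Reduction phase:

```python
# Illustrative check of the program office's reported readiness levels
# against the Air Force Research Laboratory criterion (level 6).

TRL_THRESHOLD = 6  # acceptable risk for Program Definition and Risk Reduction

# Ratings reported by the SBIRS-low program office, as cited above.
ratings = {
    "scanning infrared sensor": 4,
    "tracking infrared sensor": 4,
    "fore optics cryocooler": 4,
    "tracking infrared sensor cryocooler": 4,
    "satellite communications crosslinks": 5,
    "on-board computer processors": 6,
}

# Technologies below the threshold constitute high program risk.
high_risk = {name: trl for name, trl in ratings.items() if trl < TRL_THRESHOLD}

print(f"{len(high_risk)} of {len(ratings)} critical technologies "
      f"below readiness level {TRL_THRESHOLD}:")
for name, trl in sorted(high_risk.items(), key=lambda kv: kv[1]):
    print(f"  TRL {trl}: {name}")
```

Run against the reported ratings, the check flags five of the six technologies, matching the report's conclusion that only the on-board computer processors meet the Laboratory's criterion.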
While competing SBIRS-low contractors are performing cost and trade studies on the various options that could satisfy program requirements, none of these studies is to consider the cost-effectiveness of terrestrial alternatives. The most recent study assessing alternatives to SBIRS-low was performed in 1994; however, according to an Air Force Space Command official, the study’s scope was focused only on options that would use space-based infrared sensors; terrestrial options were not included. According to Air Force Space Command officials, terrestrial alternatives to SBIRS-low do not exist. Studies on various aspects of the National Missile Defense System by the Ballistic Missile Defense Organization and other organizations have pointed out that alternatives to SBIRS-low may exist. For example, the Ballistic Missile Defense Organization’s June 1999 study, which assessed whether and how the Navy Theater Wide program, a DOD program to develop a ship-based theater missile defense capability, could be upgraded to provide a limited national missile defense capability, cited the potential utility of sea-based radars to a national missile defense capability. Specifically, the report states that properly deployed ship-based radars can provide a forward-based radar warning and tracking function against many of the potential ballistic missile threats to the United States, and that because the radars would be difficult to target due to the mobility and unknown locations of ships, the radars would add robustness against enemy attacks, particularly before SBIRS-low is available. In a 1999 RAND issue paper that dealt with an assessment of the planning for the National Missile Defense System, the authors suggest that ground-based radars could potentially be used to provide midcourse tracking and cueing for interceptors. 
Specifically, they conclude that the planned initial capability of the National Missile Defense System is inadequate and suggest that an interim solution be considered to enhance the system’s capabilities against more sophisticated, larger, and more geographically dispersed ballistic missile threats prior to the next planned enhancement to the missile defense system. They suggest that one aspect of the interim solution could include deploying additional ground-based radars to perform ballistic missile tracking and discrimination functions or, alternatively, speeding the deployment of SBIRS-low. The Air Force is implementing a high-risk acquisition schedule for the SBIRS-low program in an attempt to deploy the system starting in fiscal year 2006 to support the National Missile Defense System. The highly concurrent acquisition schedule has evolved because of design, development, and technology challenges, as well as the importance Congress has placed on the deployment of a National Missile Defense capability. Although the schedule includes on-orbit tests to finalize satellite design and performance, the results will not be available in time to be useful for informed decision-making related to satellite design and production. In addition, the Air Force’s evolutionary software development approach creates risk because it delays completion of the software needed to support all SBIRS-low missions until over 3 years after the first launch of SBIRS-low satellites. Finally, critical satellite technologies that have been judged to be immature for the current phase of the program place program success in peril. Due to these deficiencies, the SBIRS-low program is at high risk of not delivering the system on time or at cost or with expected performance. In spite of the high risk that SBIRS-low will not be available to support the National Missile Defense System when needed, DOD has not identified alternatives or interim solutions.
In order to reduce the cost, schedule, performance, and technical risks in the SBIRS-low program, we recommend that the Secretary of Defense direct the Secretary of the Air Force, with the approval of the Director of the Ballistic Missile Defense Organization, to develop a schedule that reduces concurrency and risks, and that sets more realistic and achievable cost, schedule, and performance goals. In addition, the Secretary of Defense should assess the impact of the revised schedule on the National Missile Defense program and provide the results of the assessment to Congress. We also recommend that the Secretary of Defense direct the Director, Ballistic Missile Defense Organization, to analyze and develop, as appropriate, and in compliance with DOD acquisition policy and procedures, alternative approaches to satisfy critical missile defense midcourse tracking and discrimination requirements in case SBIRS-low cannot be deployed when needed (based on the resulting lower risk SBIRS-low schedule, threat analyses, and missile defense program schedules). In written comments to a draft of this report, DOD generally agreed with our recommendations. DOD also pointed out that it is taking actions that it believes will address our recommendations. These actions begin to address our concerns, but they are not yet completed or approved, and it is not clear yet whether they will fully address the risks identified by our review. Therefore, our recommendations are still relevant. Our first recommendation deals with restructuring the SBIRS-low acquisition schedule to reduce cost, schedule, performance, and technical risks; assessing the impact of the restructured schedule on the National Missile Defense program; and providing the results of the assessment to Congress. 
DOD stated it has developed a proposed update to the SBIRS-low acquisition strategy that it believes addresses our concerns for concurrency in the production phase, while still retaining the fiscal year 2006 first launch date. For example, DOD’s proposed strategy would delay the full operational capability date of the first SBIRS-low constellation by 1 year and allow for additional ground demonstration program activities and on-orbit testing, thus reducing concurrency between production and testing, while maintaining the schedule for implementation of the full constellation. DOD stated that the Ballistic Missile Defense Organization is assessing the impact of this delay on the National Missile Defense program. This proposal has not been approved and will be reviewed for final decision by the Under Secretary of Defense for Acquisition, Technology, and Logistics in May 2001. Since DOD’s proposed update to the SBIRS-low acquisition strategy has not been approved (due to cost concerns) and will not be considered again for approval until May 2001, we did not assess the proposed strategy in any detail. On the surface, the additional on-orbit testing does somewhat reduce production concurrency. However, even with this additional testing, the program still appears to have high concurrency risk, for example, with substantial long lead time procurement before testing results are complete. Therefore, we believe our recommendation is still appropriate in relation to the new proposal or in light of any changes to DOD’s new proposal. With regard to our second recommendation, DOD stated that it has initiated a study to address viable alternatives to SBIRS-low capabilities and will provide the results of the study to the Deputy Secretary of Defense on March 1, 2001. 
While initiation of this study is a good beginning, until it is complete, we cannot assess the extent to which alternatives will be identified and whether critical missile defense requirements allocated to SBIRS-low will be satisfied. DOD’s comments are reprinted in appendix III. DOD also provided separate technical comments that we have incorporated in this report where appropriate. To evaluate risks of the current acquisition schedule, we had discussions with officials of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense for the Comptroller; the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence; the Office of the Director, Operational Test and Evaluation; the Office of Program Analysis and Evaluation; and the Assistant Secretary of the Air Force for Acquisition, all in Washington, D.C. We also held discussions with, and reviewed documents from, officials of the SBIRS program office in Los Angeles, California; the U.S. and Air Force Space Commands, Colorado Springs, Colorado; the Defense Contract Management Agency offices in Van Nuys, California, and Phoenix, Arizona; the Air Force Operational Test and Evaluation Center, Buckley Air National Guard Base, Aurora, Colorado; TRW, Inc., Redondo Beach, California; and Spectrum Astro, Gilbert, Arizona. To evaluate technical risks of the program, we had discussions with, and reviewed documents from, officials of the program office; the Office of the Director, Operational Test and Evaluation; and the Air Force Research Laboratory, Albuquerque, New Mexico. We also discussed technical risks with TRW and Spectrum Astro. To determine whether DOD has assessed alternative approaches to SBIRS-low, we had discussions with the Ballistic Missile Defense Organization, in Washington, D.C.; the program office; and U.S. and Air Force Space Commands. 
We also reviewed two related studies by the Ballistic Missile Defense Organization and the RAND Corporation. We performed our work from May 1999 through December 2000 in accordance with generally accepted government auditing standards. If you or your staff have any questions concerning this report, please call me on (404) 679-1900. The GAO contact and staff acknowledgments are listed in appendix IV. The Department of Defense’s (DOD) original 1995 schedule for Space-Based Infrared System-low (SBIRS)-low called for (1) a launch of a two-satellite flight demonstration—both satellites on one launch vehicle—in the first quarter of fiscal year 1999; (2) a deployment decision in fiscal year 2000 after key technologies and operating concepts were validated by the demonstration satellites; and (3) launches of SBIRS-low satellites—3 satellites per launch vehicle—beginning in fiscal year 2006. According to Air Force officials, the satellite flight demonstration was critical to validate the integration of key technologies and operational concepts that are crucial to national missile defense and other SBIRS missions such as technical intelligence and battlespace characterization. The primary emphasis was to be on the ability to detect and track ballistic missiles and their warheads throughout flight and distinguish between missile warheads and decoys. The Air Force planned to test these satellites’ ability to perform national missile defense functions against live theater and national missile defense targets and to use the demonstration and test results to model and simulate the full performance capability of a constellation of operational SBIRS-low satellites. According to the program officials who established this acquisition strategy, performing this function autonomously while in orbit is one of the most complex and technologically challenging operational concepts ever attempted. 
They also stressed that the two flight demonstration satellites would have provided an informed basis for deciding whether the program was ready to enter the engineering and manufacturing development and production phases of the acquisition process. They stated that a National Missile Defense System with space-based sensors depended on a successful flight demonstration program and that proceeding into the engineering and manufacturing development and production phases before demonstrating this capability would not provide an opportunity to assess lessons learned, thus introducing unacceptable risk into the program. Figure 4 shows the original acquisition schedule for a fiscal year 2006 first launch of SBIRS-low satellites. Under this schedule, the first year of the planned 2-year flight demonstration would have been completed in the first quarter of fiscal year 2000, about the same time the program was scheduled to enter the pre-engineering and manufacturing development phase. The first year results from the demonstration could have influenced requirements development and system design during this phase. The second year of the demonstration would have been completed in the first quarter of fiscal year 2001, about the same time the program was scheduled to enter the engineering and manufacturing development and production phases. Thus, DOD would have had almost 2 years of information on the demonstration satellites' performance to consider in deciding whether the system should enter the engineering and manufacturing development and production phases. DOD did not implement the original schedule because Congress required in the National Defense Authorization Act for Fiscal Year 1996 that DOD establish a program baseline to include a first launch of SBIRS-low satellites in fiscal year 2002.
The Defense Science Board, at DOD’s request, assessed the viability of accelerating the first launch from fiscal year 2006 to fiscal year 2002 and found it would not be viable; however, it did determine that the first launch could be accelerated to fiscal year 2004. Subsequently, DOD informed Congress that the first launch of SBIRS-low satellites could not begin in fiscal year 2002 because technical, funding, and management problems had delayed the scheduled launch of the two demonstration satellites from the first quarter to the third quarter of fiscal year 1999. According to Air Force officials, this delay prevented basing a milestone decision to enter the engineering and manufacturing development and production phases of the SBIRS-low acquisition process, scheduled for the first quarter of fiscal year 2000, on the results of the planned flight demonstration. However, in December 1996, DOD committed to accelerating the first launch of SBIRS-low satellites to fiscal year 2004. Figure 5 shows the acquisition schedule for the flight demonstration and a fiscal year 2004 first launch of SBIRS-low. Under this acquisition schedule, the demonstration satellites were to be launched in the third quarter of fiscal year 1999, two quarters later than scheduled under the original schedule. Consequently, the flight demonstration and the pre-engineering and manufacturing development phase would have run concurrently and the demonstration results could not have influenced the development of requirements and the system design as they could have under the original schedule. However, the first year of the flight demonstration would still have been completed about 4 months before the start of the engineering and manufacturing development and production phases, which were still scheduled to begin in the first quarter of fiscal year 2001 as they were under the original schedule. 
As a result, DOD would have had the information from the first year of the demonstration satellites' performance, which it considered the most critical in deciding whether the system should enter these phases, to support a fiscal year 2004 deployment.

The technology readiness levels used to assess the maturity of the program's critical technologies are defined as follows:

1. Lowest level of technology readiness. Scientific research begins to be translated into applied research and development. Examples might include paper studies of a technology's basic properties.
2. Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative, and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies.
3. Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.
4. Basic technological components are integrated to establish that the pieces will work together. This is relatively "low fidelity" compared to the eventual system. Examples include integration of "ad hoc" hardware in a laboratory.
5. Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include "high fidelity" laboratory integration of components.
6. A representative model or prototype system, which is well beyond the breadboard tested for technology readiness level 5, is tested in a relevant environment. This represents a major step up in a technology's demonstrated readiness. Examples include testing a prototype in a high fidelity laboratory environment or in a simulated operational environment.
7. Prototype near or at planned operational system. This represents a major step up from technology readiness level 6, requiring the demonstration of an actual system prototype in an operational environment, such as in an aircraft, vehicle, or space. Examples include testing the prototype in a test bed aircraft.
8. Technology has been proven to work in its final form and under expected conditions. In almost all cases, this technology readiness level represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.
9. Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. In almost all cases, this is the end of the last "bug fixing" aspects of true system development. Examples include using the system under operational mission conditions.

The following are our comments on DOD's letter dated December 14, 2000.

1. We are cognizant of the fact that the National Missile Defense program is driving the need date for SBIRS-low and not the converse. We do not intend to suggest that the SBIRS-low acquisition schedule be a driver for the National Missile Defense program schedule. Our primary goal in making this recommendation is to help ensure SBIRS-low is acquired at lower risk and will satisfy critical missile defense requirements. This is why we are also making our second recommendation—to develop alternative approaches to satisfy critical missile defense midcourse tracking and discrimination requirements in case SBIRS-low (under a new lower risk schedule) cannot be deployed when needed.

2. We disagree that we misstate the risks of the SBIRS-low incremental software development strategy. We recognize that an evolutionary, or incremental, approach to software development is valid.
However, an acquisition approach such as the original SBIRS-low approach that calls for the completion of all software prior to the first launch poses less risk than one that does not, that is, the evolutionary or current approach. From the perspective of meeting the schedule for a first launch in fiscal year 2006, the evolutionary software development approach may reduce schedule risk because, according to the Air Force, the first launch date would be unachievable under the original strategy due to an underestimation of the software development effort. However, from the perspective of comparing the evolutionary software development approach with the original approach, there is increased program risk associated with the evolutionary approach because there is less assurance the software will be completed when needed with the mission capabilities specified.

3. While we agree that a revised acquisition strategy would likely increase costs, cost increases associated with program delays or rework could also occur under the current schedule. Due to the highly concurrent acquisition schedule, we believe that there is substantial risk that delays and rework resulting from the production of hardware and software that fail to satisfy requirements may occur—resulting in cost increases if the current schedule is strictly adhered to. We believe that early effort to understand acquisition options and the associated costs is important.

In addition to the person named above, Ted B. Baird, Richard Y. Horiuchi, and Robert W. Stewart of the Denver Field Office and David G. Hubbell and Dale M. Yuge of the Los Angeles Field Office made key contributions to this report.

The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.
Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013

Orders by visiting: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC

Orders by phone: (202) 512-6000; fax: (202) 512-6061; TDD: (202) 512-2537

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Web site: http://www.gao.gov/fraudnet/fraudnet.htm; e-mail: [email protected]; 1-800-424-5454 (automated answering system)

Summary: The Pentagon considers defenses to counter attacks from ballistic missiles, both long-range strategic and shorter-range theater missiles, to be critical to U.S. national security. The Air Force is developing a new satellite system, called Space-Based Infrared System-low (SBIRS-low), to expand the military's infrared satellite capabilities for supporting ballistic missile defenses. GAO reviewed the Defense Department's (DOD) efforts to acquire SBIRS-low. Specifically, GAO (1) evaluated the cost, schedule, and performance risks of the current acquisition schedule; (2) evaluated the program's technical risks; and (3) determined whether DOD has assessed alternative approaches to SBIRS-low. GAO found that the Air Force's current SBIRS-low acquisition schedule is at high risk of not delivering the system on time, at cost, or with expected performance. SBIRS-low has high technical risks because some critical satellite technologies have been judged immature for the current stage of the program. DOD acquisition policy and procedures require that the cost and mission effectiveness of space systems be assessed relative to alternative terrestrial systems.
However, the Air Force has not analyzed or identified terrestrial alternatives to the SBIRS-low system because, according to Air Force Space Command officials, terrestrial alternatives do not exist. Nevertheless, studies on various aspects of the National Missile Defense system by the Ballistic Missile Defense Organization and other groups have pointed out alternatives to SBIRS-low, such as sea- or land-based radar.
Since 1974, the SSI program, under title XVI of the Social Security Act, has provided benefits to low-income blind and disabled persons—adults and children—who meet financial eligibility requirements and SSA's definition of disability. SSA determines applicants' financial eligibility; state DDSs determine their medical eligibility. DDSs are state agencies that are funded and overseen by SSA. To meet the financial test, children must be in families with limited incomes and assets. In 1994, children's federally administered SSI payments totaled $4.52 billion. Depending on the family's income, an eligible child can receive up to $458 per month in federal benefits; 27 states also offer a supplemental benefit payment. Because SSI is an individual entitlement, no family cap exists on the amount of benefits received in a household. With SSI eligibility usually come other in-kind benefits, most notably Medicaid and Food Stamps. The act defines disability as the inability "to engage in any substantial gainful activity by reason of any medically determinable physical or mental impairment which can be expected to last a continuous period of not less than twelve months." Because children are not expected to work, however, this definition cannot be used to measure disability in children. At a DDS, childhood disability determinations are made by an adjudication team consisting of an examiner and a medical consultant. For mental impairments, the consultant must be a psychiatrist or child psychologist. The examiner collects all medical evidence—physical and mental—either from medical sources who have treated the applicant or from an independent consultant if more medical information is needed. The examiner supplements the medical information with accounts of the child's behavior and activities from the child's teachers, parents, and others knowledgeable about the child's day-to-day functioning.
Working together, the DDS adjudication team determines whether the applicant’s medical condition matches or is equivalent to an impairment found in SSA’s listing of medical impairments. If so, benefits are awarded. If, however, the applicant’s condition is not severe enough to meet or equal the severity criteria in SSA’s medical listings, the team uses the evidence to perform an IFA. If the IFA shows the child’s impairment substantially reduces his or her ability to function age-appropriately, benefits are awarded. If not, a denial notice is issued, and applicants are informed of their appeal rights. During a 2-month period, SSA issued two sets of new regulations that significantly changed the criteria for determining children’s eligibility for SSI disability benefits. One set of regulations, issued in accordance with the Disability Benefits Reform Act of 1984 (DBRA), revised and expanded SSA’s medical listings for evaluating mental impairments in children to incorporate recent advances in medicine and science. The second set of regulations was issued in response to the Sullivan v. Zebley Supreme Court decision, which required SSA to make its process for determining disability in children analogous to the adult process. Both sets of regulations placed more emphasis on assessing how children’s impairments limit their ability to act and behave like unimpaired children of similar age. Both also emphasize the importance of obtaining evidence from nonmedical sources as part of this assessment. SSA issued new regulations in accordance with DBRA on December 12, 1990. These new regulations revised and expanded SSA’s medical listings for childhood mental impairments to reflect up-to-date terminology used by mental health professionals and recent advances in the knowledge, treatment, and methods of evaluating mental disorders in children. 
The new medical listings for mental impairments provided much more detailed and specific guidance on how to evaluate mental disorders in children than the former regulations, which were published in 1977. In particular, the new medical listings placed much more emphasis on assessing how a child’s mental impairment limits his or her ability to function in age-appropriate ways. SSA made this change because mental health professionals consider functional factors particularly important in evaluating the mental disorders of children. The former medical listings for mental impairments emphasized the medical characteristics that must be met to substantiate the existence of the impairment. Specific areas of functioning sometimes were and sometimes were not mentioned as a factor in this determination. In contrast, the new medical listings provide much more detailed guidance on assessing the functional aspects of each impairment. The standard for most impairments is divided into two parts: medical and functional criteria, both of which must be satisfied for the applicant to qualify for a benefit. The functional criteria are described in terms of the age of the child and the specific areas of functioning—such as social, communication/ cognition, or personal/behavioral skills—that must be assessed. The new medical listings emphasize the importance of parents and others as sources of nonmedical information about a child’s day-to-day functioning. In general, the childhood mental listings require children over 2 years old to have marked limitations in two of the four areas of functioning to qualify for benefits. Further, when standardized tests are available, the listing defines the term “marked” as a level of functioning that is two standard deviations below the mean for children of similar age. The new medical listings also classified childhood mental disorders into more distinct categories of mental impairments. 
Previously, there were 4 impairments listed—mental retardation, chronic brain syndrome, psychosis of infancy and childhood, and functional nonpsychotic disorders—now there are 11. Several of the newly listed impairments, such as autism and other pervasive developmental disorders, mood disorders, and personality disorders, describe impairments that were previously evaluated under one or more of the four broader categories of childhood mental impairments. Several other impairments are mentioned for the first time, such as attention deficit hyperactivity disorder and psychoactive substance dependence disorders. In Zebley, the Supreme Court ruled that SSA's listings-only approach "does not account for all impairments 'of comparable severity' and denies child claimants the individualized functional assessment that the statutory standard requires . . . ." To determine adults' eligibility for disability benefits, SSA uses a five-step sequential evaluation process. Before Zebley, it used only a two-step process to determine children's eligibility for benefits. (See fig. 1.) Children were awarded benefits only if their impairments met or equaled the severity criteria in SSA's medical listings. All other children were denied benefits. In contrast, adults whose conditions were not severe enough to qualify under the medical listings could still be found eligible for benefits if an assessment of their residual functional capacity (RFC) showed that they could not engage in substantial work. No analogous assessment of functioning was done for children who did not qualify under the medical listings. The Court described the required assessment as "an inquiry into the impact of an impairment on the normal daily activities of a child of the claimant's age—speaking, walking, dressing and feeding oneself, going to school, playing, etc." Although the Court required the functional assessment, it did not define the degree of limitation necessary to qualify for benefits, except by analogy to the adult definition of disability.
To implement the Zebley decision, SSA convened a group of experts in April 1990 to help formulate new regulations using age-appropriate functional criteria. Included were experts in general and developmental pediatrics, child psychology, learning disorders, and early and adolescent childhood education as well as advocates from groups such as Community Legal Services in Philadelphia (plaintiff’s counsel in the Zebley case), the Association for Retarded Citizens, and the Mental Health Law Project. SSA also consulted with its regional offices and the state DDSs. Building on the functional criteria added to the listings after DBRA, SSA issued regulations implementing the Supreme Court’s decision on February 11, 1991. According to these regulations, for the child to be eligible for disability benefits, the IFA must show that the child’s impairment or combination of impairments limits his or her ability “to function independently, appropriately, and effectively in an age-appropriate manner.” Specifically, the impairment must substantially reduce the child’s ability to grow, develop, or mature physically, mentally, or emotionally to the extent that it limits his or her ability to (1) attain age-appropriate developmental milestones; (2) attain age-appropriate daily activities at home, school, play, or work; or (3) acquire the skills needed to assume adult roles. Although SSA officials describe these as state-of-the-art criteria for assessing children’s functioning, they concede that many of these concepts are not clear cut. As a result of these regulations, DDSs now perform IFAs to assess the child’s social, communication, cognitive, personal and behavioral, and motor skills, as well as his or her responsiveness to stimuli and ability to concentrate, persist at tasks at hand, and keep pace. 
Like the DBRA regulations, the IFA process requires DDSs to supplement medical information with information about the child’s behavior and activities from the child’s teachers, parents, and others knowledgeable about the child’s day-to-day functioning in order to make these assessments. Generally, if the IFA shows that a child has a moderate limitation in three areas of functioning or a marked limitation in one area and a moderate limitation in another, benefits are awarded. In contrast, the more restrictive functional criteria under SSA’s mental listings require two marked limitations. In addition to measuring functioning as part of the IFA process, the Zebley regulations added the concept of functional equivalence to SSA’s medical listings. Before Zebley, a child qualified for benefits only if his or her impairment met or was medically equivalent to the severity criteria in the listings. After Zebley, a child could qualify if his or her impairment was functionally equivalent to an impairment in the medical listings, as long as there was a direct, medically determinable cause of the functional limitations. The regulations provide 15 examples of conditions—such as the need for a major organ transplant—presumed to be functionally equivalent to the listed impairments. Of the 646,000 children added to the SSI rolls from February 1991 through September 1994, about 219,000 (one-third) were awarded benefits based on the less restrictive IFA process. If all 219,000 children receive the maximum benefit, their SSI benefits would cost about $1 billion a year. About 84 percent of these children had a mental impairment as their primary limitation, and about 16 percent had physical impairments. (Fig. 2 shows a breakdown of the impairments.) Figure 3 shows the substantial increase in the number of awards. Much of this increase was due to the implementation of new medical listings for mental impairments. 
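The "about $1 billion a year" figure above can be checked with simple arithmetic from the numbers in the text: 219,000 children awarded benefits through the IFA process, each at the maximum federal benefit of $458 per month. A minimal sketch of that check:

```python
# Figures from the report: children awarded SSI through the IFA process,
# and the maximum monthly federal benefit per child.
ifa_awardees = 219_000
max_monthly_benefit = 458  # dollars

# Annual cost if every IFA awardee received the maximum benefit.
annual_cost = ifa_awardees * max_monthly_benefit * 12
print(f"${annual_cost:,}")  # about $1.2 billion, consistent with the
                            # report's rounded "about $1 billion a year"
```

This upper-bound estimate excludes the state supplemental payments that 27 states offer, so actual total payments could differ in either direction.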
The IFA process also added to the growth in the rolls and accounted for a substantial portion of new awards. Figure 3 also shows that the average monthly number of applications jumped dramatically after Zebley and has continued to grow. Many observers attribute this increase in applications to the publicity surrounding Zebley, as well as to increased outreach by SSA, some of which was congressionally mandated. Also, some of the increase in awards may have been attributable to the close scrutiny of the IFA process by courts and disabled child advocates, which some believe may have resulted in some DDSs feeling pressured to increase their award rates during the 1991-1992 period. (App. II provides a chronology of their actions.) Before the IFA process was introduced in 1991, the national award rate for all types of childhood cases was 38 percent, but the award rate jumped to 56 percent in the first 2 years after the IFA and DBRA regulations were issued. More recently, during 1993 and 1994, the award rate has dropped dramatically. The national award rate for 1994 was 32 percent—lower than it was in the 2 years before Zebley. Our review indicates that the IFA process has been difficult to implement consistently and reliably, particularly for children with mental impairments, because the process requires adjudicators to make a series of judgment calls in a complex matrix of assessments about age-appropriateness of behavior. SSA and IG studies of children with mental impairments have borne out these difficulties. Although SSA has tried to add rigor to the IFA process through guidance and training, we believe that problems will likely continue because of the difficulties inherent in using age-appropriate behavior as an analog for the adult vocational assessment of residual functional capacity. Determining disability for children with impairments that are not severe enough to match a listed impairment can be a highly subjective process. 
SSA designed the IFA process to provide DDS adjudicators with a structure to help them make uniform and rational disability determinations for children with less severe impairments. Even so, the necessity to assess a child’s ability to function age-appropriately requires DDS adjudicators to make a series of judgments, which we believe raises questions about the consistency and reliability of DDS decisions. SSA and IG studies and our analysis document problems throughout the IFA process, especially for mental impairments. (See app. III for a more detailed discussion of the problems that SSA and the IG identified.) Extensive evidence needed: To make disability determinations, DDSs use information from both medical and nonmedical sources, including teachers, day care providers, parents, and others knowledgeable about the child’s day-to-day behavior and activities. For the functional assessment, observations are needed about the child’s behavior over a long period of time, so evidence-gathering can be a considerable task. SSA found in its 1994 study that the lack of sufficient supporting documentation was the most common problem in its sample of childhood disability decisions. School officials in particular are an important source of nonmedical data on children’s behavior over time. Each DDS develops its own questionnaires for eliciting the data, and inquiries are made on virtually every applicant because this information is also used to assess functioning under the medical listings. We estimate that the process now results in about 500,000 inquiries to schools each year, a substantial reporting burden. Some parties believe that the open-ended questionnaire design in many states and the burden on school officials faced with many inquiries may be contributing to poor quality data from this key source. 
Difficulty classifying limitations: If an IFA is needed, a disability adjudicator must classify the child’s limitations in the appropriate areas of functioning, as shown in figure 4. This is a complex judgment because some areas are closely interrelated and impairments may or may not affect functioning in more than one area. If, for example, evidence indicates that a child gets in fights at school, the adjudicator must determine whether the specific behavior is evidence of a limitation in social skills, personal and behavioral skills, or some combination of these. SSA found that in cases of incorrect awards a common mistake that adjudicators made was to count the effect of an impairment in two areas when only one was appropriate. This resulted in the impairment seeming more severe than it actually was. Problems defining degrees of limitation: Once the areas have been identified, the adjudicator must judge the degree of limitation. Because only certain conditions—such as low intelligence quotient (IQ)—can be objectively tested and determined, SSA has defined the severity of limitations by comparison with expected behavior for the child’s chronological age. Figure 4 shows the degrees of limitation adjudicators use to assess children 3 through 15 years old. SSA’s guidance defines a limitation in the moderate category as more than a mild or minimal limitation but less than a marked limitation. The terms “mild” and “minimal” are not defined, but SSA guidance describes an impairment in the marked category as one that “seriously” interferes with a child’s ability to function age-appropriately, while a moderate limitation creates “considerable” interference. Within each category, adjudicators are expected to be able to differentiate the degree of limitation. For example, a moderate rating can range from a “weak moderate” (just above a less-than-moderate) up to a “strong moderate” (just below a marked limitation). 
Limited guidance for summing the result: Because the IFA process is inherently subjective, SSA cannot provide an objective procedure for summarizing the IFA results. Therefore, SSA instructs adjudicators to step back and assess whether the child meets the overall definition of disability. As an example to guide adjudicators, SSA has said that an award may generally be granted if a child has a moderate limitation in three areas. However, SSA officials stress that this statement assumes “three good, solid moderates,” and they characterize it as a general guideline, not a firm rule. Also, they stress that other possible combinations of ratings, such as two strong moderates, could justify finding a child disabled, depending on the individual child’s circumstances. In the end, officials stress that adjudicators are expected to award or deny benefits based on an overall judgment, not on any specific sum of severity ratings. SSA’s 1994 study of 325 childhood awards highlighted the difficulties in using the IFA process to reliably identify disabled children, particularly children with behavioral and learning disorders. In the study, SSA’s Office of Disability selected cases of 325 children with behavioral and learning disorders who had been found eligible. The majority were found eligible based on IFAs. These cases had been decided by DDS adjudicators, based on their understanding of existing guidance from SSA. Then, SSA’s regional quality assurance staff had reviewed the decisions and found them accurate. The study involved a third group of experts in the Office of Disability who reviewed the same cases and found inaccuracies in the decisions. Based on their findings, we concluded that about 13 percent of the awards reviewed by SSA had been made to children who were not impaired enough to qualify. Also, another 23 percent of the awards had been made without sufficient supporting documentation. 
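The "three good, solid moderates" guideline described above can be sketched as toy logic. This is a minimal illustration: the ordinal rating scale, the area names, and the counting threshold are all hypothetical assumptions, and SSA stresses that the real decision rests on overall adjudicator judgment, not a mechanical count.

```python
# Illustrative sketch of the IFA "three solid moderates" guideline.
# The rating scale and functional-area names below are assumptions for
# illustration only; they are not SSA's actual adjudication procedure.

RATINGS = ["none", "mild", "weak_moderate", "moderate", "strong_moderate", "marked"]

def meets_guideline(area_ratings):
    """Return True if at least three areas show a solid moderate or worse limitation.

    area_ratings: dict mapping a functional area to a rating string from RATINGS.
    """
    solid = sum(1 for r in area_ratings.values()
                if RATINGS.index(r) >= RATINGS.index("moderate"))
    return solid >= 3

# Example: three solid moderate limitations generally support an award,
# but SSA cautions that weak moderates should not be counted the same way.
child = {"social": "moderate", "personal_behavioral": "moderate",
         "cognitive": "moderate", "motor": "mild", "communication": "none"}
print(meets_guideline(child))  # prints True
```

Even in this toy form, the hard part the report describes is invisible to the code: deciding whether a given child's behavior is a "weak" or "solid" moderate is the subjective judgment that the studies found adjudicators could not make consistently.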
A January 1995 IG report focused on IFA-based awards to children with mental impairments. IG staff, with assistance from the Office of Disability, reviewed 129 IFA-based awards for mental retardation, attention deficit hyperactivity disorder, and other behavioral or learning disorders. The IG found that 17 (13 percent) of the awards should have been denials and another 38 (29 percent) had been based on insufficient evidence. The IG attributed this to DDS adjudicators’ difficulty interpreting and complying with SSA’s IFA guidelines for assessing the severity of children’s mental impairments. Many adjudicators reported that they found the SSA guidelines unclear and not sufficiently objective. The IG stated that this group of children had less severe impairments than those children determined disabled based on the medical listings, making the assessment of their impairments’ effect on their ability to function age-appropriately more difficult. We observed firsthand the difficulty that adjudicators face in making the judgments required by the IFA process for children who have behavioral and learning disorders. In June 1994, we attended 1-day training sessions for DDS adjudicators and SSA’s regional quality assurance staff from across the nation. The Office of Disability presented the findings from its 1994 study and discussed the policies and procedures that DDS and quality assurance staff had misapplied. In this training, Office of Disability staff presented case studies of children included in the 1994 study. After those in attendance reviewed the evidence for each child’s case, they were asked to assess the degree to which the child’s impairment limited his or her functioning. The attendees’ opinions were tallied and in all cases they were split. 
During discussions of each case, attendees often voiced differing views on why they believed, for example, that the child’s limitation was less than moderate or moderate, or whether a moderate limitation was a good, solid moderate, or a weak moderate. In some cases, the opinion of the majority of attendees turned out to be different from the conclusion of the Office of Disability. In addition to the national training in June 1994, SSA took other steps to correct implementation problems, including (1) issuing numerous instructional clarifications and reminders, (2) requiring DDSs to specially code certain types of mental impairments and all decisions based on three moderate limitations (to facilitate selecting samples of cases for further study), and (3) establishing more rigorous requirements for documenting awards that are based on three moderate limitations. The Office of Disability plans to do a follow-up study to assess the effectiveness of its remedial efforts. Some experts believe that further steps could be taken to improve the IFA process. For example, experts we contacted commented on the need for more complete longitudinal evaluations by professionals. They pointed out that more complete examinations—sometimes including multiple visits and observations of both parents and children—would help to address concerns about the adequacy of information from schools and medical sources and provide higher assurance of good decisions. They stated that because professionals are trained to identify malingering in mental examinations, the expanded examinations might also help relieve concerns about coaching. They agreed that such examinations would raise the program’s administrative costs considerably, but because a child can receive almost $5,500 a year in benefits (which can continue for life) they believed that the costs would be justified. 
SSA’s efforts and experts’ suggestions are geared toward improving the process rather than addressing the underlying conceptual problems with the IFA. The difficulties so far in implementing the IFA bring into question whether these types of incremental actions can ensure consistently accurate decisions for children with mental impairments, especially behavioral and learning disorders. The rapid growth in awards to children with mental impairments—particularly behavioral and learning disorders—has contributed to the public perception that the SSI program for children is vulnerable to fraud and abuse. The media have reported allegations that parents coach their children to fake mental impairments by misbehaving or performing poorly in school so that they can qualify for SSI benefits. Critics believe that cash payments and Medicaid act as incentives for some parents to coach and, therefore, they are concerned about the extent to which parents can manipulate the disability determination process. However, we believe that measuring the extent to which coaching may actually occur is extremely difficult. Unless parents admit to it, coaching is almost impossible to substantiate. The nature of the parent-child relationship makes investigating coaching allegations difficult. Many communications between parent and child take place at home, out of the view of outside observers. In addition, the variability of children’s behavior makes knowing whether a child’s behavior is the result of coaching difficult. Behavior can vary naturally among children of the same age—or in the same child over time—as they go through stages in development or respond to changes in their home or school environment. If a child started misbehaving in school, investigators would need baseline evidence to establish that the child had not misbehaved extensively in the past. 
Finally, even if investigators could identify a sudden change in behavior, they would have to rule out other reasons for the change, such as changes in the child’s household or neighborhood environment. In short, knowing whether the child is performing poorly or misbehaving because of coaching or for other reasons is difficult. Because coaching is difficult to detect, the extent of coaching cannot be measured with much confidence. In recent studies, SSA and the HHS IG reviewed case files and identified scant evidence of coaching or malingering. In the rare instances where they found evidence of possible coaching or malingering, most of the claimants had been denied benefits anyway. (App. III summarizes the results of the SSA and IG studies, including their scopes and methodologies.) To protect program integrity, SSA has taken several steps to help provide assurance that the process can detect coaching or malingering and then make the appropriate eligibility determination. In June 1994, SSA began requiring DDSs to report to SSA’s regional quality assurance units any case with an allegation or suspicion of coaching. Such cases include those in which teachers, physicians, or psychologists indicate that (1) the child’s behavior was atypical of the child’s customary school behavior, (2) the child was uncooperative during testing, or (3) the child’s behavior deteriorated without explanation during the 6-month period preceding the application. According to SSA, its regional quality assurance units review all alleged cases of coaching. As of mid-January 1995, DDSs nationwide had reported alleged coaching in 674 childhood cases—or less than one-half of 1 percent of all childhood applications filed during the period—and fewer than 50 of these children had been awarded benefits. 
Along with this new requirement, in August 1994, SSA required DDSs to send applicants’ schools a set of questions specifically designed to elicit the teacher’s views on whether the child had been coached. Additionally, each SSA regional office has established toll-free telephone numbers for the exclusive use of teachers and school officials to notify the regional quality assurance unit of coaching allegations. In mid-November 1994, SSA instructed DDSs to begin distributing these toll-free numbers to schools. Also, SSA has instructed its field offices and telephone service centers to report to the regional quality assurance units any allegations of coaching received from the general public. As of mid-January 1995, from all of these sources, SSA had received a total of 42 telephone calls with allegations of coaching involving 54 individuals. According to SSA, each allegation from teachers, school officials, or the general public is reviewed if the child was awarded benefits. Childhood disability decisions based on the IFA process are among the toughest that DDSs must make. Particularly in assessing behavioral and learning disabilities, the level of judgment required makes the IFA process difficult to administer consistently. Moreover, the high level of subjectivity leaves the process susceptible to manipulation and the consequent appearance that children can fake mental impairments to qualify for benefits. Indeed, the rise in allegations of coaching may reflect public suspicion of a process that has allowed many children with less severe impairments to qualify for benefits. Although scant evidence exists to substantiate that coaching is a problem, coaching cannot be ruled out and its extent is virtually unmeasurable. We believe that a more fundamental problem than coaching is determining which children are eligible for benefits using the new IFA process. 
Our analysis documents the many subjective judgments built into each step of the IFA process to assess where a child’s behavior falls along the continuum of age-appropriate functioning. Moreover, studies by SSA and the IG of children awarded benefits for behavioral and learning disorders illustrate the difficulties that SSA has experienced over the last 4 years in making definitive and consistent eligibility decisions for children with these disorders. SSA’s efforts have been aimed at process improvements rather than reexamining the conceptual basis for the IFA. Despite its efforts, too much adjudicator judgment remains. Although better evidence and more use of objective tests where possible would improve the process, the likelihood of significantly reducing judgment involved in deciding whether a child qualifies for benefits under the IFA is remote. We believe that more consistent decisions could be made if adjudicators based functional assessments of children on the functional criteria in SSA’s medical listings. This change would reduce the growth in awards and target disability benefits toward children with more severe impairments. Given widespread concern about growth in the SSI program for children and in light of our findings about the subjective nature of the IFA process, the Congress could take action to improve eligibility determinations for children with disabilities. One option the Congress could consider is to eliminate the IFA, which would require amending the statute. The Congress could then direct SSA to revise its medical listings, including the functional criteria, so that all children receive functional assessments based on these revised criteria. We did not request official agency comments from SSA on a draft of this report. However, we discussed the draft with SSA program officials, who generally agreed that we had accurately characterized the IFA process and the results of studies. 
SSA officials had some technical comments, which we have incorporated where appropriate. Please contact me on (202) 512-7215 if you have any questions about this report. Other major contributors are Cynthia Bascetta, Ira Spears, Ken Daniell, David Fiske, and Ellen Habenicht. To develop the information in this report, we (1) reviewed SSA’s childhood disability program policies, procedures, and records, and discussed the IFA process with SSA program officials on the national, regional, and local level; (2) interviewed officials in state DDSs; (3) reviewed SSA’s report on its 1994 study of children with behavioral and learning disorders; and (4) attended a June 1994 SSA training course that was based on findings from its study. We also discussed eligibility issues with officials of HHS’ IG, which recently issued two reports on the SSI childhood disability program. To develop SSI childhood program award rate data, we obtained SSA’s computerized records on the results of initial determinations and reconsideration disability decisions made by DDSs for children under 18 years old from 1988 through September 1994. These records exclude the results of disability decisions made by administrative law judges. From these records, we determined (1) the overall award rate for children, (2) the percentage of IFA awards that were based on mental impairments versus physical impairments, (3) the average monthly number of childhood applications, and (4) the average monthly number of awards that were based on IFAs versus medical listings. These data, as applicable, were determined for the following periods: (1) 2 years before the Supreme Court’s Sullivan v. Zebley decision (Jan. 1, 1988, through Feb. 20, 1990); (2) 2 years after the IFA process was implemented (Feb. 11, 1991, through Dec. 31, 1992); (3) January-December 1993; and (4) January-September 1994. Because no IFA process existed before the Zebley decision, no pre-Zebley awards were decided based on IFAs. 
We excluded children who had applied during 1988 through February 10, 1991, from the universe of children on whom decisions were made from February 11, 1991, through September 30, 1994. We did this to minimize the extent to which data in these comparison periods reflect the result of cases readjudicated as part of the settlement in the Zebley class action lawsuit. We were not able to identify or exclude Zebley classmembers for whom benefits had been denied or terminated from 1980 through 1987 from any of the comparison periods. According to SSA, Zebley classmembers are more likely to have physical impairments than the general population of new SSI child applicants. We performed our work from May 1994 through February 1995 in accordance with generally accepted government auditing standards. One month before SSA issued regulations implementing the new IFA process, the Zebley plaintiff’s counsel submitted interrogatories to SSA asking, among other things, why nine DDSs with the lowest award rates for children had such low award rates. SSA regional officials were tasked with answering some of the counsel’s interrogatories and, in some instances, the officials informed the states that they were the subject of the counsel’s inquiry. Also, from time to time thereafter, SSA officials shared state-by-state award rate data with state DDSs. Some SSA regional officials stated that they believed some DDSs could have felt pressured to increase their award rates. In the month that SSA issued regulations implementing the new IFA process, a federal district court ordered SSA to perform special quality assurance reviews of disability applications denied under the new regulations. The court order required SSA to do quality assurance reviews of denials made by 10 state DDSs that, according to SSA, Zebley plaintiff’s counsel had identified as denial prone due to their low award rates. 
Based on its own studies, SSA had argued before the court that low award rates were not reliable indicators of whether special corrective action was needed to avoid incorrect denials, but the court required SSA to implement the special quality assurance reviews for these 10 states. Under the court order, during the first month after the new regulations were in effect, SSA had to review the lesser of 100 or all denials for each denial-prone state. SSA reviewed only 25 denials for other states. A subsequent March 1991 court order required SSA, after the first month, to review at least 1,000 denials per month nationwide. SSA’s sample of 1,000 denials included 15 percent of the denials from each of the 10 denial-prone states. By memorandum in February 1991, SSA informed all DDSs of the special quality assurance requirements and identified the 10 states that had been classified as denial prone. The court order required that SSA send the results of the quality assurance reviews monthly to the Zebley plaintiff’s counsel. The Zebley plaintiff’s counsel wrote to the SSA Commissioner citing a “disturbing pattern” of low allowance rates in eight states and asked the Commissioner to take remedial steps. In a newsletter to legal aid societies, the Zebley counsel listed 13 DDSs whose cumulative allowance rates were at 50 percent or below. The counsel encouraged legal aid society representatives in those states to contact the DDS directors and “confront them with their sub-par performance.” SSA considers behavioral and learning disorders to be the most susceptible to coaching and malingering. In 1994, SSA’s Office of Disability in Baltimore reviewed a national sample of 617 school-age children who had applied due to behavioral and learning disorders. Because the sample was small, the findings of the study cannot be projected to the universe of childhood disability claims or to the subset of specific impairments studied. 
The 617 children were selected from those who had applied due to such impairments as attention deficit disorder, attention deficit hyperactivity disorder, personality disorder, conduct disorder, learning disorder, oppositional defiant disorder, anxiety disorder, developmental delay, behavior disorder, speech and language disorders, borderline intellectual functioning, and adjustment disorder. According to SSA, these types of disorders constitute about 20 percent of all childhood disability applications. SSA excluded cases involving extremely severe mental disorders, such as psychotic disorders and mental retardation. SSA selected the 617 cases from final DDS decisions that SSA’s regional quality assurance staff had already reviewed for accuracy. The 617 cases in the sample consisted of 325 awards and 292 denials that DDSs adjudicated during October 1992 through July 1993. SSA reviewed case file documentation for the 617 cases. In its review of case file documentation, SSA considered coaching to be involved in any claim in which the child reported or an information source suspected that the parent or other caregiver had told the child to act or respond in a manner that would make the child appear more functionally limited than he or she actually was. In addition, SSA looked for evidence indicating that the child had malingered; that is, deliberately provided wrong information or did not put forth his or her best effort during testing. SSA found only 13 cases that showed any evidence of possible coaching or malingering, and only 3 of these cases were awards. In all cases, the evidence indicating possible coaching was provided by medical professionals or psychologists who performed consultative examinations for SSA. None of the evidence indicating possible coaching or malingering was provided by schools. The three questioned awards involved children who may have malingered during IQ testing. 
In these cases, however, the awards were based on factors other than the results of the testing. For example, one child with an oppositional defiant disorder appeared to malinger during IQ testing administered by a consultative examiner, but the award was based on other problems stemming from the disorder, not the results of the testing. Of the 325 awards reviewed by SSA, SSA found that 8.6 percent (28) should have been denials and another 27.7 percent (90) should not have been made without obtaining more supporting documentation. We asked SSA, based on experience in its quality assurance program, to estimate how many of the 90 cases with insufficient documentation would have been denials if all documentation had been obtained, and SSA estimated that 13 (or 4 percent of the 325 awards) would have been denials. Thus, we concluded that a total of 41 awards (12.6 percent of the 325 awards) should have been denials. By contrast, of 292 denials reviewed in the study, SSA found that only 1.4 percent (4) should have been awards, and another 1.4 percent (4) should not have been made without obtaining more supporting documentation. Combining all decisional and documentational errors for the 617 denials and awards in SSA’s study, the overall error rate for this group of cases was 20.4 percent. This is about twice the maximum acceptable error rate of 9.4 percent that SSA allows for decisional and documentational errors combined for all initial disability decisions made by an individual DDS. According to SSA’s Office of Disability, a primary reason that DDSs made awards that should have been denials was that DDSs had frequently overrated—but rarely underrated—the severity of children’s functional limitations. 
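The error-rate figures quoted above fit together arithmetically. The following is a quick check of that arithmetic, not part of the report itself:

```python
# Recomputing the quoted figures from SSA's 1994 study of 617 childhood cases.
awards, denials = 325, 292
wrong_awards = 28                # awards that should have been denials outright
undocumented_awards = 90         # awards made without sufficient documentation
est_denials_among_undoc = 13     # SSA's estimate of denials if fully documented

total_wrong = wrong_awards + est_denials_among_undoc        # 41 awards
print(round(100 * wrong_awards / awards, 1))                # 8.6 percent
print(round(100 * undocumented_awards / awards, 1))         # 27.7 percent
print(round(100 * total_wrong / awards, 1))                 # 12.6 percent

# Overall error rate: all decisional and documentational errors combined,
# including 4 wrong denials and 4 insufficiently documented denials.
errors = wrong_awards + undocumented_awards + 4 + 4         # 126 errors
print(round(100 * errors / (awards + denials), 1))          # 20.4 percent
```

This confirms the report's 20.4 percent combined error rate, roughly twice SSA's stated 9.4 percent maximum acceptable rate for an individual DDS.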
Such overrating occurred primarily because DDSs had (1) compared the child with the perfect child rather than the average child, (2) based the limitation on a single incident rather than behavior over time, (3) not considered the child’s ability to function while on an effective medication regimen, and (4) based the limitation on the child’s life circumstances rather than the effects of a medically determinable impairment. DDSs also had mechanically applied SSA’s guidelines on how to make awards using the results of the IFA process. SSA’s guidelines instruct DDSs that they generally should award benefits to children who have moderate limitations in any three of the areas of ability assessed in the IFA process. SSA found, however, that DDSs had used this instruction as a rule rather than a guideline. DDSs had automatically made awards to any child with three moderate limitations, regardless of how strong or weak the moderate limitations were. SSA stated that its guideline assumed “three good, solid moderates.” SSA found that, when DDSs had identified two moderate limitations, they sometimes made special attempts to find a third moderate limitation even though the evidence did not support it. DDSs had also “double-weighed” the effects of impairments in more than one of the areas of ability assessed in the IFA process, making the impairment seem more severe and pervasive than it actually was. For example, in some cases children displayed a lack of self-control by exhibiting more than one inappropriate behavior, such as fighting, aggressive behavior, disrespectful behavior, lying, oppositional behavior, and stealing. Although all these behaviors should have been rated only in the personal/behavioral area, DDSs had rated some behaviors in the personal/behavioral area and others in the social abilities area, giving the child moderate limitations in two areas rather than only one. 
This meant that the child needed only one more moderate limitation to have the three moderate limitations needed for approval. SSA also found that DDSs had sometimes based decisions on old evidence when current evidence indicated children had improved and that DDSs had sometimes assessed limitations that could not be attributed to medical impairments. As the IG reported in January 1995, IG staff reviewed the case files for a sample of 553 children whose applications were adjudicated by DDSs in 1992. Of the 553 children, 298 had been awarded benefits by 10 DDSs—Connecticut, Illinois, Kentucky, New York, North Carolina, North Dakota, Pennsylvania, South Dakota, Vermont, and Wisconsin. The remainder of the 553 cases consisted of a nationwide sample of 255 denials. Of the 298 awards, 129 (43 percent) had been decided based on an IFA, and 195 of the 255 denials (76 percent) had been decided based on an IFA. The IG targeted its study at cases involving mental retardation, attention deficit hyperactivity disorder, and other learning and behavioral disorders. Based on its review of these cases, IG officials told us that they had found no evidence of coaching. As the IG reported, when the IG staff had questions about the accuracy of a DDS disability determination or about the sufficiency of the evidence supporting a determination, the IG provided the case file to SSA’s Office of Disability in Baltimore—the same staff responsible for conducting SSA’s study of 617 childhood disability claims. The Office of Disability reviewed the accuracy of each of the questioned cases. The IG staff also visited the 10 DDSs to obtain their opinions on the adequacy of the SSA guidelines used to make disability determinations. Of the 129 awards reviewed that were based on IFAs, the IG reported that 17 (13 percent) should have been denials and another 38 (29 percent) were based on insufficient evidence. 
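The IG sample percentages quoted above can likewise be verified with simple arithmetic; this check is illustrative and not part of the IG report:

```python
# Recomputing the quoted percentages from the IG's 1995 review of 553 cases.
awards, denials = 298, 255
ifa_awards, ifa_denials = 129, 195
print(round(100 * ifa_awards / awards))    # 43 percent of awards were IFA-based
print(round(100 * ifa_denials / denials))  # 76 percent of denials were IFA-based

should_be_denials, insufficient = 17, 38
print(round(100 * should_be_denials / ifa_awards))  # 13 percent wrongly awarded
print(round(100 * insufficient / ifa_awards))       # 29 percent insufficiently documented
```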
The IG attributed this problem to DDSs having difficulty in interpreting and complying with SSA guidelines for obtaining and evaluating evidence concerning the severity of the mental impairments of children on whom IFAs are conducted. The IG stated that these children have less severe impairments than those children determined to be disabled based on the impairment listing, making the assessment of the effects of their impairments on their ability to function age-appropriately more difficult. In discussions with employees of the 10 DDSs, the IG reported that many expressed concern that the SSA guidelines for determining disability for children with mental impairments were not sufficiently clear or objective. 
| Pursuant to a congressional request, GAO reviewed the effects of the judicially mandated individualized functional assessment (IFA) process on Supplemental Security Income (SSI) benefits, focusing on: (1) allegations that parents may be coaching their children to fake mental impairments to qualify under the lower eligibility standards created by IFA; and (2) how IFA affects the children's eligibility for benefits. GAO found that: (1) the judicial decision that required changes in IFA essentially made the process for determining disability in children analogous to the adult process; (2) the new process assesses how children's impairments limit their ability to act and behave like unimpaired children of similar age; (3) it has become important to obtain evidence of disability from nonmedical sources as part of the children's assessment; (4) although the court required a new type of assessment for disabled children, it did not define the degree of limitation necessary to qualify for SSI benefits; (5) before the IFA process was introduced in 1991, the national award rate for all types of childhood cases was 38 percent, but the award rate jumped to 56 percent in the first 2 years after IFA regulations were issued; (6) the non-medical aspects of the IFA evaluation rely heavily on adjudicator judgment; (7) while the Social Security Administration (SSA) has attempted to improve the process, and thereby reduce fraud and improve accuracy in awards, IFA has an underlying conceptual problem; (8) although the IFA process attempts to improve accuracy, the presence of coaching by parents is almost impossible to detect; and (9) more consistent eligibility decisions could be made if adjudicators based functional assessments of children on the functional criteria in SSA medical listings. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Since 2003, the United States has provided about $19.2 billion to develop the Iraqi security forces, first under the Iraq Relief and Reconstruction Fund (IRRF) and later through the Iraq Security Forces Fund (ISFF). DOD has apportioned about $2.8 billion in ISFF funds to purchase and transport equipment to Iraqi military and police forces. DOD does not report IRRF funds for Iraqi forces’ equipment and transportation as a separate line item. DOD has requested an additional $2 billion to develop Iraqi security forces in the fiscal year 2008 Global War on Terror budget requests. The United States restructured the multinational force and increased resources to train and equip the Iraqi forces after they collapsed during an insurgent uprising in the spring of 2004. This collapse ensued when MNF-I transferred security responsibilities to the Iraqi forces before they were properly trained and equipped to battle insurgents. Iraqi security forces include the Iraqi Army, Navy, and Air Force under the Ministry of Defense and the Iraqi Police, National Police, and Border Enforcement under the Ministry of Interior. The train-and-equip program for Iraq operates under DOD authority and is implemented by MNF-I’s major subordinate commands, including MNSTC-I and Multinational Corps-Iraq (MNC-I) (see fig. 1). This differs from traditional security assistance programs, which operate under State Department authority and are managed in country by the DOD under the direction and supervision of the Chief of the U.S. Diplomatic Mission. MNSTC-I was established in June 2004 to assist in the development, organization, training, equipping, and sustainment of Iraqi security forces. MNC-I is responsible for the tactical command and control of MNF-I operations in Iraq. MNC-I’s major subordinate commands were responsible for distributing equipment to some Iraqi security forces in 2003 and 2004. 
As of July 2007, DOD and MNF-I had not specified which DOD equipment accountability procedures, if any, apply to the train-and-equip program for Iraq. Congress funded the train-and-equip program for Iraq under IRRF and ISFF but outside traditional security assistance programs, which, according to DOD officials, allowed DOD a large degree of flexibility in managing the program. DOD defines accountability as the obligation imposed by law, lawful order or regulation accepted by an organization or person for keeping accurate records, to ensure control of property, documents or funds, with or without physical possession. DOD officials stated that, since the funding did not go through traditional security assistance programs, the DOD accountability requirements normally applicable to these programs—including the registration of small arms transferred to foreign governments—did not apply. Further, MNF-I does not currently have an order or orders comprehensively specifying accountability procedures for equipment distributed to Iraqi military forces under the Ministry of Defense, according to MNSTC-I officials. According to DOD officials, because Iraq train-and-equip program funding did not go through traditional security assistance programs, the equipment procured with these funds was not subject to DOD accountability regulations that normally apply in the case of these programs. For traditional security assistance programs, DOD regulations specify accountability procedures for storing, protecting, transporting, and registering small arms and other sensitive items transferred to foreign governments. For example, the Security Assistance Management Manual, which provides guidance for traditional security assistance programs, states that the U.S. 
government’s responsibility for equipment intended for transfer to a foreign government under the Foreign Military Sales program does not cease until the recipient government’s official representative assumes final control over the items. Other regulations referenced by the Security Assistance Management Manual prescribe minimum standards and criteria for the physical security of sensitive conventional arms and require the registration of small arms transferred outside DOD control. During our review, DOD officials expressed differing opinions about whether DOD regulations applied to the train-and-equip program for Iraq. For example, we heard conflicting views on whether MNF-I must follow the DOD regulation that requires participants to provide small arms serial numbers to a DOD-maintained registry. Although DOD has not specified whether this regulation applies, MNSTC-I began to consolidate weapons’ serial numbers in an electronic format in July 2006 and provide them to the DOD-maintained registry, according to MNSTC-I officials. Moreover, MNF-I issued two orders in 2004 to its subordinate commands directing steps to account for all equipment distributed to Iraqi security forces, including military and police. Although these orders are no longer in effect and have not been replaced, they directed coalition forces responsible for issuing equipment to the Iraqi security forces to record the serial numbers of all sensitive items such as weapons and radios, enter relevant information onto a Department of the Army hand receipt, and obtain signatures from the Iraqi security official receiving the items, among other tasks. Army regulations state that hand receipts maintain accountability by documenting the unit or individual that is directly responsible for a specific item. According to a former MNSTC-I official, hand receipts are critical to maintaining property accountability. 
However, the orders did not require the consolidation of all records for equipment distributed by the coalition to the Iraqi security forces. According to officials in the MNSTC-I Office of the Staff Judge Advocate, although these orders were valid when they were issued in 2004, they are no longer in effect. In addition, these orders have not been replaced with a comprehensive order or orders that address the equipment distributed to Iraqi security forces, according to MNSTC-I officials. For forces under the Ministry of Interior, MNF-I issued two new orders in December 2005 to address the problem of limited records for equipment distributed to Ministry of Interior forces. Among other guidance, the orders established accountability procedures for equipment MNC-I and MNSTC-I distribute to Ministry of Interior forces, such as Iraqi police and national police. In addition, MNF-I issued other orders related to some types of equipment. However, according to MNSTC-I officials, MNF-I has not issued an order or orders that address the accountability of all equipment distributed by coalition forces to Iraqi military forces under the Ministry of Defense. Two factors led to DOD’s lack of full accountability for the equipment issued to Iraqi security forces (see fig. 2). First, until December 2005, MNSTC-I did not maintain a centralized record of all equipment distributed to Iraqi security forces. Second, MNSTC-I has not consistently collected supporting documents that confirm the dates the equipment was received, the quantities of equipment delivered, or the Iraqi units receiving the equipment. First, until December 2005, no centralized set of records for equipment distributed to Iraqi security forces existed. MNSTC-I did not consistently collect equipment distribution records as required in the property accountability orders for several reasons. 
The lack of a fully operational network to distribute the equipment, including national and regional level distribution centers, hampered MNSTC-I’s ability to collect and maintain appropriate equipment accountability records. According to former MNSTC-I officials, a fully operational distribution network was not established until mid-2005, over 1 year after MNF-I began distributing large quantities of equipment to the Iraqi security forces. In addition, staffing weaknesses hindered the development of property accountability procedures, according to former MNSTC-I and other officials. For example, according to the former MNSTC-I commander, several months passed after MNSTC-I’s establishment before the command received the needed number of staff. As a result, MNSTC-I did not have the personnel necessary to record information on individual items distributed to Iraqi forces. Further, according to MNSTC-I officials, the need to rapidly equip Iraqi forces conducting operations in a combat environment limited MNSTC-I’s ability to fully implement accountability procedures. Our analysis of MNSTC-I’s property book system indicates that MNSTC-I does not have complete records confirming Iraqi forces’ receipt of the equipment, particularly for Iraqi military forces. MNSTC-I established separate property books for equipment issued to Iraq’s security ministries—the Ministry of Defense and Ministry of Interior—beginning in late 2005. At that time, they also attempted to recover past records. MNSTC-I officials acknowledge that the property books did not contain records for all of the equipment distributed and that existing records were incomplete or lacked supporting documentation. We identified discrepancies between data reported by the former MNSTC-I commander and MNSTC-I property book records (see fig. 3). 
Although the former MNSTC-I commander reported that about 185,000 AK-47 rifles, 170,000 pistols, 215,000 items of body armor, and 140,000 helmets were issued to Iraqi security forces as of September 2005, the MNSTC-I property books contain records for only about 75,000 AK-47 rifles, 90,000 pistols, 80,000 items of body armor, and 25,000 helmets. Thus, DOD and MNF-I cannot fully account for about 110,000 AK-47 rifles, 80,000 pistols, 135,000 items of body armor, and 115,000 helmets reported as issued to Iraqi forces as of September 22, 2005. Our analysis of the MNSTC-I property book records found that DOD and MNF-I cannot fully account for at least 190,000 weapons reported as issued to Iraqi forces as of September 22, 2005. The second factor leading to the lapse in accountability is MNSTC-I’s inability to consistently collect supporting documents that confirm when the equipment was received, the quantities of equipment delivered, and the Iraqi units receiving the equipment. We requested and received a sample of documents confirming equipment received by Iraqi units during specific weeks in February, April, July, and November 2006. Due to the limited number of these records, we cannot generalize the information across all of MNSTC-I records. Our preliminary review of this sample found that in the period prior to June 2006, MNSTC-I provided only a few supporting documents confirming that Iraqi units had received the equipment. For the period after June 2006, we found that MNSTC-I possessed more supporting documents. According to MNSTC-I officials who rotated in country in June 2006, the command began to place greater emphasis on collecting documentation of Iraqi receipt of equipment. However, MNSTC-I officials also stated that security constraints make it difficult for them to travel within Iraq and collect hard copies of all documentation. They depend instead on warehouse staff to send the receipts via scanner, fax or computer. 
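The gap between the quantities reported as issued and the quantities recorded in the property books is simple subtraction; the sketch below reproduces it in Python using the figures cited above (the variable names are illustrative, not taken from any MNSTC-I system):

```python
# Quantities the former MNSTC-I commander reported as issued to Iraqi
# forces as of September 2005, versus quantities recorded in the
# MNSTC-I property books. Figures are from the report text.
reported = {"AK-47 rifles": 185_000, "pistols": 170_000,
            "body armor": 215_000, "helmets": 140_000}
recorded = {"AK-47 rifles": 75_000, "pistols": 90_000,
            "body armor": 80_000, "helmets": 25_000}

# Unaccounted-for quantity per item = reported minus recorded.
gap = {item: reported[item] - recorded[item] for item in reported}

# The rifle and pistol gaps drive the "at least 190,000 weapons" figure.
weapons_gap = gap["AK-47 rifles"] + gap["pistols"]

print(gap)          # {'AK-47 rifles': 110000, 'pistols': 80000, 'body armor': 135000, 'helmets': 115000}
print(weapons_gap)  # 190000
```

The per-item gaps match the figures in the text: 110,000 rifles, 80,000 pistols, 135,000 items of body armor, and 115,000 helmets.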
Furthermore, the property books consist of extensive electronic spreadsheets—the January 2007 property book records for the Ministry of Defense contained 227 columns and 5,342 rows. Staff identify erroneous entries through periodic manual checks and report errors to the property book officer, according to MNSTC-I officials. Although MNSTC-I issued a draft Standard Operating Procedures handbook to help assigned personnel input data accurately and produce relevant reports, these procedures require multiple steps and could lead to the unintentional inclusion of incorrect data in calculations and reports, making them prone to error. MNSTC-I officials acknowledged they have identified numerous mistakes due to incorrect manual entries, which required them to find the original documentation to reverify the data and correct the entries. MNSTC-I officials also have acknowledged that the spreadsheet system is an inefficient management tool given the large size of the program, large amount of data, and limited number of personnel available to maintain the system. MNSTC-I plans to move the property book records from a spreadsheet system to a database management system by summer 2007. Complete and accurate records are an essential component of a property accountability system for equipment issued to Iraqi security forces. However, DOD and MNF-I cannot ensure that Iraqi security forces received the equipment as intended. DOD’s and MNF-I’s lack of clear and consistent guidance contributed to partial data collection in the field. Further, insufficient staffing, the lack of a fully developed network to distribute the equipment, and inadequate technology have hampered record keeping and data collection. Given DOD’s request for an additional $2 billion to develop Iraqi security forces, improving accountability procedures can help ensure that the equipment purchased with these funds reaches the intended recipients. 
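The manual cross-checks described above amount to a reconciliation: each property-book entry should be backed by a supporting receipt confirming Iraqi acceptance of the item. A minimal sketch of that check, with entirely invented records and field names (the real property books, with their 227 columns, are not reproduced here):

```python
# Hypothetical property-book rows; "serial" and "receipt_id" are
# invented field names used only to illustrate the reconciliation idea.
property_book = [
    {"item": "AK-47 rifle", "serial": "A1001", "receipt_id": "R-17"},
    {"item": "radio",       "serial": "B2002", "receipt_id": None},  # no receipt on file
]

# IDs of hand receipts actually collected (scanned, faxed, or hand-carried).
receipts_on_file = {"R-17"}

# Flag entries with no supporting document, or a receipt ID that does
# not match any collected receipt.
unsupported = [row for row in property_book
               if row["receipt_id"] not in receipts_on_file]

print([row["serial"] for row in unsupported])  # ['B2002']
```

A database system of the kind MNSTC-I planned to adopt could run this kind of check automatically on every entry, rather than relying on periodic manual review of spreadsheet rows.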
In addition, adequate accountability procedures can help MNF-I identify Iraqi forces’ legitimate equipment needs, thereby supporting the effective development of these forces. To help ensure that U.S. funded equipment reaches the Iraqi security forces as intended, we recommend that the Secretary of Defense take the following two actions: Determine which DOD accountability procedures apply or should apply to the program. After defining the required accountability procedures, ensure that sufficient staff, functioning distribution networks, standard operating procedures, and proper technology are available to meet the new requirements. We provided a draft of this report to the Secretary of Defense for review and comment. We received written comments from the DOD that are reprinted in appendix II. DOD concurred with both of our recommendations and indicated that they are currently reviewing policies and procedures for equipment accountability to ensure that proper accountability is in place for the Iraqi train-and-equip program. DOD also indicated that it is important to ensure that proper staffing, financial management, property distribution, information management and communications systems are in working order. We are sending copies of this report to interested congressional committees. We will also make copies available to others on request. In addition, this report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report (1) examines the property accountability procedures that the Department of Defense (DOD) and Multinational Force-Iraq (MNF-I) may have applied to the U.S. 
train-and-equip program for Iraq and (2) assesses whether DOD and MNF-I can account for the U.S.-funded equipment issued to Iraqi security forces. Our work focused on the accountability requirements for the transportation and distribution of U.S.-funded equipment and did not review any requirements relevant to the procurement of this equipment. We performed our work from March 2006 through July 2007 in accordance with generally accepted government auditing standards. To examine the laws and regulations that govern property accountability, we reviewed the relevant legislation that has appropriated funds to train and equip Iraqi security forces, pertinent DOD regulations, and applicable U.S. military orders. We interviewed officials from the Department of State and DOD, including the office of the Deputy Undersecretary of Defense for Logistics and Materiel Readiness; Defense Security and Cooperation Agency; the Defense Logistics Agency; Tank-automotive and Armaments Command; and Defense Reconstruction and Support Office. We also interviewed current and former officials from MNF-I, including Multinational Security Transition Command-Iraq (MNSTC-I), and Multinational Corps-Iraq (MNC-I). We reviewed MNF-I’s accountability procedures for the U.S.-funded equipment it has issued to the Iraqi security forces, and we reviewed documentation from and interviewed current and former officials with the U.S. Central Command, MNF-I, MNSTC-I, and MNC-I in Baghdad, Iraq; Tampa, Florida; Washington, D.C.; and Fort Leavenworth, Kansas. To provide our analysis on the amount of equipment reported by MNF-I as issued to the Iraqi security forces, we interviewed key officials to gain an understanding of the MNSTC-I property book data and information reported by the former MNSTC-I commander. To assess the reliability of the former MNSTC-I commander’s data, we compared the data to classified information and interviewed former MNSTC-I officials about their procedures for collecting the data. 
Although we could not fully determine the reliability and accuracy of these data, we determined that they were sufficiently reliable to make broad comparisons against the MNSTC-I property books and to assess major discrepancies between the two reports. In assessing the documents supporting the January 2007 MNSTC-I property books, we were limited by MNSTC-I’s inability to scan large amounts of these supporting paper documents and provide them to us electronically. We obtained a sample by requesting supporting documents for 1 week in each of the following months—February, April, July, and November of 2006 (a month in every quarter)—to develop a judgmental sample. In addition to the contact named above, Judy A. McCloskey (Assistant Director), Nanette J. Barton, Lynn Cothern, Martin De Alteriis, Mattias Fenton, Mary Moutsos, and Jason Pogacnik made significant contributions to this report. David Bruno, Monica Brym, and Brent Helt also provided assistance. | Since 2003, the United States has provided about $19.2 billion to develop Iraqi security forces. The Department of Defense (DOD) recently requested an additional $2 billion to continue this effort. Components of the Multinational Force-Iraq (MNF-I), including the Multinational Security Transition Command-Iraq (MNSTC-I), are responsible for implementing the U.S. program to train and equip Iraqi forces. This report (1) examines the property accountability procedures DOD and MNF-I applied to the U.S. train-and-equip program for Iraq and (2) assesses whether DOD and MNF-I can account for the U.S.-funded equipment issued to the Iraqi security forces. To accomplish these objectives, GAO reviewed MNSTC-I property books as of January 2007 and interviewed current and former officials from DOD and MNF-I. As of July 2007, DOD and MNF-I had not specified which DOD accountability procedures, if any, apply to the train-and-equip program for Iraq. 
Congress funded the train-and-equip program for Iraq outside traditional security assistance programs, providing DOD a large degree of flexibility in managing the program, according to DOD officials. These officials stated that since the funding did not go through traditional security assistance programs, the DOD accountability requirements normally applicable to these programs did not apply. Further, MNF-I does not currently have orders that comprehensively specify accountability procedures for equipment distributed to the Iraqi forces. DOD and MNF-I cannot fully account for Iraqi forces' receipt of U.S.-funded equipment. Two factors led to this lapse in accountability. First, MNSTC-I did not maintain a centralized record of all equipment distributed to Iraqi forces before December 2005. At that time, MNSTC-I established a property book system to track issuance of equipment to the Iraqi forces and attempted to recover past records. GAO found a discrepancy of at least 190,000 weapons between data reported by the former MNSTC-I commander and the property books. Former MNSTC-I officials stated that this lapse was due to insufficient staff and the lack of a fully operational distribution network, among other reasons. Second, since the beginning of the program, MNSTC-I has not consistently collected supporting records confirming the dates the equipment was received, the quantities of equipment delivered, or the Iraqi units receiving the items. Since June 2006, the command has placed greater emphasis on collecting the supporting documents. However, GAO's review of the January 2007 property books found continuing problems with missing and incomplete records. Further, the property books consist of extensive electronic spreadsheets, which are an inefficient management tool given the large amount of data and limited personnel to maintain the system. |
Creosote is derived by distilling tar; the type of creosote most commonly used for wood-treating is manufactured from coal tar. Polycyclic aromatic hydrocarbons—chemicals formed during the incomplete burning of coal, oil, gas, or other organic substances—generally make up 85 percent of the chemical composition of creosote. EPA classifies some of the polycyclic aromatic hydrocarbons in creosote, such as benzo(a)pyrene, as probable human carcinogens. Some polycyclic aromatic hydrocarbons also may have noncarcinogenic health effects, such as decreased liver or kidney weight. From approximately the early 1910s to the mid-1950s, the Federal Creosote site was a wood-treatment facility. Untreated railroad ties were delivered to the site and, to preserve them, coal tar creosote was applied to the railroad ties at a treatment plant located on the western portion of the property (see fig. 1 for an illustration of the site). Residual creosote from the treatment process was discharged into two canals that led to two lagoons on the northern and southern parts of the site, respectively. After treatment, the railroad ties were moved to the central portion of the property, where excess creosote from the treated wood dripped onto the ground. The treatment plant ceased operations in the mid-1950s. During the late 1950s and early 1960s, the area where the treatment plant was formerly located was developed into a 15-acre commercial and retail property known as the Rustic Mall. Through the mid-1960s, other areas of the property, including the former canal, lagoon, and drip areas, were developed into a 35-acre residential neighborhood known as the Claremont Development, which was made up of 137 single-family homes that housed several hundred residents. 
Issues with creosote contamination at the site became apparent in April 1996, when the New Jersey Department of Environmental Protection (NJDEP) responded to an incident involving the discharge of an unknown thick, tarry substance from a sump located at one of the residences in the Claremont Development. Later, in January 1997, the Borough of Manville responded to complaints that a sinkhole had developed around a sewer pipe in the Claremont Development. Excavation of the soil around the sewer pipe identified a black, tar-like material in the soil. After an initial site investigation, EPA found contamination in both the surface and subsurface soils as well as in the groundwater beneath the site. In 1999, EPA placed the site on the National Priorities List (NPL) and divided it into three smaller units, called operable units (OUs). OU1 consisted of the source contamination (free-product creosote) in the lagoon and canal areas of the Claremont Development. OU2 included other soil contamination in the Claremont Development, such as residually contaminated soil at properties over and near the lagoon and canal areas and the drip area of the former wood-treatment facility. OU2 also included contamination at a nearby day-care facility. OU3 included the Rustic Mall soil contamination as well as groundwater contamination throughout the site. EPA completed all major site cleanup work in November 2007, and the site was declared “construction complete” in March 2008. Ultimately, EPA performed cleanup activities on 93 of the 137 properties in the residential area as well as on the commercial portion of the site. EPA’s ongoing activities at the site include monitoring groundwater contamination, conducting 5-year reviews of contamination levels to ensure that the remedy remains protective of human health and the environment, and selling properties that EPA acquired during the remedial action. 
According to EPA officials, the agency could remove the site from the NPL as early as 2011; however, this decision will depend on the results of contamination monitoring at the site. Most Superfund sites progress through the cleanup process in roughly the same way, although EPA may take different approaches on the basis of site-specific conditions. After listing a site on the NPL, EPA initiates a process to assess the extent of the contamination, decides on the actions that will be taken to address that contamination, and implements those actions. Figure 2 outlines the process EPA typically follows, from listing a site on the NPL through deletion from the NPL. In the site study phase of the cleanup, EPA or a responsible party conducts a two-part remedial investigation/feasibility study (RI/FS) process. The first part of this process—the remedial investigation—consists of data collection efforts to characterize site conditions, determine the nature of the waste, assess risks to human health and the environment, and conduct treatability testing as necessary to evaluate the potential performance and cost of the treatment technologies that are being considered. During the second part of the RI/FS process—the feasibility study—EPA identifies and evaluates various options to address the problems identified through the remedial investigation. EPA also develops cleanup goals, which include qualitative remedial action objectives that provide a general description of what the action will accomplish (e.g., preventing contamination from reaching groundwater) as well as preliminary quantitative remediation goals that describe the level of cleanup to be achieved. According to EPA guidance, it may be necessary to screen out certain options to reduce the number of technologies that will be analyzed in detail to minimize the resources dedicated to evaluating less promising options. 
EPA screens technologies on the basis of the following three criteria: effectiveness: the potential effectiveness of technologies in meeting the cleanup goals, the potential impacts on human health and the environment during implementation, and how proven and reliable the technology is with respect to the contaminants and conditions at the site; implementability: the technical and administrative feasibility of the technology, including the evaluation of treatment requirements and the relative ease or difficulty in achieving operation and maintenance requirements; and cost: the capital and operation and maintenance costs of a technology (i.e., each technology is evaluated to determine whether its costs are high, moderate, or low relative to other options within the same category). After screening the technologies that it has identified, EPA combines selected technologies into remedial alternatives. EPA may develop alternatives to address a contaminated medium (e.g., groundwater), a specific area of the site (e.g., a waste lagoon or contaminated hot spot), or the entire site. EPA guidance states that a range of alternatives should be developed, varying primarily in the extent to which they rely on the long-term management of contamination and untreated wastes. In addition, containment options involving little or no treatment, as well as a no-action alternative, should be developed. EPA then evaluates alternatives using the nine evaluation criteria shown in table 1 and documents its selected alternative in a record of decision (ROD). Next, either EPA or a responsible party may initiate the remedial action that was documented in the ROD. Like the RI/FS, implementation of the remedial action is divided into two phases. The first phase is the remedial design, which involves a series of engineering reports, documents, and specifications that detail the steps to be taken during the remedial action to meet the cleanup goals established for the site. 
For EPA-led remedial actions, EPA may either select a private contractor to perform the remedial design or, under a 1984 interagency agreement with the U.S. Army Corps of Engineers (the Corps), assign responsibility for designing the remedial action to the Corps, which may select and oversee a private contractor to perform the design work. The second phase is the remedial action phase, where the selected remedy, as defined by the remedial design, is implemented. Similar to the design phase, for EPA-led remedial actions, EPA may either select a private contractor to perform the remedial action or assign the remedial action to the Corps, which would be responsible for contractor selection and oversight during the remedial construction. When physical construction of all remedial actions is complete and other criteria are met, EPA deems the site to be “construction complete.” Most sites then enter into an operation and maintenance phase, when the responsible party or the state maintains the remedy while EPA conducts periodic reviews to ensure that the remedy continues to protect human health and the environment. For example, at a site with soil contamination, the remedial action could be to build a cap over the contamination, while the operation and maintenance phase would consist of monitoring and maintaining the cap. Eventually, when EPA determines, with state concurrence, that no further remedial activities at the site are appropriate, EPA may delete the site from the NPL. The extent of the contamination in a residential area at the Federal Creosote site was the primary factor that influenced EPA’s risk assessment conclusions, remedy selection decisions, and site work priorities. EPA determined that risk levels were unacceptable given the site’s residential use. EPA then selected remedies for the site, taking into account space constraints and other challenges associated with a residential cleanup. 
Finally, EPA placed a high priority on scheduling and funding site work because the contaminated area was residential, thereby reaching key cleanup milestones relatively quickly. From the spring of 1997 to the summer of 2001, EPA conducted multiple rounds of sampling and risk assessment at the Federal Creosote site and concluded that human health risks exceeded acceptable levels. Specifically, EPA assessed the air, groundwater, surface soil, and subsurface soil as part of an initial site investigation and an RI/FS process. See appendix III for a timeline of EPA’s risk assessment activities. EPA’s initial investigation of site contamination, which began in 1997, included such efforts as assessing whether contamination was affecting public drinking water supplies; investigating the nature of the bedrock and the aquifer underlying the site; collecting soil samples from 30 properties selected on the basis of their proximity to the lagoons, canals, and drip area of the former wood-treatment facility; and collecting approximately 1,350 surface soil samples (up to 3 inches below the ground surface) from 133 properties in and near the residential development. From this initial investigation, EPA concluded that site contamination posed unacceptable human health risks. For example, while EPA found that contamination did not pose short-term health risks that could require an evacuation of residents, EPA found that the contamination was extensive and uncontrolled; had impacted soil, sediment, and groundwater in the area; and likely posed long-term health risks. For soil contamination in particular, EPA determined that, in some areas, the contamination was within 2 to 3 feet of the ground surface; in other areas, EPA found that the contamination was covered by little or no fill material. According to a site document, one resident had discovered a large amount of buried tar when installing a fence on his property. 
As a result of its concerns that surface soil contamination could pose a risk to residents, EPA developed a surface soil risk assessment in January 1999. EPA concluded that soil contamination levels at 27 properties in the residential area posed long-term human health risks, including carcinogenic or noncarcinogenic risks (or both), that exceeded acceptable levels. In addition to soil contamination, EPA’s initial investigation determined that creosote had contaminated groundwater in the soil as well as in fractures in the bedrock underlying the site, which was a potential source of drinking water. Furthermore, EPA’s aquifer investigation showed that groundwater from the site had the potential to influence the Borough of Manville’s municipal water supply wells, although Region 2 officials said the nature of the fractures made it difficult for EPA to determine whether site contamination would actually affect the wells. According to Region 2 officials, the purpose of a remedial investigation is to collect enough data to determine whether there is a need to take a remedial action. These officials said that an RI/FS for OU1 was not necessary because EPA had obtained much more information from its initial investigation on the extent of contamination at properties over the lagoon and canal source areas than is typically available to support taking an action. Also, according to EPA, the data that were collected during this initial investigation were equivalent in scope to that of a remedial investigation. Therefore, because EPA was trying to address the source contamination in the residential area on an expedited basis, the agency chose to incorporate these data into an Engineering Evaluation/Cost Analysis because it allowed EPA to evaluate remedial alternatives in a more streamlined way, as compared with an RI/FS report. 
However, for OU2 and OU3, EPA initiated an RI/FS process in 1998 to more fully characterize the extent of soil and groundwater contamination throughout the site. EPA’s OU2 soil evaluation determined that elevated levels of creosote contamination close to the surface in the residential area were generally found near the lagoons and canals, while the drip area generally had residual levels of contamination close to the surface. Underlying the site, EPA found that free-product creosote rested on a clay layer approximately 6 to 10 feet below the surface, although in some areas the layer was not continuous, and the creosote had migrated as deep as the bedrock, roughly 25 to 35 feet underground. On the basis of these findings, in April 2000, EPA developed a human health risk assessment for soil contamination in the residential area using a sample of six representative properties: two properties each represented the lagoon and canal areas, the drip area, and the remaining residential area, respectively. EPA found that soil contamination exceeded acceptable risk levels at the lagoon and canal and drip areas, but not at properties representing other areas of the Claremont Development. Furthermore, EPA’s OU3 soil analysis revealed that contamination was generally in three main areas of the mall, with several other “hot spots” of contaminated material. EPA also determined that most of the soil contamination was within the first 2 feet below the ground surface; however, in certain areas, contamination was as deep as 35 feet below the surface. EPA noted that it did not collect soil samples from under the mall buildings, although, according to a site document, EPA thought it likely that contamination remained under at least a portion of one of the buildings. EPA assessed the human health risks from exposure to soil contamination in June 2001. At the time of EPA’s assessment, OU3 was a commercial area. 
However, the Borough of Manville and the mall owner had indicated that the area could be redeveloped for a mixed residential/commercial use. Therefore, EPA evaluated risks for OU3 under both residential and commercial use scenarios, and found that risks exceeded acceptable levels for residential use at some areas of the mall and for commercial use at one area. Finally, EPA’s OU3 RI/FS investigation determined that contaminated groundwater in the soil above the bedrock had not migrated far from the original source areas of the lagoons and canals. However, free-product creosote had penetrated as deep as 120 feet into the fractured bedrock, and groundwater contamination in the bedrock had moved through the fractures toward two nearby rivers. On the basis of these results, in July 2001, EPA evaluated the potential human health risks from groundwater contamination to on-site and off-site residents (i.e., residents who lived on or near the site) and commercial workers, and found that risks for on-site residents and workers exceeded acceptable levels for carcinogenic and noncarcinogenic contaminants. The Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry (ATSDR) also evaluated the risks from site contamination and published a series of studies that expressed concern about site contamination levels. Between May 1997 and February 1999, ATSDR published five health consultations that responded to EPA requests to answer specific questions, such as whether consuming vegetables grown in site soils posed a health threat. For example, ATSDR’s first consultation concluded that subsurface soil contamination levels posed a threat to residents if the contamination was dug up, or if similar levels of contamination were discovered in surface soils. 
Then, in September 2000, ATSDR published a public health assessment that evaluated site contamination and concluded that past and present exposures to surface soil (at that time) did not represent an apparent health hazard. However, the assessment also stated that this conclusion did not rule out the need for remedial action because subsurface contamination posed a long-term hazard if soil 2 feet below the ground in certain areas was disturbed. ATSDR and EPA officials told us that ATSDR’s conclusion that surface soil contamination did not pose a public health hazard did not mean that EPA’s action to remediate the site was unwarranted. In particular, officials from both agencies cited differences in the agencies’ risk assessment views and processes as a reason why they could reach alternative conclusions about site risks. For example, ATSDR officials indicated that ATSDR’s assessment focused on conditions in the first 6 inches of soil to evaluate what contamination exposures residents may have been subject to in the past and at the time of the assessment. However, the officials said that EPA’s risk assessment would have been more focused on the hypothetical situation where subsurface soil contamination is brought to the surface in the future. Therefore, the officials said that, in fact, ATSDR would have had very serious concerns if the site had not been remediated because of the potential for high levels of contamination in the subsurface soil to be brought to the surface through activities such as tree planting or house remodeling. ATSDR also had concerns about potential exposures to groundwater contamination. As a result, the officials stated that ATSDR’s assessment recommended that EPA continue its plans to implement a remedial action to remove source material from the site. On the basis of its conclusions about site risks, EPA set cleanup goals for different areas of the site that, when achieved, would reduce risks to acceptable levels for residential use. 
For example, EPA established site-specific qualitative objectives for its remedial actions, such as preventing human exposure to contamination, cleaning up areas of source contamination to allow for unrestricted land use and prevent future impacts to groundwater quality, and minimizing disturbance to residents and occupants of the Rustic Mall during a remedial action. EPA also developed quantitative remediation goals to identify the level at which remedial actions would need to be implemented to protect human health. According to site documents, there were no federal or state cleanup standards for soil contamination at the time of the cleanup effort. Therefore, EPA established risk-based remediation goals that would reduce excess carcinogenic risks to a level of 1 in 1 million, and that were consistent with New Jersey guidance for residential direct contact with soil. For the groundwater contamination, EPA used both federal and state chemical-specific standards to set risk-based remediation goals. According to site documents and Region 2 officials, risk levels required a remedial action regardless of the site’s future use. The officials said that EPA considered what level of waste could be left on-site while still allowing for unrestricted residential use of properties; however, they noted that, with unrestricted residential use, there is a very low threshold for the level of waste that can be left on-site. They said that even the residually contaminated soil was sufficiently contaminated that EPA dug between 10 and 14 feet deep to allow for unrestricted use of residents’ properties. Similarly, EPA determined that source material in the Rustic Mall needed to be remediated because of the potential future residential use of the site.
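The arithmetic behind a risk-based remediation goal of this kind follows a simplified form of EPA's standard soil-ingestion exposure equation, sketched below. The parameter values and the benzo(a)pyrene slope factor are generic illustrative defaults, not the actual inputs EPA used at the Federal Creosote site.

```python
# Sketch of risk-based soil cleanup goal arithmetic for incidental soil
# ingestion (simplified from EPA's standard exposure equations). All
# parameter values are generic illustrative defaults, NOT site inputs.

def soil_cleanup_goal_mg_per_kg(target_risk, slope_factor):
    """Solve Risk = C * IR * CF * EF * ED * SF / (BW * AT) for C."""
    IR = 100        # soil ingestion rate, mg/day (assumed adult default)
    CF = 1e-6       # unit conversion, kg/mg
    EF = 350        # exposure frequency, days/year
    ED = 30         # exposure duration, years (residential)
    BW = 70         # body weight, kg
    AT = 70 * 365   # averaging time, days (lifetime, for carcinogens)
    return target_risk * BW * AT / (slope_factor * IR * CF * EF * ED)

# Benzo(a)pyrene, a typical creosote carcinogen (historical oral slope
# factor of about 7.3 per mg/kg-day), at the 1-in-1-million risk target.
goal = soil_cleanup_goal_mg_per_kg(target_risk=1e-6, slope_factor=7.3)
print(f"Illustrative risk-based goal: {goal:.2f} mg/kg")  # about 0.23 mg/kg
```

The point of the sketch is only to show how a 1-in-1-million excess cancer risk target translates into a concentration goal; EPA's actual goals also reflected New Jersey residential soil guidance.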
According to a site document, EPA determined that, under a current use scenario (at the time of its risk assessment in 2001), there were likely no unacceptable human health risks from contamination under the mall because contaminants were covered by buildings and pavement. However, the contamination could be exposed if these covers were removed during site redevelopment. Therefore, EPA identified the level of site cleanup required on the basis of the most conservative future use scenario. To select remedies to address the soil and groundwater contamination at the Federal Creosote site, EPA identified potential remedial technologies from agency guidance as well as from other publications and databases that listed potentially available technologies. After identifying potential technologies, EPA screened out less viable technologies, combined selected technologies into remedial alternatives, evaluated the alternatives, and selected a preferred remedy for each OU. See appendix III for a timeline of EPA’s remedy selection efforts. Region 2 officials told us that, to identify technologies for site remediation, EPA identifies a range of technologies on a site-specific basis. According to agency guidance, EPA prefers three technologies for treating the type of soil contamination found at the Federal Creosote site: bioremediation—using microbes to degrade contaminants and convert them to carbon dioxide, water, microbial cell matter, and other products; low temperature thermal desorption (LTTD)—heating contaminated material to temperatures less than 1,000 degrees Fahrenheit to physically separate contaminants from soils; and incineration—heating contaminated material to temperatures greater than 1,000 degrees Fahrenheit to destroy contaminants. EPA also identified other technologies to cap, contain, excavate, extract, treat, or dispose of site soil or groundwater contamination, including a number of emerging or innovative technologies.
For the soil contamination, the range of technologies EPA considered varied among the OUs at the site. During its remedy selection process for OU1, EPA primarily evaluated the three technologies preferred by agency guidance for soil contamination at wood-treatment sites. According to Region 2 officials, EPA considered a limited range of technologies for OU1 because, originally, the agency was evaluating whether it would need to evacuate residents to protect them from site contamination. Consequently, EPA conducted a more streamlined remedy selection process for OU1 to speed decision making. Alternatively, for OU2 (and later for OU3), EPA evaluated a wider range of technologies, including several emerging technologies. In addition, Region 2 officials stated that differences in the contamination between the OUs impacted the range of technologies considered. Specifically, the officials said that the OU1 material was the more sludge-like, free-product creosote, whereas the OU2 contamination might not have been visible. The officials noted that, with less contaminated soils, more treatment options might become viable, since some options that might have difficulty treating more highly contaminated material might successfully treat less contaminated material. However, while EPA considered a wider range of technologies for OU2 and OU3, in general, EPA screened out the emerging technologies in favor of those that were identified as preferred in its guidelines. Ultimately, EPA determined that off-site thermal treatment and disposal of the soil contamination would best achieve its cleanup goals and were consistent with residential use of the site. In implementing this remedy, EPA determined that it would need to purchase some houses—where contamination was inaccessible without demolishing the houses—and permanently relocate these residents, while residents in other houses would only need to be relocated temporarily. 
For the groundwater contamination, Region 2 officials said that EPA tried to determine how to clean up the contaminated groundwater in the fractured bedrock but ultimately concluded that none of the options would be effective; moreover, many of the options would be expensive and take a long time to implement. As a result, EPA determined that attenuation of the groundwater contamination over time, long-term monitoring, and institutional controls to prevent the installation of wells at the site would be the best alternative to address contamination in the fractured bedrock. To select this remedy, EPA invoked a waiver for technical impracticability, which allowed it to select an alternative that would not comply with requirements to clean up the groundwater to levels that would meet site cleanup goals. Region 2 officials stated that one of the presumptions EPA makes in using a waiver for technical impracticability is that it has put forth its best effort to remove source contamination. Therefore, according to the officials, on the basis of agency guidance, EPA needed to clean up the source material that was contaminating the groundwater to justify a waiver for technical impracticability. Moreover, the officials said that by removing the source material, EPA may have helped prevent the contaminated groundwater area from getting larger. Also, the officials said that, in their judgment, EPA’s action would help the contamination in the bedrock attenuate more quickly, although they were unable to quantify this impact. In selecting these remedies, EPA’s decisions were influenced by several challenges associated with a residential cleanup, including (1) space constraints that limited on-site implementation of actions, (2) a determination that some options would not achieve the site cleanup goals, and (3) concerns about some options’ community impacts. Space constraints. 
According to Region 2 officials, space constraints posed by the residential nature of the site limited EPA’s ability to remediate contamination on-site. For example, the officials said that soil contamination in the lagoons and canals was interspersed throughout the residential area. As a result of the lack of available open land and the residential nature of the site, a site document indicated that options for on-site treatment and disposal of excavated material were not considered for OU1. Also, while EPA considered on-site treatment technologies and alternatives for OU2 and OU3, Region 2 officials said EPA did not consider buying additional houses to create more open space. They said that once EPA determined that the majority of houses in the residential area could be saved, it tried to demolish as few homes as possible. The officials also noted that EPA could have placed a treatment facility in a corner of the Rustic Mall, but that the mall was still a functioning commercial area at the time EPA was selecting remedies. The mall was in the middle of the town, and, according to the officials, feedback from local citizens indicated that the community relied heavily on the mall. As a result, EPA did not formally consider taking over additional areas of the mall to create more open space as part of a remedial alternative. Region 2 officials acknowledged that, after EPA began the cleanup, the owner decided to demolish the mall. However, they stated that, when EPA made its remedy selection decisions, it did not have sufficient justification to purchase or demolish the mall. In particular, EPA Region 2 officials told us that the challenge of space constraints was a key factor in why EPA chose not to implement bioremediation or LTTD—two of EPA’s preferred remedies for treating creosote contamination—on-site.
For example, the officials noted that bioremediation of excavated material on-site would have required a lot of space to store the material while it was being treated with microbes that would help degrade the contamination. Similarly, the officials said that there was not sufficient space to stockpile material for treatment using LTTD. That is, to operate an LTTD unit efficiently, the officials said that EPA would have needed to feed material into the unit constantly. However, they said doing so was not possible at the site because, while EPA might excavate 100 tons of soil on some days, on other days, EPA was unable to excavate as much since it needed to work by hand around residents’ houses. Given EPA’s inconsistent rate of excavation, the agency would have needed to stockpile material to ensure a constant flow into an LTTD unit. However, according to Region 2 officials, there was not enough space to stockpile contaminated material awaiting treatment, and, as a result, the officials estimated that EPA could have operated an on-site LTTD unit only 25 percent of the time, which they said would not have been cost-effective. Specifically, the officials said that it would take around 60,000 square feet for all of the operations associated with an LTTD unit. They noted that a space roughly this size was available in the northeast corner of the Rustic Mall. However, because of constraints, such as fire code access requirements for a bowling alley that bordered this area, the officials estimated that the total available space was actually only about 43,000 square feet. Also, EPA would have needed additional space for other facilities related to the cleanup. In addition, while EPA determined that bioremediation and LTTD could be used to treat contamination off-site, EPA found that they would be difficult to implement because of a lack of permitted commercial facilities. 
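The officials' estimate that an on-site LTTD unit could have operated only about 25 percent of the time can be illustrated with a toy model: without space to stockpile, the unit can process only what is excavated that same day. The daily tonnages and unit capacity below are hypothetical, chosen only to show the effect of an uneven excavation schedule.

```python
# Illustrative sketch (not site data) of why an on-site LTTD unit's
# utilization suffers without stockpile space. All numbers hypothetical.

CAPACITY_TONS_PER_DAY = 100  # assumed LTTD throughput

# Hypothetical daily excavation tonnage: high on open ground, near zero
# on days spent hand-digging around residents' houses.
daily_excavation = [100, 0, 20, 0, 40, 0, 10, 30]

def lttd_utilization(daily_tons, capacity):
    """Without a stockpile, the unit can only treat what was dug that day."""
    processed = sum(min(t, capacity) for t in daily_tons)
    return processed / (capacity * len(daily_tons))

u = lttd_utilization(daily_excavation, CAPACITY_TONS_PER_DAY)
print(f"Utilization without stockpiling: {u:.0%}")  # 25% for this schedule
```

Under this hypothetical schedule the unit sits idle three-quarters of the time, which matches the order of magnitude of the officials' 25 percent estimate and their conclusion that on-site LTTD would not have been cost-effective.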
As a result, EPA relied on incineration because incineration facilities were the most readily available for off-site treatment of material from the site. Level of cleanup required. EPA had concerns about whether certain technologies would effectively treat contamination to required levels, given the residential nature of the site. For example, EPA determined it was unlikely that such technologies as bioremediation of contaminated material in place would achieve the agency’s soil remediation goals, because EPA was uncertain whether the bioremediation microbes could be distributed evenly in contaminated areas since some of the contamination was under residents’ homes. Region 2 officials also said it was unlikely that EPA could have achieved its cleanup goals using bioremediation because of the high levels of soil contamination at the site. They said that if contamination levels are high, the microbes introduced into the soil could be killed before they have a chance to degrade the contaminants. Moreover, because of the high contamination levels and treatment requirements at the site, the officials said they had concerns about the effectiveness of using LTTD. They stated that LTTD treats material using lower temperatures than incineration, and that it removes about 80 percent of the contamination each time material is passed through the unit. As a result, sometimes material must be treated multiple times before it meets residential standards. The officials indicated that this would have probably been the case with the Federal Creosote material because it was so highly contaminated. They said, given the nature of the contamination at the site, incineration was a more efficient method of treatment to achieve the agency’s remediation goals. 
While the high treatment levels required because of the residential nature of the site impacted EPA’s choices about individual soil remediation technologies, they also influenced decisions about whether to dispose of treated and untreated material on-site, or at an off-site location. According to Region 2 officials, if EPA disposed of excavated material on-site, the agency would have had to ensure, through treatment and testing, that the soil met residential standards. Consequently, the officials concluded that if EPA disposed of excavated material on-site, it would have had to treat and test the material more extensively than it did for off-site disposal. The officials said that only about 35 percent of the material excavated from the site needed to be thermally treated before it could be disposed of off-site. The rest of the excavated material could be disposed of without treatment at a hazardous or nonhazardous waste landfill. However, they said, if EPA had disposed of material on-site, it would have had to test and possibly treat 100 percent of the material to ensure that it met residential standards. Due to the potential expense of additional treatment and sampling, EPA determined that off-site disposal would be more cost-effective. For the groundwater contamination, according to site documents, EPA found that none of its remedial alternatives, including those based on extracting or treating the contamination in place, would be able to achieve its cleanup goals effectively and reliably within a reasonable time frame. For example, EPA found that some of the groundwater contaminants could take decades to move through the groundwater, and, as a result, it would take an extremely long time to remediate these contaminants using an extraction technology.
Moreover, EPA estimated that the technology that was most likely to be able to achieve its remediation goals—extracting contaminants using steam—would cause significant disruption to the residential neighborhood and would be much more expensive than EPA’s other alternatives. On the basis of its experience at other sites, EPA determined that complete removal of the groundwater contamination in the bedrock at the site was not practicable. In addition, EPA found that several of the treatment technologies it considered would not be effective at treating the highly contaminated free-product creosote found in portions of the site. Community impacts. The residential nature of the site and the importance of the Rustic Mall to the community also influenced EPA’s remedy selection, given the effects that different technologies and alternatives might have on the community. For example, according to EPA, some of the substances that could be used to immobilize soil contamination in the ground were potentially more toxic than the creosote contamination. Also, certain options that treated contamination in place or extracted it from the soil or groundwater would have emitted heat or gas that could have posed risks to residents and the community. Moreover, EPA determined that some options would have significantly disrupted the community because of the need to install equipment, wells, and piping throughout the residential and commercial areas. Also, because EPA was implementing a remedial action in a residential neighborhood at the site, it was concerned about the length of the cleanup and other timing impacts on the community. Region 2 officials said that EPA generally does not use certain alternatives unless the agency has the flexibility to accomplish remediation over a long time frame on the basis of the current land use (e.g., the site is abandoned).
Under these circumstances, EPA could use a remedy like bioremediation of contaminated material in place, which would cause long-term disruption if implemented in a residential neighborhood. Also, Region 2 officials said that, if EPA had used on-site LTTD to treat contaminated material, it could not have operated the unit in the most efficient way—24 hours a day—because the residents in houses within 200 feet of where the unit would have been located would have been negatively affected by its lights and noise during the night. However, the officials said, if EPA had only run the LTTD unit 8 hours a day, the cleanup effort would have taken much longer. The length of time involved was a particular concern in EPA’s evaluation of groundwater remediation alternatives. According to the Region 2 officials, the best alternative to extract contaminated groundwater from the bedrock would have taken 18 to 20 years to implement and would have covered the site with machinery. Finally, EPA factored future land use impacts into its remedy selection decisions. For example, EPA found that options that relied on containment or deed restrictions, but that left contamination under and around the residential community, were not viable alternatives. Region 2 officials said capping the contamination would not have supported use of the land as a residential area because residents would have had to sign agreements not to disturb the cap, which would have restricted their use of the properties. Also, because of these restrictions, the officials said it is likely that some owners would have refused to sign the necessary agreements, and EPA would have had to take an enforcement action. Similarly, EPA avoided certain remedies for the Rustic Mall because of the impacts that they could have on the community’s ability to redevelop the mall as well as on the operation of the mall.
A Borough of Manville official told us that the Rustic Mall was the “hub of the town” and was located directly behind buildings on the town’s Main Street. As a result, he said the community was very opposed to alternatives that would have left or treated contamination on-site. He said that, in the town’s view, the contamination under the mall needed to be cleaned up. Otherwise, it would have been difficult to get tenants into the mall in the future, and the town might have ended up with a blighted area in the center of the community. He also said the community was concerned that no one would want to come and shop at the mall if there was a treatment facility in the parking lot. EPA placed a high priority on scheduling and funding the Federal Creosote site work because the contamination was in a residential area. According to Region 2 officials, it is rare to find source contamination, such as the free-product creosote, under a residential area, and most sites with the level and extent of contamination found at the Federal Creosote site are abandoned. The officials said EPA places the highest priority on addressing the principal threats at residential sites first. As evidence of this prioritization, EPA initiated efforts to study, select a remedy for, and begin cleanup of the residential part of the site before undertaking similar efforts for the Rustic Mall. For example, Region 2 officials said that EPA decided relatively early in the cleanup process to break the site into three OUs to allow work to proceed as quickly as possible. EPA determined that it needed to get to work immediately on OU1, and that the groundwater contamination and commercial area could wait until after EPA had decided what to do with the residential area. 
The Region 2 officials said that breaking the site into different OUs was important because EPA knew that it needed to relocate some OU1 residents, and this process can be time-consuming—one official noted that residents who must permanently relocate have 1 year to do so. While this process took less time at the Federal Creosote site, EPA did not know that would be the case initially. Moreover, the Region 2 officials said that the first couple of years EPA spent studying the site caused a great deal of anxiety for residents, because they did not understand the risks of remaining in their homes and could not sell their homes if the homes would need to be demolished. The officials said the OU1 ROD informed residents that most of the homes in the neighborhood would not need to be demolished, and this helped reduce residents’ anxiety. EPA also took steps to shorten the time needed to select, design, and implement the remedial actions. For example, Region 2 officials said that, because of the residential nature of the site, the site investigation process was both unusually extensive and expedited in comparison to other sites. Region 2 officials said that EPA began sampling early because, when the site was discovered, the agency was concerned that contamination risks could be so significant that residents might need to be evacuated. As a result, they said that the agency gathered a large amount of information about site contamination before listing the site on the NPL. The officials said this data collection effort helped EPA move forward with site work quickly because, with a large amount of data to use to gauge its overall approach to the site, EPA was able to compress the removal evaluation, listing process, and RI/FS into a relatively short amount of time. In addition, EPA tried to streamline work by configuring its sampling efforts to satisfy postexcavation requirements to confirm that contaminated material no longer remained on-site. 
Specifically, site documents show that to meet New Jersey requirements, EPA took samples on 30-by-30 foot grids to confirm that contamination was no longer present along the sides and bottom of an excavated area. Rather than wait until the excavation was completed to take additional samples to confirm that contamination was not present, EPA incorporated these requirements into earlier sampling efforts. As a result, if samples were clean, EPA could immediately backfill an area, which reduced the overall length of the cleanup effort. Finally, in an effort to expedite the cleanup effort, EPA Region 2 officials said that more of the region’s resources were devoted to the site relative to other sites that the region needed to address at that time. As a result of these efforts to prioritize and expedite site cleanup work, the Federal Creosote site reached key cleanup milestones in less time than some other site cleanups. Region 2 officials said that they completed the three RODs for the site in about 3 years, which they said is a very quick time frame to complete such analyses. They noted that issuing a ROD is an intensive process that at another site, for example, took over a decade. Also, the Federal Creosote site reached EPA’s construction complete stage more quickly than other megasites—that is, sites at which actual or expected total cleanup costs, including removal and remedial action costs, are expected to amount to $50 million or more. In July 2009, we reported that, based on EPA data through fiscal year 2007, the median length of time it took for megasites to reach construction complete after NPL listing was 14.8 years. However, according to EPA data, the Federal Creosote site reached construction complete in just over 9 years. 
Total site costs exceeded construction estimates at the Federal Creosote site by roughly $233 million, primarily because (1) EPA’s early construction estimates were not designed to include all site-related expenses and (2) additional quantities of contaminated material were discovered during the cleanup effort. Other factors, such as methodological variation for estimating site costs and contractor fraud, accounted for a smaller portion of the cost difference. According to our analysis, total site-related costs, including remedial construction and other response costs at the Federal Creosote site through the spring of 2009, were approximately $338 million, a roughly $233 million difference from the estimated remedial construction costs of $105 million. Total site costs were higher than construction estimates for several reasons. As shown in figure 3, of the $233 million difference, 39.6 percent (or about $92 million) is due to other response costs that were not included in EPA’s construction estimates; 47.5 percent (or about $111 million) is from an increase in remedial construction costs—mostly directly related to the discovery of additional contaminated material; and 12.9 percent (or about $30 million) is due to other factors—primarily differences in cost estimation methodology and, to a smaller extent, contractor fraud. EPA intentionally included only costs related to the construction and maintenance of the selected remedies rather than total sitewide costs in its early cost estimates, which follows its guidance, according to the agency. EPA prepares these preliminary estimates during the remedy selection process to compare projected construction costs across different remedial action alternatives.
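The dollar split shown in figure 3 can be checked directly from the report's figures, as in this short sketch (all amounts in millions of dollars, taken from the text above):

```python
# Arithmetic check of the $233 million difference between estimated
# remedial construction costs and total site-related costs, using the
# percentages reported for figure 3.

TOTAL_SITE_COSTS = 338        # $ millions, through spring 2009
ESTIMATED_CONSTRUCTION = 105  # $ millions, early construction estimate

difference = TOTAL_SITE_COSTS - ESTIMATED_CONSTRUCTION  # 233

shares = {
    "other response costs not in estimates": 0.396,       # ~ $92 million
    "added remedial construction (more soil)": 0.475,     # ~ $111 million
    "other factors (methodology, fraud)": 0.129,          # ~ $30 million
}
for label, share in shares.items():
    print(f"{label}: about ${share * difference:.0f} million")
```

The three shares sum to 100 percent of the $233 million difference, matching the approximate dollar amounts reported in the text.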
Specifically, the National Contingency Plan directs EPA to consider the capital costs of construction and any long-term operation and maintenance costs as part of the remedial alternative screening process. According to EPA guidance, these estimates are not intended to include all site-related expenses, and certain expenses, such as early site investigation and EPA enforcement costs, are beyond the scope of these early estimates because these costs are not linked to a specific remedial alternative and, therefore, would not affect the relative comparison of alternatives. For example, while site investigation studies were conducted for each operable unit, these studies were completed prior to remedy selection to inform the selection process and, therefore, were not linked to any particular remedy. Similarly, the removal cleanup of surface soils in the residential area occurred prior to remedy selection and, therefore, was not related to the construction costs of any particular remedial alternative. Table 2 summarizes costs for activities that were not included in EPA’s remedial construction cost estimates—other response costs—at the Federal Creosote site. During excavation, contractors discovered greater-than-expected amounts of contaminated material requiring remediation across all OUs, which contributed most to the difference between estimated and actual construction costs. Based on our analysis of EPA documents, the initial ROD estimates for the site indicated that approximately 154,100 to 164,400 tons of material would need to be excavated for treatment or disposal; however, EPA ultimately found that roughly 456,600 tons of material needed to be excavated—an increase of at least 178 percent. As shown in table 3, according to our analysis, increased amounts excavated from the OU1 and OU3 areas contributed the most to the difference between the estimated and actual excavated amounts across the site as a whole. 
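The reported "increase of at least 178 percent" follows from comparing the actual excavated tonnage against the high end of the initial ROD estimate range:

```python
# Check of the reported increase in excavated material: initial ROD
# estimates of 154,100 to 164,400 tons vs. roughly 456,600 tons actually
# excavated.

est_low, est_high = 154_100, 164_400  # tons, initial ROD estimate range
actual = 456_600                      # tons, actually excavated

# "At least 178 percent" corresponds to the high end of the estimate range;
# against the low end the increase is even larger.
increase_vs_high = (actual - est_high) / est_high
increase_vs_low = (actual - est_low) / est_low
print(f"Increase vs. high estimate: {increase_vs_high:.0%}")  # 178%
print(f"Increase vs. low estimate:  {increase_vs_low:.0%}")   # 196%
```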
According to EPA officials, it is common for EPA to remove more soil than originally estimated at Superfund sites because of the uncertainty inherent in using soil samples to estimate the extent of underground contamination. For example, EPA guidance indicates that the scope of a remedial action is expected to be continuously refined as the project progresses into the design stage and as additional site characterization data and information become available. However, both Corps and EPA officials stated that the Federal Creosote site posed a particular challenge for estimating soil quantities prior to excavation because of the way in which the waste moved at the site and, in some cases, because of access restrictions during sampling. According to EPA’s Remedial Project Manager (RPM) for the site, soil contaminants generally either stay in place or migrate straight down; however, while some of the creosote waste at the site stayed in place, some of the waste migrated both horizontally and vertically. The RPM said that this migration made it difficult to predict the waste’s location through sampling. For example, during excavation, contractors found seams of contaminated material, some of which led to additional pockets of creosote waste, while others did not. Given the diameter of the sampling boreholes (which were generally 2 to 4 inches wide) and the width of the seams of creosote waste (which in some cases were only 6 inches wide), the sampling process could not detect all of the creosote seams at the site, despite what EPA officials considered to be the extensive sampling during the early site investigations that formed the basis for the initial cost estimates. Additionally, sampling during the site investigations for the residential area as well as the Rustic Mall was limited by the location of buildings and access restrictions, according to EPA’s RPM. 
For example, site documents indicate that no samples could be taken from under the mall during the OU3 soil investigation because the buildings were being used. It was not until the mall owners decided to demolish the existing structures as part of a town revitalization plan that mall tenants left and EPA was able to take samples in the areas covered by the buildings. These areas were found to contain additional areas of creosote waste, as shown in figure 4. Although the mobility of the waste in the subsurface soil and sampling limitations hindered EPA’s ability to determine the total quantity of material requiring excavation during the pre-ROD site investigation when the initial cost estimates were prepared, soil sampling during this stage was generally successful at identifying which residential properties contained contamination, according to our analysis of site documents. For example, pre-ROD soil sampling allowed EPA to correctly identify 83 of the 93 residential properties that would eventually require remediation, as shown in figure 5. According to EPA guidance, because of the inherent uncertainty in estimating the extent of site contamination from early investigation data, cost estimates prepared during the RI/FS stage are based on a conceptual rather than a detailed idea of the remedial action under consideration. The guidance states that these estimates, therefore, are expected to provide sufficient information for EPA to compare alternatives on an “order of magnitude” basis, rather than to provide an exact estimate of a particular remedy’s costs. For example, the guidance also states that preliminary cost estimates prepared to compare remedial alternatives during the detailed analysis phase of the RI/FS process are expected to range from 30 percent below to 50 percent above actual costs. However, at the Federal Creosote site, actual construction costs were more than twice what EPA estimated. 
Specifically, we found that sitewide remedial construction costs increased by $141 million over EPA’s estimated amounts. According to site documents, increases in the quantity of material requiring excavation, transportation, treatment, or disposal resulted in higher construction costs across all OUs. Our analysis of site cost data indicated that construction costs potentially associated with the additional quantity of contaminated material accounted for most of this increase ($111 million, or about 78.7 percent). In particular, soil excavation, transportation, treatment, and disposal costs constituted approximately 56.1 percent ($62 million) of the increased construction costs potentially related to additional quantities of material, and 26.7 percent of the overall $233 million difference between estimated construction and total site costs, as shown in figure 6. According to EPA’s RPM, both the need to excavate greater amounts of material and the reclassification of excavated material from nonhazardous waste to hazardous waste affected excavation, transportation, treatment, and disposal costs. For example, the discovery of additional pockets of creosote waste increased the overall amount of material requiring excavation and treatment or disposal because, in addition to removing the waste itself, any soil overlying the contamination needed to be removed and disposed of to access the creosote waste. Additionally, if a pocket of creosote waste was unexpectedly discovered in an area of soil that had already been designated for excavation and disposal in a landfill without treatment because prior sampling indicated it was less contaminated, the overall amount of soil to be excavated would not be affected, but costs would increase because treatment is more expensive than landfill disposal. 
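The cost attributions above can be reproduced from the report’s dollar figures (a minimal sketch; the report’s percentages were computed from unrounded amounts, so shares recomputed from the rounded millions match only approximately):

```python
# Decomposition of the ~$233M gap between estimated remedial construction
# costs ($105M) and total site costs ($338M), per the report (in millions).
total_site_cost = 338
estimated_construction = 105
gap = total_site_cost - estimated_construction  # ~233

other_response = 92     # costs excluded from EPA's early estimates
quantity_related = 111  # construction increases tied to extra material
other_factors = 30      # estimation methodology and contractor fraud

shares = {name: amt / gap * 100
          for name, amt in [("other response", other_response),
                            ("quantity-related", quantity_related),
                            ("other factors", other_factors)]}
# Roughly 39.5%, 47.6%, and 12.9% -- close to the report's 39.6/47.5/12.9.
print(shares)

# The quantity-related portion of the $141M construction increase:
construction_increase = 141
print(f"{quantity_related / construction_increase * 100:.1f}%")  # 78.7%
```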
In addition, EPA and Corps officials said that the need to remediate greater quantities of material contributed to increases in other sitewide construction costs, such as general construction requirements and site restoration costs. Our analysis showed that such costs accounted for another 20.9 percent of the difference between estimated construction costs and total site costs—although the exact extent to which additional amounts of material contributed to the difference in costs is not clear. EPA’s RPM stated that the effect of increased quantities varied, depending on the OU. However, EPA and Corps officials said that in general, more extensive excavation would increase design engineering, inspection, and other costs as well as costs for general construction requirements and for site restoration, as shown in table 4. For example, the decision to remediate additional contaminated material under the Rustic Mall buildings led to increased design engineering costs because the original excavation plans were created under the assumption that the mall would remain standing, and further rounds of design sampling were needed to identify the extent and location of contamination once the buildings were demolished. Additionally, our analysis of site documents indicated that the increased time required to excavate additional material could have led to greater project costs for general construction requirements, such as temporary facility rental, site security, and health and safety costs. Similarly, site restoration costs, such as costs for backfill soil, could have increased because more backfill would be required to restore the site after excavation. According to the RPM, EPA and the Corps instituted certain controls at the site to minimize costs. In particular, the RPM stated that the Corps took steps to ensure that material was not unnecessarily excavated and sent for treatment and disposal. 
For example, if contractors found an unexpected pocket of creosote waste during excavation, they were required to notify the Corps official on-site, who would decide whether additional excavation was required depending upon visual inspection and additional testing, as needed. The contractor was not allowed to excavate beyond the original excavation limits without Corps approval. According to the RPM, the Corps’ approach of reevaluating the original excavation depth on the basis of additional sampling results and a visual inspection of the soil led to cost savings because in some areas less material needed to be excavated than originally planned. Furthermore, EPA and Corps officials stated that this process minimized unnecessary treatment and disposal costs that might be incurred if “clean” soil was sent for treatment or hazardous waste disposal. Additionally, EPA’s decision in November 2002 to allow treated soil to be disposed of in a nonhazardous waste facility if it met the facility’s criteria for contamination levels helped reduce unit costs for treatment and disposal because disposing of soil at a hazardous waste facility is more expensive. For example, in a bid for a contract to treat and dispose of soil following EPA’s decision, the selected subcontractor submitted a unit price for treatment and disposal at a nonhazardous waste facility that was $80 (or 16 percent) less than its unit price for treatment and disposal at a hazardous waste facility—which for that particular contract saved $800,000. Furthermore, on the basis of information gathered from site documents and from statements made by EPA and Corps officials, EPA and the Corps took other steps intended to minimize costs. For example, a Corps official said that reducing the duration of the project could help minimize certain site costs.
Specifically, according to our analysis of site documents, to reduce the amount of time spent waiting for sampling results prior to backfilling an excavated area, EPA and the Corps incorporated state postexcavation sampling requirements into their design sampling plans for earlier investigations. Accordingly, unless additional excavation was required to meet the cleanup goals, these samples could be used to confirm that the boundaries of the excavation areas had been tested for contamination. Additionally, our analysis of site documents showed that the Corps tested various odor control measures before beginning excavation at certain areas of the site, which allowed it to use less expensive odor control alternatives than originally planned and saved approximately $1.1 million in implementation costs. These measures also helped to speed up the construction work. Finally, according to the RPM, the Corps was able to minimize costs by managing the work to avoid costly contractor demobilization and remobilization expenses. For example, the Corps dissuaded the contractors from removing idle equipment and worked with the RPM to resolve administrative or funding issues or questions about the work as they arose to prevent an expensive work stoppage. Other factors, including different cost-estimating methodologies and contractor fraud, explain a smaller portion of the difference between estimated construction and total site costs at the Federal Creosote site. In developing its estimates, EPA followed agency guidance, which states that as a simplifying assumption, most early cost estimates assume that all construction costs will be incurred in a single year. According to EPA, since the estimated implementation periods for EPA’s remedial actions were relatively short periods of time, EPA did not discount future construction costs in its estimates, and, therefore, these estimates were higher than they would have been otherwise. 
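The effect of discounting described above can be illustrated with a standard present-value calculation (a hypothetical sketch: the three-year schedule and 7 percent rate below are illustrative assumptions, not figures from the site documents):

```python
# Present value of a multiyear construction cost stream. EPA's early
# estimates assumed all costs were incurred in a single year (no
# discounting); spreading the same nominal costs over several years and
# discounting them lowers the total, as the analysis describes.
def present_value(costs_by_year, rate):
    """costs_by_year[t] is the cost incurred t years from now."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs_by_year))

nominal = [35.0, 35.0, 35.0]  # hypothetical $105M spread over 3 years
undiscounted = sum(nominal)   # 105.0 -- the single-year assumption
discounted = present_value(nominal, rate=0.07)  # illustrative 7% rate

print(f"undiscounted: ${undiscounted:.1f}M, discounted: ${discounted:.1f}M")
```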
In accordance with our best practices regarding the use of discounting, we adjusted the initial cost estimates to reflect that costs were projected to accrue over several years and that, therefore, future costs should be discounted. However, by discounting future construction costs prior to adjusting for inflation, our discounted values were lower than EPA’s original estimates in site documents. According to our analysis, discounting estimated costs accounted for approximately 12 percent of the $233 million difference between estimated construction and total site costs (see fig. 7). Contractor fraud also contributed to the difference between estimated construction and total site costs, but to a small degree. However, while some parties have pled guilty to fraud, the full extent of the effect of fraud on site costs will not be known until all investigations are complete. Court documents alleged that employees of the prime contractor at the site, as well as some subcontractors, were engaged in various kickback and fraud schemes, which resulted in inflated prices for certain subcontractor services. For example, a subcontractor for soil treatment and disposal agreed to pay approximately $1.7 million in restitution to EPA for fraud in inflating its bid prices. In addition, court documents alleged that fraudulent price inflation also affected other site costs, including certain subcontracts for items such as wastewater treatment, backfill, landscaping services, and utilities. To date, our analysis of available court documents indicated that at least approximately $2.1 million in inflated payments may be directly attributable to fraud at the Federal Creosote site. On the basis of currently available information, this figure represents less than 1 percent of the difference between estimated construction and total site costs. However, since the fraud investigations are ongoing and additional charges may be filed, the full extent of contractor fraud is not currently known. 
See appendix I for more information about site-related fraud investigations. EPA managed the overall cleanup and communicated with residents through a dedicated on-site staff presence, among other actions. The Corps implemented the cleanup work by hiring and overseeing contractors; the Corps was less involved in selecting and overseeing subcontractors at the site. According to a 1984 interagency agreement between EPA and the Corps for the cleanup of Superfund sites, EPA maintains statutory responsibility for implementing the Superfund program. In addition to selecting the remedy at a site, EPA provides overall management of the cleanup, ensures that adequate funding is available, and manages relationships with other interested parties, such as residents. If EPA decides that Corps assistance is needed to conduct cleanup work, EPA establishes site-specific interagency agreements. These agreements outline the specific tasks and responsibilities of the Corps at the site and provide a proposed budget for the activities listed. Once the site-specific agreements are established, EPA’s primary responsibilities are to make sure that the work continues without interruption and that adequate funding is available, according to EPA officials. EPA officials also noted that the agency does not have the authority to direct Corps contractors at the site; rather, all instruction and direction to contractors goes through the Corps. To fulfill its project management and community outreach responsibilities, EPA dedicated a full-time RPM to the Federal Creosote site, according to Region 2 officials. Although RPMs generally have two or more sites for which they are responsible at any given time, Region 2 officials stated that the size and complexity of the site required a higher level of EPA involvement.
For example, the officials said that the relatively large size of the site and stringent cleanup goals meant that a large area was excavated, and the complexity of the cleanup process led to a greater number of questions from the Corps and its contractors that required EPA’s attention. According to the officials, the RPM was on-site at least two to three times per week; however, during some segments of the work, he was on-site almost every day. They noted that the design phase in particular required close coordination with the Corps because design activities for different areas of the site occurred simultaneously and were often concurrent with construction. Consequently, the RPM said he was on-site working with the Corps and its design contractor to design new phases of the work; revise existing designs; and answer any questions regarding ongoing construction activity, such as whether to excavate additional pockets of waste found during the construction phase. According to the RPM, although the Corps was required to ask EPA for approval only to expand excavation to properties that were not included in the RODs, in practice, Corps officials kept him informed whenever additional excavation was required, and, in many cases, he made the decision regarding whether to broaden or deepen the excavated area. To monitor project progress and funding, the RPM had weekly on-site meetings with the Corps and received weekly and monthly reports on progress and site expenditures, according to EPA officials. At the weekly meetings, the RPM would answer Corps questions regarding the work and be informed of any contracting or subcontracting issues that might delay or stop work at the site. Moreover, as part of EPA’s oversight of site progress, the RPM said he reviewed Corps documents regarding any changes in the scope of the work. 
Because EPA provided funding to the Corps on an incremental basis, the RPM also closely monitored the rate of Corps expenditures to ensure sufficient funding to continue the work, according to EPA officials. The RPM explained that he also reviewed Corps cost information for unusual charges and, with the exception of a few instances of labor charge discrepancies, most of the time the Corps reports did not contain anything surprising. In the few instances where the RPM found a discrepancy, he contacted Corps officials, and they were able to explain the reason for the discrepancy—for example, a problem with the Corps’ billing software. The RPM stated that, under the interagency agreement with the Corps, he did not review contractor invoices or expenditures because the Corps had both the responsibility and the expertise necessary to determine whether the contractor charges were appropriate, given the assigned work. Additionally, EPA officials stated that the residential nature of the site necessitated a substantial investment in community relations to manage residents’ concerns about the contaminated material under their homes and the Rustic Mall. As part of these efforts, EPA used such tools as flyers, newsletters, resident meetings, and media interviews to communicate with concerned citizens. According to the RPM, managing community relations required the second largest commitment of his time, after designing the work. He said that he spent a great deal of time working with residents to help them understand the situation during the early site investigation stage, when it was not clear who was going to need to move out of their homes and residents were concerned about their health and property. The RPM said that he also worked personally with residents during the design and implementation of the remedy to minimize the impact to the community and to inform it of any additional actions needed, such as excavating contamination across a property line or closing roads. 
According to site documents and a local official, EPA’s community relations efforts were successful at reducing residents’ anxieties. For example, in a summary of lessons learned from the cleanup effort, site documents indicate that EPA’s policy of promptly responding to community inquiries and the regular presence of EPA personnel at the site helped to establish and preserve a high level of public acceptance and trust with the community. Also, a Borough of Manville official noted that the continuity provided by having one RPM dedicated to the site for the duration of the project was particularly helpful in maintaining good communication because it allowed EPA officials to know almost all of the residents on a first-name basis and encouraged their participation in the cleanup process. For example, the RPM stated that he worked closely with residents to address their concerns and minimize impacts to the community during the excavation of contaminated material and the restoration of affected areas of the neighborhood. Similarly, according to the Borough of Manville official, EPA and the contractors effectively coordinated with town officials to ensure that the cleanup effort went smoothly. For example, to minimize disruption, EPA consulted with town officials about which roads would be best to use, considering the routes and weight limitations of trucks leaving the site. In the official’s view, EPA’s outreach efforts ensured that residents and the community as a whole had sufficient information to feel comfortable about the cleanup. Consequently, despite the size and scope of the cleanup effort, the official could recall very few complaints from residents. At the Federal Creosote site, the Corps selected and oversaw private contractors’ design and implementation of the remedial action; however, the Corps was less involved in the subcontracting process. 
Under the 1984 interagency agreement with EPA, the Corps selects and oversees private contractors for all design, construction, and other related tasks at Superfund sites, in accordance with Corps procedures and procurement regulations. According to Corps officials, the Corps selected a contractor to perform the design for the three OUs at the Federal Creosote site from a list of qualified vendors and then negotiated a price for the contracts. For construction, the Corps selected a prime contractor from a pool of eligible contractors under a cost-reimbursement, indefinite-delivery/indefinite-quantity (IDIQ) contract. According to EPA and Corps guidance, this system provides more flexible and responsive contracting capabilities for Superfund sites, which may require a quick response and often lack a sufficiently defined scope of work for price negotiation. The Corps’ prime contractor performed some of the work and subcontracted some tasks to other companies. For example, the prime contractor excavated contaminated material but awarded subcontracts for transportation, treatment, and disposal of the excavated material. Other subcontracted services included providing backfill soil and landscaping for site restoration, and treating wastewater. To subcontract, the prime contractor solicited bids from potential vendors and, for smaller subcontracts, provided the Corps with advance notification of the award. To award larger subcontracts, the prime contractor requested Corps approval. To carry out its oversight responsibilities, the Corps monitored changes in the scope of the work, contractor progress and costs, and work quality. For example, Corps officials stated the following: The Corps had to approve any changes in project scope, such as excavating greater quantities of material, or any increases in other construction services or materials beyond the amounts originally negotiated between the Corps and the prime contractor.
According to EPA officials, this chain of command helped prevent any unauthorized expansion of work at the site. To monitor project progress and contractor costs during construction, the Corps reviewed prime contractor cost summary reports for each phase of the work. These reports contained detailed information on contractor costs and work progress, and, according to Corps officials, they were updated, reviewed, and corrected if necessary on a daily, weekly, and monthly basis. For example, Corps officials explained that they reviewed the daily reports primarily for accuracy and unallowable costs. For weekly and monthly reports, the Corps also examined whether the contractor was incurring costs more quickly than expected, which could indicate that a cost was incorrectly attributed or that a change in project scope was necessary (i.e., because particular aspects of the work were more costly than anticipated, and, therefore, a scope revision was needed to complete planned activities). However, Corps officials commented that the contractor data were generally accurate, and that errors were infrequent. The officials also said that, during the most active periods of the work, they discussed the cost reports and project progress, including any potential changes in unit costs, during the weekly meetings with the contractor. The Corps also monitored work quality at the site. According to site documents, the Corps was required to implement a quality assurance plan as part of its oversight responsibilities and had a quality assurance representative at the site during construction. For example, in a July 2002 notice to the prime contractor, the Corps identified several workmanship deficiencies that the contractor had to address to retain its contract for that portion of the work. According to Corps guidance and officials, the Corps had a limited role in the subcontracting process at the Federal Creosote site. 
For example, the prime contractor was responsible for selecting and overseeing subcontractors. In particular, Corps guidance states that since subcontracts are agreements solely between the prime contractor and the subcontractor, the Corps does not have the authority to enforce the subcontract provisions. Rather, the guidance indicates that the Corps oversees the prime contractor’s management systems for awarding and administering subcontracts through periodic reviews of the contractor’s subcontracting processes and ongoing reviews of subcontract awards. According to Corps officials, the Corps’ main responsibility in the subcontracting process at the Federal Creosote site was to review subcontract decisions and approve subcontracts above a certain dollar threshold. As Corps officials explained, subcontracts between $25,000 and $100,000 did not need to be approved by the Corps; rather, the prime contractor sent the Corps an “advance notification” package, which documented that the contractor had competitively solicited the work and why the contractor selected a particular subcontractor over others. However, for subcontracts greater than $100,000, the prime contractor had to submit a “request for consent” package to the Corps, which contained similar documentation as an advance notification but required Corps approval prior to awarding a subcontract. According to federal acquisition regulations and policies, when evaluating request for consent packages, Corps contracting officers should consider whether there was sufficient price competition, adequate cost or price comparison, and a sound basis for selecting a particular subcontractor over others, among other factors. Early in the project, the Corps identified several issues with the prime contractor’s performance at the site, including the award of subcontracts. 
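The subcontract review thresholds described above can be summarized as a simple decision rule (a sketch based on the dollar thresholds quoted in the report; the function name and return strings are illustrative, and the treatment of awards under $25,000 as requiring no package is an assumption):

```python
# Corps review required for a subcontract award, per the thresholds the
# report describes at the Federal Creosote site. In 2003 the Corps raised
# the consent threshold from $100,000 to $500,000.
def corps_review_required(subcontract_value, consent_threshold=100_000):
    if subcontract_value > consent_threshold:
        return "request for consent (Corps approval required before award)"
    if subcontract_value >= 25_000:
        return "advance notification (no Corps approval needed)"
    return "no package required"  # assumed for awards under $25,000

print(corps_review_required(250_000))
print(corps_review_required(250_000, consent_threshold=500_000))
```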
According to a letter the Corps sent to the prime contractor, the Corps noted that after repeated unsuccessful attempts to address these issues, the Corps would initiate proceedings to terminate the contract for site work unless the contractor took corrective action. However, Corps officials said the contractor demonstrated sufficient improvement in its documentation practices. Then, in 2003, the Corps raised the request for consent threshold from $100,000 to $500,000 because of the high volume of these packages that the Corps was receiving. A Corps official noted that while the Corps reviews and consents to the subcontracting decisions of its contractors as appropriate, it avoids becoming too involved in the subcontracting process because of bid protest rules regarding agency involvement in that process. According to the official, under these rules, a subcontract bidder cannot protest a subcontract award unless it can show that the overseeing agency was overly involved in the subcontracting process. Concerning contractors at the Federal Creosote site, the Department of Justice and EPA’s Office of Inspector General have ongoing investigations, some of which have resulted in allegations of fraud committed by employees of the prime contractor and several subcontracting firms. For example, court documents alleged bid-rigging, kickbacks, and other fraudulent activity related to the award of several subcontracts for a variety of services and materials. According to Corps officials, the Corps did not suspect issues of fraud in the subcontracting process until 2004 when, in one instance, a subcontract bidder objected to the award of a soil transportation, treatment, and disposal subcontract to another firm whose bid was substantially higher. 
Upon further review of the documents, Corps officials found that the prime contractor had not conducted a proper evaluation of the bid proposals, and the Corps withdrew its consent to the subcontract—ultimately requesting that the prime contractor solicit bids under a different process. In the revised bidding process, the firm that had won the earlier subcontract reduced its price from $482.50 to $401.00 per ton of contaminated material—only 70 cents below the competing bid submitted by the firm that had protested the original subcontract. On this basis, the prime contractor again requested consent to subcontract with the firm to which it had awarded the earlier subcontract. According to a Corps official, the Corps was suspicious of illegal activity given how close the two bids were, and Corps officials discussed whether to take formal action against the prime contractor. However, Corps officials decided they did not have sufficient evidence of wrongdoing to support a serious action but did cooperate with others’ investigations of fraud at the site. For more information on site-related fraud, see appendix I. We provided a draft of this report to the Secretary of the Army and the Administrator of the Environmental Protection Agency for review and comment. The Secretary, on behalf of the Corps of Engineers, had no comments on the draft report. EPA generally agreed with our findings regarding the agency’s actions and costs to clean up the Federal Creosote site, and provided a number of technical comments, which we incorporated as appropriate. EPA’s written comments are presented in appendix IV. In its comments, EPA noted that the draft report accurately described the cleanup of the site and correctly compared the site’s estimated and final remedial construction costs. 
However, EPA stated that comparing estimated remedial construction costs to total site costs is not an “apples to apples” comparison because some costs, such as amounts spent on removal actions or EPA personnel salaries (referred to as “other response costs” in this report), are purposely excluded from EPA’s early estimates of remedial construction costs. We agree that to identify the extent to which site costs increased over agency estimates, one should only compare estimated and actual remedial construction costs, as we do in table 4 of this report. However, our objective was, more broadly, to identify what factors contributed to the difference between the estimated remedial construction costs ($105 million) and the actual total site costs ($338 million). We found that the difference between these two amounts was $141 million in remedial construction cost increases—which were largely due to increases in the amount of contaminated material requiring remediation—and $92 million in other response costs that were not included in EPA’s original estimates. We believe it was necessary to provide information on these other response costs to more fully answer our objective and to provide a more informative accounting of the total costs that EPA incurred in cleaning up the Federal Creosote site. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Army, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Court records show that several cases have been brought concerning the Federal Creosote site cleanup. First, the Department of Justice (Justice) and the state of New Jersey have filed claims to recover cleanup costs. Second, Justice has brought criminal charges in a series of cases against one employee of the prime contractor, three subcontractor companies, and eight associated individuals involved in the cleanup, alleging fraud, among other things. Third, the prime contractor has brought a civil suit against a former employee alleged to have committed fraud and other offenses during his employment as well as against associated subcontractors. The information in this appendix provides a brief summary of known actions related to the Federal Creosote site cleanup. United States v. Tronox, LLC: The Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) provides that parties incurring costs to respond to a release or threatened release of a hazardous substance may recover such costs from legally responsible parties, including persons who owned or operated a site, among others. In this regard, the Environmental Protection Agency (EPA) identified Tronox, LLC, the successor to the companies that owned and operated the Federal Creosote site, and, for 2 years, EPA and Tronox participated in alternative dispute resolution concerning EPA’s cost recovery claims. In August 2008, Justice, on behalf of EPA, filed a civil action in the United States District Court for the District of New Jersey against Tronox, seeking recovery of costs that the government incurred for the Federal Creosote site cleanup. The complaint asserted that the government had incurred at least $280 million in response costs and would incur additional costs. 
In October 2008, the New Jersey Department of Environmental Protection and the Administrator of the New Jersey Spill Compensation Fund filed suit in the same court against Tronox, seeking recovery of costs incurred for the site, as well as damages for injury to natural resources—under both CERCLA and the New Jersey Spill Compensation Act—and public nuisance and trespass claims. In December 2008, the federal and state cases were consolidated. Tronox has stated its intent to vigorously defend against these claims. In early 2009, Tronox filed for voluntary Chapter 11 bankruptcy in federal bankruptcy court and initiated an adversary proceeding in that court, seeking a declaratory judgment on the status of the EPA and New Jersey claims with respect to the bankruptcy. Subsequently, both courts entered a stipulation filed by both the government plaintiffs and Tronox to stay the cost recovery case as well as the adversary proceeding to allow the parties to resolve the claims. As of the date of this report, the stays remain in effect. United States v. Stoerr: Norman Stoerr, a former employee of the prime contractor at the Federal Creosote site, pled guilty to three counts related to his activities as a contracts administrator at the site. Court documents alleged that over a 1-year period, the employee conspired with others to rig bids for one subcontractor at the site, resulting in EPA being charged inflated prices. In addition, the documents alleged that over several years, the employee solicited and accepted kickbacks from certain subcontractors at the Federal Creosote site and another site, and allowed the kickbacks to be fraudulently included in subcontract prices that were charged to EPA. To date, Stoerr has not been sentenced. United States v. 
McDonald et al: In August 2009, the United States indicted Gordon McDonald—a former employee of the prime contractor at the Federal Creosote site—as well as representatives of two subcontractors who worked at the site, for various counts, including kickbacks and fraud. The indictment charged that the prime contractor’s employee, a project manager, solicited and accepted kickbacks from certain subcontractors in exchange for the award of site work, and that these kickbacks resulted in EPA being charged an inflated price for the subcontractors’ work. The indictment also charged that the project manager disclosed the bid prices of other vendors during the subcontracting process, which resulted in the government paying a higher price for services than it would have otherwise paid. One of the indicted employees (James Haas)—representing a subcontractor who provided backfill material to the site—has pled guilty to providing kickbacks and submitting a bid that was fraudulently inflated by at least $0.50 per ton of material. Haas agreed to pay more than $53,000 in restitution to EPA as part of his guilty plea, and has been sentenced to serve 33 months in jail and to pay a $30,000 criminal fine. McDonald’s case is proceeding, and charges against a third defendant are still pending. United States v. Bennett Environmental, Inc.: Bennett Environmental, Inc. (BEI), a subcontractor providing soil treatment and disposal services to the Federal Creosote site cleanup, entered a plea agreement admitting to one count of fraud conspiracy. Court documents alleged that over 2 years, the company paid kickbacks to an employee or employees of the prime contractor, in return for receiving favorable treatment in the award of subcontracts, and inflated its prices charged to EPA. BEI was sentenced to 5 years’ probation and ordered to pay $1.662 million in restitution to EPA, plus a $1 million fine. United States v. 
Tejpar: Zul Tejpar, a former employee of BEI, entered a plea of guilty to one count of fraud conspiracy. Court documents alleged that Tejpar, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site and fraudulently inflated the company’s bid price after an employee of the prime contractor revealed the other bid prices. To date, Tejpar is awaiting sentencing. United States v. Griffiths: Robert P. Griffiths entered a plea of guilty to three counts related to fraudulent activity at the Federal Creosote site when he was an officer of BEI. Griffiths, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site, fraudulently inflated the company’s invoices that the prime contractor charged to EPA, and fraudulently received the bid prices of other bidders prior to award of a subcontract. To date, Griffiths is awaiting sentencing. United States v. JMJ Environmental, Inc.: JMJ Environmental, Inc., a subcontractor providing wastewater treatment supplies and services, and John Drimak, Jr., its president, entered guilty pleas related to fraudulent activity at the Federal Creosote site and another site. At the Federal Creosote site, JMJ Environmental and Drimak, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site, fraudulently inflated the company’s prices that the prime contractor charged to EPA, and arranged for intentionally high, noncompetitive bids from other vendors. To date, JMJ Environmental and Drimak are awaiting sentencing. United States v. Tranchina: Christopher Tranchina, an employee of subcontractor Ray Angelini, Inc., which provided electrical services and supplies, entered a plea of guilty to fraud conspiracy for activities at the Federal Creosote site. 
Tranchina, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site and fraudulently inflated the company’s prices that the prime contractor charged to EPA. Tranchina was sentenced to imprisonment of 20 months and ordered to pay $154,597 in restitution to EPA. United States v. Landgraber: Frederick Landgraber, president of subcontractor Elite Landscaping, Inc., entered a plea of guilty to fraud conspiracy for activities at the Federal Creosote site. Landgraber, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site and submitted fraudulent bids from fictitious vendors to give the appearance of a competitive process, resulting in EPA paying higher prices than if procurement regulations were followed. Landgraber was sentenced to imprisonment of 5 months and ordered to pay $35,000 in restitution to EPA and a $5,000 fine. United States v. Boski: National Industrial Supply, LLC, a pipe supply company, and coowner Victor Boski entered guilty pleas for fraud conspiracy at the Federal Creosote site and another site. At the Federal Creosote site, National Industrial Supply and Boski, along with coconspirators, provided kickbacks to employees of the prime contractor to influence the award of subcontracts at the site and fraudulently inflated the company’s prices that the prime contractor charged to EPA. The terms of the plea agreement require National Industrial Supply and Boski to have available $60,000 to satisfy any restitution or fine imposed by the court, among other items. To date, they are awaiting sentencing. 
This appendix provides information on the scope of work and methodology used to examine (1) how EPA assessed the risks and selected remedies for the Federal Creosote site, and what priority EPA assigned to site cleanup; (2) what factors contributed to the difference between the estimated and actual remediation costs of the site; and (3) how responsibilities for implementing and overseeing the site work were divided between EPA and the U.S. Army Corps of Engineers (the Corps). It also discusses our methodology for summarizing criminal and civil litigation related to the Federal Creosote site. To examine how EPA assessed the risks and selected remedies for the Federal Creosote site, as well as what priority it assigned to the cleanup, we reviewed EPA’s Superfund site investigation and cleanup processes, including applicable statutes, regulations, and agency guidance. We also reviewed documentation from the site’s administrative record, which detailed the agency’s activities and decisions at the site. As part of this review, we analyzed public comments that were documented in site records of decision to identify key issues with the cleanup effort. To obtain additional information on these and other site cleanup issues, we interviewed EPA Region 2 officials involved with the site, including officials from the Emergency and Remedial Response Division, the Public Affairs Division, and the Office of Regional Counsel. Furthermore, we interviewed and reviewed documentation obtained from officials with the Agency for Toxic Substances and Disease Registry regarding its determination of site risks. We also consulted with New Jersey and Borough of Manville officials to obtain their views on the cleanup effort. Finally, we interviewed representatives of the potentially responsible party for the site to obtain the party’s views on EPA’s risk assessment, remedy selection, and site prioritization. 
To determine what factors contributed to the differences between the estimated and actual costs of site cleanup, we obtained and analyzed data on estimated and actual site costs from several sources. For estimated site costs, we combined EPA’s estimates for selected remedies from site records of decision and remedial alternative evaluations. In developing these estimates, EPA applied a simplifying assumption that all construction costs would be incurred in a single year, and, therefore, did not discount future construction costs, even though work was projected to occur several years into the future as a result of design activities and resident relocations as well as EPA’s estimated construction time frames. However, our discount rate policy guidance recommends that we apply a discount factor to future costs. Consequently, to convert EPA’s estimated costs into fiscal year 2009 dollars, we (1) conducted present value analysis to discount future site costs to the dollar year of the original estimate (base year) for each remedy, using EPA’s recommended discount rate of 7 percent, and (2) converted the present value of each estimate into fiscal year 2009 dollars. To calculate the present value of the estimated costs, we identified the projected construction time frames for each remedy from site documents. Because the documents did not provide information on how construction costs would be distributed over the projected time frame, we calculated the midpoint of a range of values, assuming that all costs for particular activities comprising EPA’s selected remedies would either be incurred at the beginning of the projected time frame (the maximum value of these costs) or at the end of the projected time frame (the minimum value). To adjust the present values from the base year to fiscal year 2009 constant dollars, we divided the present values by the inflation index for the base year and weighted the calculation to convert the base year from calendar years to fiscal years. 
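The present value and constant-dollar adjustments described above can be sketched in a few lines. This is an illustrative calculation only, not GAO's actual model: the example cost, time frames, and inflation index are hypothetical, and the sketch assumes an index normalized so that fiscal year 2009 equals 1.0.

```python
# Sketch of the cost-adjustment steps described above (hypothetical numbers):
# (1) discount a future construction cost to the estimate's base year at the
#     7 percent rate, taking the midpoint of the "all costs at the start" and
#     "all costs at the end" bounds of the projected construction window;
# (2) divide by the base year's inflation index (FY2009 = 1.0) to restate the
#     present value in fiscal year 2009 constant dollars.

DISCOUNT_RATE = 0.07  # EPA's recommended discount rate

def present_value(cost, years_out, rate=DISCOUNT_RATE):
    """Discount a cost incurred `years_out` years in the future."""
    return cost / (1 + rate) ** years_out

def midpoint_present_value(cost, start_years_out, end_years_out):
    """Midpoint of the maximum (all costs at window start) and minimum
    (all costs at window end) present values."""
    high = present_value(cost, start_years_out)
    low = present_value(cost, end_years_out)
    return (high + low) / 2

def to_fy2009_dollars(pv_base_year, base_year_index):
    """Divide by the base year's index (normalized so FY2009 = 1.0)."""
    return pv_base_year / base_year_index

# Example: a $10 million remedy estimated in a base year, with construction
# projected to run from 2 to 5 years after the estimate; base-year index 0.8.
pv = midpoint_present_value(10_000_000, 2, 5)
fy2009_cost = to_fy2009_dollars(pv, 0.8)
```

The midpoint simply splits the difference between the two timing extremes, which is why the report describes both its estimated and actual cost figures as approximate.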
To identify actual sitewide costs, we compiled data from multiple sources, including EPA’s Superfund Cost Recovery Package Imaging and On-Line System (SCORPIOS) for data on site costs through April 30, 2009; the Corps of Engineers Financial Management System (CEFMS) for data on Corps and contractor costs through various dates in April and early May 2009; and contractor-generated project cost summary reports for data on contractor costs for each phase of the cleanup through February 15, 2009. We relied on multiple data sources for our analysis because none of the sources provided a sufficient level of specificity for us to comprehensively determine when and for what purpose costs were incurred. In particular, the SCORPIOS data provided specific dates of when EPA incurred costs, but for some costs, especially those related to site construction work, the data did not generally provide detailed information on why the costs were incurred. Therefore, to obtain more detailed information on the reason for incurring certain costs, we used the data from CEFMS and the contractor’s project cost summary reports. However, the CEFMS and contractor project cost summary report data did not generally provide specific information on when costs were incurred. Consequently, to determine actual site costs in fiscal year 2009 dollars, we used two approaches. For costs taken from the SCORPIOS data or when detailed information on the date of a particular cost was available, we applied the inflation index for the particular fiscal year in which EPA incurred the cost. For costs taken from the other data sources, we used the midpoint of the range of inflation-adjusted values for the construction start and end dates for individual work phases, as recorded in site documents. We worked with EPA Region 2 officials to categorize site costs, including those that were part of EPA’s original construction estimates as well as those that were not part of EPA’s estimates. 
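A minimal sketch of the two inflation-adjustment approaches just described. The index values below are invented placeholders, not actual fiscal-year deflators, and are normalized so that fiscal year 2009 equals 1.0.

```python
# Two approaches for restating actual costs in FY2009 dollars, as described
# above. Hypothetical inflation indexes, keyed by fiscal year (FY2009 = 1.0).
INDEX = {2006: 0.91, 2007: 0.94, 2008: 0.97, 2009: 1.00}

def adjust_known_date(cost, fiscal_year):
    """Approach 1: the fiscal year of the cost is known (e.g., SCORPIOS
    data), so divide by that year's index to restate in FY2009 dollars."""
    return cost / INDEX[fiscal_year]

def adjust_date_range(cost, start_fy, end_fy):
    """Approach 2: only a work phase's construction start and end dates are
    known, so take the midpoint of the inflation-adjusted range."""
    return (cost / INDEX[start_fy] + cost / INDEX[end_fy]) / 2

# Example: a $1 million cost known to fall in FY2008, and another known only
# to fall somewhere in a FY2006-FY2008 construction phase.
dated = adjust_known_date(1_000_000, 2008)
ranged = adjust_date_range(1_000_000, 2006, 2008)
```

When the start and end year coincide, the second approach reduces to the first, which is consistent with the report's statement that the resulting figures are approximate rather than exact.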
After identifying the costs that were not included in EPA’s original estimates, we took the difference between estimated and actual construction costs, according to categories that we discussed with EPA, to identify where actual costs changed the most from EPA’s estimates. Then, to identify the factors that contributed the most to the difference in these cost categories, we analyzed the types of costs in each category and interviewed EPA Region 2 and Corps officials responsible for the cleanup. In addition, we analyzed data from site documents on the estimated and actual amounts of contaminated material at various stages of the cleanup process to obtain further information on the extent to which increased amounts of contaminated material affected site costs. To examine the impact of alternative methodologies on the disparity between estimated and actual costs, we reviewed EPA cost-estimating guidance and calculated the effect of discounting future estimated costs within our analysis. To determine how fraud impacted site costs, we reviewed civil and criminal litigation documents describing the monetary values exchanged in various schemes. To ensure the reliability of the actual cost data we used for this report, we reviewed the data obtained from the SCORPIOS and CEFMS databases as well as the contractor-generated cost summary reports that the Corps provided. For each of these data sources, we reviewed agency documents and interviewed EPA and Corps officials to obtain information on their data reliability controls. We also electronically reviewed the data and compared them across all sources as well as with other information on site costs as available. For example, we compared contractor cost data provided by the Corps with similar data from the contractor-generated cost summary reports. Similarly, we compared Corps cost data from CEFMS with analogous data from EPA’s SCORPIOS database. 
Generally, we found that discrepancies among comparable data from different sources were most likely attributable to the potential delay between when a cost is incurred by a contractor and when it is invoiced and processed, first by the Corps and later by EPA. On the basis of our evaluation of these sources, we concluded that the data we collected and analyzed were sufficiently reliable for our purposes. However, because some costs incurred prior to early May 2009 may not have been processed through the Corps and EPA’s cost-tracking systems at the time of data collection, site cost data in this report are considered to be approximate. Moreover, because our methodology relied on calculating the midpoint of a range of costs for both the present value calculations and adjusting data for inflation, we consider the data we present in this report on estimated and actual costs and the difference between these costs also to be approximate. To examine how responsibilities for site work were divided between EPA and the Corps, we reviewed agency guidance regarding EPA’s responsibilities at Superfund sites. To obtain information on EPA’s oversight actions, we interviewed EPA and Corps officials responsible for site cleanup and contracting work. We also reviewed site meeting minutes, monthly progress reports, correspondence to the Corps, and relevant EPA Office of Inspector General reports. To further describe the Corps’ responsibilities at the Federal Creosote site, we reviewed Corps guidance for the cleanup of hazardous waste projects, Corps contract management best practices, and the relevant procurement regulations. To obtain information on actions that the Corps took to implement its site responsibilities, we reviewed Corps correspondence to the contractor and contractor requests for approval of soil treatment and disposal subcontracts. We also interviewed Corps officials responsible for site cleanup and contracting work as well as EPA Region 2 officials. However, we did not assess the adequacy of the Corps’ efforts or its compliance with Corps guidance and federal procurement regulations. To examine issues regarding civil and criminal litigation related to the Federal Creosote site, we collected case data from the Public Access to Court Electronic Records system. We then qualitatively analyzed documents obtained from this system to identify the issues involved and the status of each case as well as the outcomes, if any, of the cases. However, because criminal investigations are ongoing and confidential, we could not determine whether any additional criminal charges were under consideration, but relied solely on the publicly available information for charges that had been filed as of November 2009. In addition to the individual named above, Vincent P. Price, Assistant Director; Carmen Donohue; Maura Hardy; Christopher Murray; Ira Nichols-Barrer; and Lisa Van Arsdale made key contributions to this report. Elizabeth Beardsley, Nancy Crothers, Alexandra Dew, Richard Johnson, and Anne Stevens also made important contributions. | In the 1990s, creosote was discovered under a residential neighborhood in Manville, New Jersey. Creosote, a mixture of chemicals, is used to preserve wood products, such as railroad ties. Some of the chemicals in creosote may cause cancer, according to the Environmental Protection Agency (EPA). EPA found that creosote from a former wood-treatment facility (known as the Federal Creosote site) had contaminated soil and groundwater at the site. Under the Superfund program--the federal government's principal program to clean up hazardous waste--EPA assessed site risks, selected remedies, and worked with the U.S. Army Corps of Engineers to clean up the site. As of May 2009, construction of EPA's remedies for the site had been completed; however, total site costs were almost $340 million and remedial construction costs had exceeded original estimates. 
In this context, GAO was asked to examine (1) how EPA assessed risks and selected remedies for the site, and what priority EPA gave to site cleanup; (2) what factors contributed to the difference between the estimated and actual costs; and (3) how EPA and the Corps divided responsibilities for site work. GAO analyzed EPA and Corps documents and data on the cleanup effort and its costs, and interviewed officials from these agencies. This report contains no recommendations. EPA generally agreed with GAO's findings on the agency's cleanup costs and actions, while the U.S. Army Corps of Engineers had no comments. The extent of the contamination in a residential area at the Federal Creosote site was the primary factor influencing EPA's risk assessment conclusions, remedy selection decisions, and how EPA prioritized site work, according to site documents and agency officials. EPA assessed site contamination through multiple rounds of evaluation and concluded that soil and groundwater contamination levels were high enough that EPA needed to take action. Then, EPA evaluated remedies to achieve cleanup goals that it had established for the site and that were consistent with its residential use. EPA selected off-site treatment and disposal of the contaminated soil and long-term monitoring of the groundwater contamination as the remedies for the site. In selecting these remedies, EPA considered a range of alternatives but ultimately determined that certain options would be potentially infeasible or ineffective due to the residential setting. For example, EPA chose not to implement certain alternatives on-site because the agency found that there was insufficient space and they would be too disruptive to nearby residents. 
In addition, EPA chose not to implement certain alternatives because the agency found that they would be unlikely to achieve the cleanup goals for the site, especially considering the high level of treatment required to allow for unrestricted residential use of the area and the high levels of contamination found at the site. EPA made cleanup of the site a high priority because the contamination was in a residential area. For example, EPA took steps to shorten the cleanup period and prioritized the use of regional Superfund resources on the Federal Creosote site over other sites in the region. The $338 million in total site costs exceeded EPA's estimated remedial construction costs of $105 million by about $233 million, primarily because EPA's estimates focused only on construction costs, and EPA discovered additional contamination during the cleanup effort. EPA prepared preliminary cost estimates during the remedy selection process; however, EPA requires that these estimates include only the costs associated with implementing different remedies it was considering, not all site costs. Also, as a result of the movement of contamination in the ground and sampling limitations during EPA's site investigation, a greater-than-expected amount of contamination was discovered during the cleanup effort, which increased costs. Other factors, such as contractor fraud, affected total site costs to a lesser extent. EPA was responsible for managing the overall site cleanup and community relations, while the Corps was responsible for implementing the cleanup. EPA dedicated a full-time staff member to manage the site cleanup who, according to EPA, maintained a significant on-site presence to ensure that the project remained on schedule and was adequately funded and to work with residents. EPA also oversaw the work of the Corps and its costs. 
To conduct the actual cleanup work, the Corps hired contractors to design or implement cleanup activities who, in turn, hired subcontractors for some tasks. The Corps oversaw the activities and costs of its primary contractors but, according to Corps officials, was less involved in selecting and overseeing subcontractors. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
EOS is the centerpiece of NASA’s Mission to Planet Earth, whose overall goal is to understand the total earth system (air, water, land, life, and their interactions) and the effects of natural and human-induced changes on the global environment. EOS has three major components: (1) a constellation of satellites designed to collect at least 15 years of key climate-related data; (2) a data and information system designed to operate the satellites and process, archive, and distribute the data; and (3) teams of scientists who develop algorithms for converting sensor data into useful information and conduct basic research using the information. The satellites, and data and information system, which will absorb most of the program’s funding, provide the researchers with measurements that will enable them to address established research priorities. EOS is designed to make 24 types of long-term measurements of solar irradiance and the earth’s atmosphere, land cover, ice sheets, and oceans from orbiting spacecraft. By 2002, when the full constellation will be in orbit, EOS will be generating data from 25 instruments on at least 10 spacecraft. Over the 20-year EOS data-collection phase, about 80 instruments will be launched on more than 30 satellites. As currently planned, the last EOS satellite will cease operations in 2020. EOS measurements will support researchers’ efforts to address Mission to Planet Earth’s research priorities: (1) determine the causes and consequences of changes in atmospheric ozone; (2) improve seasonal-to-interannual climate prediction; (3) determine the mechanisms of long-term climate variability; (4) document changes in land cover, biodiversity, and global productivity; and (5) understand earth processes that can lead to natural disasters and develop risk assessment capabilities for vulnerable regions. Mission to Planet Earth is NASA’s contribution to the governmentwide U.S. Global Change Research Program. 
An important goal of these interconnected efforts is to improve the predictive capability of numerical earth system models, especially global climate models that investigate and predict the general circulation of the atmosphere and ocean. NASA has identified a potentially large and diverse “user community” for EOS-related information. Members of this community could be, for example, educators, businessmen, and public policymakers. The focus of our analysis, however, is the EOS basic research community, by which we mean NASA’s currently funded EOS interdisciplinary science and instrument investigations. In our June 1995 report, we estimated that funding requirements of the EOS baseline program would total about $33 billion for fiscal years 1991 to 2022. This estimate was developed for the program described in NASA’s 1995 EOS reference handbook and included costs for satellites, launch services, data systems, science, construction of facilities, and civil service personnel. However, NASA later recognized that this program was not affordable in an environment of declining budgets and began studying ways to cut costs by using advanced technology and increasing collaboration with other agencies, international partners, and the commercial sector. NASA intended to use these future savings to fund more science under EOS and to reduce the program’s total cost. Over the past several years, the Congress has progressively reduced NASA’s planned spending on EOS for fiscal years 1990 to 2000 from $17 billion to $7.25 billion. In response, NASA changed EOS in 1991 and 1992 from a complete earth system measuring program that would have supported a wide array of global change investigations to a measurement program that will primarily support investigations of global changes to the earth’s climate. For example, NASA dropped the measurement of upper atmospheric chemistry and solid earth processes. 
Other changes followed in order to further adjust EOS to its progressively lower budget profile through 2000. NASA officials stated the current planned spending for EOS through 2000 is about $6.8 billion. The administration’s fiscal year 1997 request for Mission to Planet Earth is $1.402 billion, of which $846.8 million is for development of EOS’ data and information system, spacecraft, instruments, and algorithms. NASA’s request includes $47.5 million for EOS interdisciplinary science. According to NASA’s 5-year plan based on its fiscal year 1996 budget submission, NASA intends to increase spending on EOS interdisciplinary science to $73.2 million per year in fiscal year 2000. Like EOS-related space systems and information systems, the development of the EOS basic research community that will conduct interdisciplinary global climate change research requires planning. The current number of EOS investigations funded by NASA is relatively small, and NASA recognizes that it needs to increase their number, broaden the membership of EOS science teams, and take other steps to develop and sustain an EOS-era research community. NASA’s strategy for developing the EOS research community is partly based on increased funding. In 1995, it began efforts to fund additional investigations and to reevaluate the current investigations. NASA’s ability to add more investigations is uncertain within its expected future budgets, especially if it must depend on savings from improved technology and increased collaboration with others. The EOS program is currently funding 29 interdisciplinary science investigations that were selected in 1989 and 1990 to use data from EOS instruments in more than one earth science discipline, such as geology, oceanography, meteorology, and climatology. Scientists associated with these investigations serve as members of the Investigator Working Group, developing detailed science plans and assisting NASA in optimizing the scientific return of the EOS mission. 
Currently, these 29 investigations are led by 31 interdisciplinary principal investigators (2 of the interdisciplinary science investigations have coprincipal investigators). There are 354 coinvestigators associated with the 29 interdisciplinary science investigations, as well as 20 instrument principal investigators/team leaders and 197 other instrument team members. The number of EOS investigations is relatively small when compared with (1) the number of currently funded investigations associated with two pre-EOS missions—the Upper Atmosphere Research Satellite (UARS) and the U.S.-French Oceanography Satellite Ocean Topography Experiment (TOPEX/Poseidon)—and (2) the ratio of the number of investigations to the raw data acquisition rate expected from instruments on EOS spacecraft, compared with the corresponding ratios for UARS and TOPEX. The comparison is based on the following EOS spacecraft and instruments: AM; PM; Chemistry mission (CHEM); Landsat-7; Radar ALT; Laser ALT; Stratospheric Aerosol and Gas Experiment (SAGE) III on space station; and Solar Stellar Irradiance Comparison Experiment (SOLSTICE), Active Cavity Radiometer Irradiance Monitor (ACRIM), and Clouds and Earth’s Radiant Energy System (CERES) on flights of opportunity. The data rates of the EOS spacecraft and UARS/TOPEX are not strictly comparable because the instruments on the latter satellites do not directly observe the Earth. Imaging instruments are more data intensive than nonimaging instruments. However, data rate comparisons can serve as a rough indicator of the magnitude of potential research opportunities afforded by EOS and two pre-EOS-era missions. The National Aeronautics and Space Administration (NASA) used similar comparisons in its 1993 and 1995 editions of the EOS reference handbook. 
In the 1995 edition, NASA graphically compared the combined data rates of EOS-era satellites with the combined data rates of numerous pre-EOS-era (including UARS and TOPEX) and foreign satellites to demonstrate that the magnitude of potential research opportunities for EOS is much greater than for other combinations of Earth-sensing satellites. In its handbooks, NASA depicted the data streams flowing from the two groups of satellites to “10,000 users” in the 1993 edition and a more vaguely defined “user community” in the 1995 edition. In place of the broadly defined “users” and user community, we used the actual number of currently funded EOS, UARS, and TOPEX investigations to illustrate (1) that the magnitude of potential EOS basic research opportunities is much greater than those afforded by UARS and TOPEX (as indicated by their respective data rates) and (2) that the number of currently funded EOS investigations is small compared to the number of currently funded UARS and TOPEX investigations. UARS, launched in September 1991, consists of 10 instruments that are measuring the composition and temperature of the upper atmosphere, atmospheric winds, and energy from the sun. The UARS science investigations are led by 22 teams. NASA broadened the UARS science investigations in 1994 by selecting 40 additional teams led by “guest” investigators. It is also funding correlative measurement investigations led by 38 teams to develop an independent database to validate and complement measurements made by UARS’ instruments. In the EOS era, solar energy and atmospheric chemistry measurements will be made principally by the ACRIM, SAGE, and SOLSTICE instruments and the CHEM spacecraft. Currently, only 12 instrument and interdisciplinary science investigations are associated with these instruments and the CHEM spacecraft. In contrast, UARS supports research conducted by 62 instrument and science teams. 
TOPEX was launched in August 1992 to study the circulation of the world’s oceans. The primary instrument is an altimeter that measures the height of the satellite above the ocean, wind speed, and wave height. NASA and its French partner, Centre National d’Etudes Spatiales, selected 38 science investigations. The 38 TOPEX-related science teams have about 200 members, and NASA plans to solicit additional investigations. In the EOS era, the follow-on mission to TOPEX is Radar-ALT. An instrument team has not yet been selected, but only 7 of the 29 interdisciplinary science investigations currently plan to use Radar-ALT data. There is a large difference between (1) the ratio of currently funded EOS investigations to the expected volume of data from EOS and (2) the corresponding ratio of currently funded UARS and TOPEX investigations to the volume of data from these two pre-EOS missions. The combined number of the UARS and TOPEX science investigations is a little larger than the current number of EOS investigations, even though EOS’ data rate (our indicator of the magnitude of potential research opportunities) is close to 1,000 times greater than the combined data rate of UARS and TOPEX. EOS will provide up to 42 million bits of data per second to 49 interdisciplinary science and instrument investigations. The corresponding ratio for UARS and TOPEX is a total of 48 thousand bits of data per second to 60 investigations. The National Research Council’s Board on Sustainable Development reviewed the U.S. Global Change Research Program, Mission to Planet Earth, and EOS in 1995 and stated that one of the “fundamental guiding principles” of the U.S. Global Change Research Program is an “open and accessible program” that will “encourage broad participation” by the government, academic, and private sectors.
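As an illustrative check (not part of the original analysis), the data rate and investigation counts cited above can be compared directly:

```python
# Rough check of the data-rate comparison; figures are taken from the report text.
eos_rate = 42_000_000        # bits per second expected from EOS instruments
eos_investigations = 49      # funded interdisciplinary science and instrument investigations

pre_eos_rate = 48_000        # combined UARS and TOPEX bits per second
pre_eos_investigations = 60  # combined UARS and TOPEX investigations

# Ratio of total data rates ("close to 1,000 times greater" in the text).
rate_ratio = eos_rate / pre_eos_rate
print(f"EOS data rate is {rate_ratio:.0f} times the combined UARS/TOPEX rate")

# Data rate per funded investigation, a rough measure of research opportunity per team.
print(f"Bits per second per investigation: EOS {eos_rate / eos_investigations:,.0f}, "
      f"UARS/TOPEX {pre_eos_rate / pre_eos_investigations:,.0f}")
```

The exact ratio, 875, is consistent with the report's rounded figure of "close to 1,000."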
Some NASA officials and EOS investigators are concerned that the Earth sciences research community perceives EOS’ science teams as a “closed shop,” whereby membership on a current team is a precondition for conducting future EOS-related research. To counter this perception, NASA’s current strategy to expand the EOS research community involves (1) an open data access policy and (2) efforts to broaden and change the current community by adding investigations, reevaluating the current science investigations, and recruiting new investigators. A vital part of the EOS data policy is that EOS data will be available to everyone: there will be no period of exclusive access for funded investigators. This has not always been NASA’s policy. On some past Earth observing missions, funded investigators had exclusive use of the data for an extended period of time. For example, the original investigators associated with the Upper Atmosphere Research Satellite had exclusive access to the first year’s data for up to 2 years. EOS data users as a rule will not be charged more than the cost of distributing data to them. The data policy contemplates a variety of potential user groups, not all of whom will be engaged in basic research. In 1995, NASA sponsored a conference to better define the user groups. The conferees identified 12 potential user groups, of which only 3 were primarily composed of scientists. The others included commercial users, resource planners, and educational groups. NASA officials stated that about 10,000 Earth scientists might use EOS-related data. Even with the large size of this potential research community and the open-access data policy, the sufficiency of EOS investigations might appear to be the least of NASA’s problems. Even though 10,000 Earth scientists may be potential users of EOS data, they still need to be funded to conduct basic research. 
According to NASA officials, as a general rule, for this type of work, scientists analyze data when they are paid to do so. We sought to confirm this observation by reviewing the authorship of 172 journal articles about 2 pre-EOS-era satellites—UARS and TOPEX/Poseidon. Our review showed that publicly funded investigators wrote all but 10 of the articles. We reviewed the authorship of UARS and TOPEX articles published in scientific journals from the approximate dates of launch through May 1995; these articles were selected from a database consisting of about 4,500 periodicals. The principal investigators wrote 123 (72 percent) of the 172 articles. In addition, we identified two other kinds of investigators probably associated with the principal investigators and/or government funded—that is, investigators associated with the principal investigator’s institution (most often a university or government agency) or another government agency. These “associate” investigators wrote 39 (23 percent) of the journal articles. Not all people who get Earth sciences data use it to do basic research. For example, from January through May 1995, NASA’s Jet Propulsion Laboratory sent 55,521 TOPEX-related data files to 28,495 requesters through the Internet. This figure does not necessarily represent separate requesters. The laboratory does not know how these requesters use TOPEX data, but according to a laboratory official, data accessed through the Internet is generally not sufficient for doing basic research. Investigators want less processed data for this type of research. NASA originally solicited proposals for EOS interdisciplinary science and instrument investigations in January 1988. The solicitation noted that NASA planned to fund 10 to 20 science investigations, with other selections possible before the launch of the first EOS platform, then scheduled for late 1995. 
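The authorship percentages reported above can be re-derived from the article counts (an illustrative check, not part of the original review):

```python
# Re-deriving the authorship breakdown of the 172 UARS/TOPEX journal articles.
total_articles = 172
by_principal = 123   # articles written by principal investigators
by_associate = 39    # articles written by "associate" investigators
remainder = total_articles - by_principal - by_associate

print(f"Principal investigators: {by_principal / total_articles:.0%}")
print(f"Associate investigators: {by_associate / total_articles:.0%}")
print(f"All but {remainder} articles were written by publicly funded investigators")
```

The counts reproduce the report's figures: 72 percent, 23 percent, and 10 remaining articles.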
NASA received 458 proposals in response to its solicitation, including about 250 for interdisciplinary science investigations. As previously noted, 29 interdisciplinary science and 20 instrument investigations are being funded by NASA and its international partners. The lifetime of the science investigations was to extend for 4 years beyond the launch of the first satellite, or until 1999. In other words, NASA intended to add to this first group of investigations over a 10-year period (1989 to 1999). However, at a minimum, the lifetime of this first group of investigations has been extended to 13 years (1989 to 2002, including 4 years beyond AM-1’s 1998 launch date). NASA’s plan to supplement the first group of science investigations with a second group within 6 years was not too optimistic given its funding expectations at that time. NASA’s EOS mission planning (1982-87) took place during a time of expanding resources. During the 1980s, NASA’s funding increased each year, essentially doubling from about $5 billion to $10 billion between fiscal years 1981 and 1989. NASA has recognized that more EOS investigations are needed, and last year it took a first step to add more. NASA solicited proposals in September 1995 to address, among other things, specific interdisciplinary science issues that are not well covered by existing NASA-funded investigations. It received 134 interdisciplinary science proposals and hopes to add 20 to 25 investigations with grants of about $250,000 to $400,000 per year for a period of up to 3 years. NASA is funding the interdisciplinary science part of the September 1995 solicitation with a $9-million “funding wedge” created, in part, from reductions in the previously planned funding levels for some existing EOS investigations. According to a NASA official, no new money will be used to fund these investigators. 
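The arithmetic behind the $9-million funding wedge can be bounded with the figures above (a rough envelope, not NASA's actual allocation plan):

```python
# Bounding the annual cost of the proposed additional investigations against the wedge.
wedge = 9_000_000  # annual "funding wedge" cited in the text

for count in (20, 25):                    # planned range of added investigations
    for grant in (250_000, 400_000):      # cited per-investigation annual grant range
        cost = count * grant
        status = "fits within" if cost <= wedge else "exceeds"
        print(f"{count} grants at ${grant:,}/year = ${cost:,} ({status} the wedge)")
```

Only the upper corner of the range (25 investigations at $400,000 each) would exceed the wedge, which suggests the plan is feasible at most grant sizes.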
It remains to be seen whether NASA’s ability to generate future savings in the program will become a major factor in increasing the number of EOS investigations. Although potentially useful over the longer term, these grants will not increase the number of EOS investigations in the near term because the announcement largely precludes investigators from analyzing data from the first EOS mission, AM-1, which is now scheduled for launch in 1998. Instead, NASA is asking for proposals on interdisciplinary research that primarily uses existing data sets from past satellite missions and field experiments.

The nature and membership of the EOS science teams have largely remained unchanged for 6 years. According to NASA officials, this longevity has created a perception among some Earth scientists that currently funded investigators constitute a “closed shop.” NASA attempted to correct this perception by conducting an internal program review in 1992 and 1993 and an external peer review in 1995 and 1996. The review by EOS investigators’ peers in the Earth sciences research community is not yet finished, but it could lead to the deselection and recompetition of some EOS interdisciplinary science teams. NASA opted for the peer review rather than have all the current investigations reevaluated as part of a new solicitation for proposals. NASA’s 1992-93 program review found weaknesses in many interdisciplinary science teams. The reviewers generally found that only 30 percent of 23 investigations could be rated “successful” in terms of science-related assessment measures.
They also noted that “most teams need work in documenting their scientific progress, plans, and the policy relevance of their research to the Earth Science community, as well as to NASA.” The reviewers specifically noted that 67 percent (of 24 teams) had poor management plans, 61 percent (of 23 teams) had a less than satisfactory publication record, and 57 percent (of 23 teams) needed to improve their contacts with the EOS instrument teams. The review concluded that “for most teams, the biggest factor hindering their success is their lack of a good management plan—teams that do not have their own house in order will not benefit from increased collaborations” with other interdisciplinary and instrument teams. In October 1994, the Science Executive Committee of the EOS Investigator Working Group endorsed the need for a peer review and possible turnover of teams, if this would enhance the quality of EOS investigations. The Committee, however, rejected the idea that the existing investigations should be evaluated through a new competition. It noted that a new competition could cause a loss of credibility with EOS supporters and that many interdisciplinary science teams had committed themselves “far beyond” just their science tasks. In contrast, NASA struck a different balance between continuity and change in the pre-EOS-era U.S.-Japan Tropical Rainfall Measuring Mission. The goal of the spacecraft’s three principal instruments is to measure rainfall more accurately than before, particularly over the tropical oceans. The science of a long-term investigatory group was reevaluated after 3 years by holding a new funding competition for this program. NASA and Japan’s National Space Development Agency first solicited research proposals in 1990 for a possible launch in 1994. Both agencies selected a total of 35 investigators. The two space agencies in October 1993 again solicited research proposals for a launch now scheduled for 1997. 
The space agencies selected 27 of the original investigators and added 12 new investigators to the science team. The long-term growth of the EOS research community depends, in part, on NASA’s ability to recruit graduate students and newly graduated Earth scientists to use remotely sensed data. NASA supports prospective researchers in the Earth sciences through the graduate student Global Change Fellowship program. Successful candidates can be funded for up to 3 years, at $20,000 per year, primarily for tuition support and living expenses. NASA supported 112 fellowships for the 1993-94 academic year. In September 1995, NASA also established a new investigator program as part of Mission to Planet Earth and solicited proposals for 10 to 15 interdisciplinary investigations from recent Ph.D. recipients. The proposed investigations must be based on data from existing satellite missions. NASA received 65 proposals in response to this solicitation.

NASA has acknowledged that “while some of the multi-year reductions may be accomplished without serious effect on the program, it must be stated that the achievement of several essential elements (e.g., continuity of observations for 15 years) of the program are now at significantly greater risk.” Despite this apprehension, most interdisciplinary science investigators have experienced or expect little or no effect of budgetary turbulence on their own research. In the 1992-93 program review, NASA’s investigators were generally optimistic that they could withstand EOS’ continuing budgetary turbulence. In 1995, investigators reaffirmed this optimism. As part of the 1992-93 program review, NASA asked EOS’ interdisciplinary science principal investigators to evaluate the effect changes to EOS would have on their work. The reviewers classified the 23 responding investigators’ remarks as follows: no effect (11 investigators, 48 percent); minor effect (8 investigators, 35 percent); and major effect (4 investigators, 17 percent).
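The reported percentages can be re-derived from the 23 responses (an illustrative check, not part of the original program review):

```python
# Re-deriving the reported percentages from the 23 investigators' responses.
responses = {"no effect": 11, "minor effect": 8, "major effect": 4}
respondents = sum(responses.values())  # 23 responding investigators

for label, count in responses.items():
    print(f"{label}: {count} investigators ({count / respondents:.0%})")
```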
The program review followed the cancellation of three major EOS instruments over several years: Laser Atmospheric Wind Sounder (observation of lower atmospheric winds); High-Resolution Imaging Spectrometer (identification of surface composition); and Synthetic Aperture Radar (high-resolution global measurements of the Earth’s surface). Whether scientists planned to use a canceled instrument was a major part of how they perceived the impact on their work. Some investigators also cited changes to their ongoing research resulting from little or no growth in most of their fiscal year 1994 budgets. According to a NASA official, only seven investigations received as much as a 10-percent increase in their 1994 budget above the amount for fiscal year 1992. One investigator, citing a flat budget for 1994, said that as a result, coinvestigators could not give full attention to EOS-related research and that it was “difficult for us to contemplate an accelerated or broadened attack on the global change problems we are addressing.” Another investigator noted that such a budget meant that “some research tasks have to be trimmed” and would not “allow much flexibility in terms of new ideas and initiatives.” In 1995, NASA again asked the interdisciplinary science principal investigators to assess how changes over the previous 3 years to the EOS program had affected their future and ongoing research. The scientists cited the same mix of concerns as they had previously—namely, the loss of several instruments and lack of growth in their funding. One investigator noted that a 20-percent budget reduction in 1994 “decimated our attempts to carry out field studies in collaboration with team members.” His view, however, was unique. Most investigators reported that the changes had so far created only relatively minor problems that could be adequately resolved. 
A NASA official told us that a reason for investigators’ optimism is that NASA officials consciously tried to minimize the impact of budget reductions on EOS-related science. Starting in 1996, NASA plans to solicit additional Earth science research through a new Earth System Science Pathfinder program. This effort will be based on data sets collected by new satellite missions. According to NASA officials, the Pathfinder program is intended to develop quick-turnaround, low-cost space missions for high-priority Earth sciences research not being addressed by current programs, including EOS, thus providing an opportunity to accommodate new science priorities and to increase scientific participation in Mission to Planet Earth. The administration is requesting $20 million for Pathfinder in fiscal year 1997 and plans to request $30 million, $75 million, and $75 million for fiscal years 1998, 1999, and 2000, respectively—a total of $200 million over the next 4 fiscal years. Thereafter, NASA plans to offset Pathfinder’s funding requirements with reductions generated from the introduction of lower cost technology into future Mission to Planet Earth-related research. Pathfinder’s goal is to launch one mission every year, starting in 1999. NASA estimates the life-cycle cost of each mission would not exceed $120 million and would include the cost of the launch vehicle, civil service labor, investigator support, and 2 years of spacecraft operations. However, NASA has not demonstrated that the potential value of Pathfinder’s science would exceed the potential value of additional EOS-related science if the savings allocated to Pathfinder were instead applied to EOS science.

NASA criticized our analysis and conclusions. NASA stated that our draft report underestimated the size of the EOS research community and the abilities of EOS investigators to process the large amount of data expected from EOS. We do not agree with NASA’s description of our report’s focus and scope.
Our objective was not to estimate the size of the EOS research or broader user communities, or to assess the abilities of current researchers to handle the large amount of data expected from EOS. Rather, our objective was to assess NASA’s plans for developing its basic research community, with specific focus on the number of currently funded EOS investigations. This issue is the basis for the majority of NASA’s concerns. To address NASA’s point, we revised our final report to clarify the specific focus and scope of our work. NASA said that our analysis of the number of EOS investigations did not consider the broader user community. Although NASA’s statement is correct, it was not the objective of our work to analyze the broader user community. We focused on comparing the number of NASA’s currently funded EOS-related investigations with the number of funded investigations associated with two pre-EOS-era missions. This comparison constituted our analytic framework and formed the basis of our conclusion that the magnitude of potential basic research opportunities afforded by EOS is much greater than those afforded by UARS and TOPEX, but the number of currently funded EOS investigations is relatively small compared to the number of investigations funded under the two pre-EOS-era missions. Our conclusion is consistent with NASA’s desire, as expressed in its comments on our draft report, “to expand the size of the direct EOS community,” and its actions during the course of our review to increase the number of EOS investigations in a budget-constrained environment. NASA’s comments also addressed our concern about its ability to increase the number of EOS investigations based on savings from EOS and other parts of Mission to Planet Earth. NASA stated that it has already made changes to lower EOS’ costs and that it will be able to decrease costs further while improving overall capability and maintaining data continuity. We have not evaluated NASA’s claims in this regard. 
In our draft report, we recommended that the NASA Administrator provide the Congress with an assessment of Pathfinder’s potential impact on NASA’s strategy for Earth system science research, including a determination that the potential value of Pathfinder’s investigations is expected to exceed the potential value of additional EOS investigations. NASA generally agreed with this recommendation, stating that it would provide a strategic assessment of Pathfinder. NASA also said it planned to proceed with the Pathfinder missions on the basis of already having analyzed the tradeoffs and having had its approach validated by outside review groups. The concern that prompted our recommendation in the draft report was the availability of adequate funding for EOS basic research given NASA’s funding strategy. That continues to be our concern and, in view of NASA’s position, we are changing our recommendation to the NASA Administrator to a matter for congressional consideration. Our purpose in making this change is to alert the Congress to the need to address the EOS funding issue before substantial funding commitments are made to the new Pathfinder program. NASA’s comments are in appendix V. In judging the extent to which it should support the proposed Earth System Science Pathfinder program, the Congress may wish to have NASA demonstrate that the potential value of Pathfinder investigations will exceed the potential value of additional EOS investigations that could be obtained with the same resources. To accomplish our objectives, we obtained documents related to EOS’ science program from and interviewed officials at NASA headquarters in Washington, D.C.; NASA’s Goddard Space Flight Center, Greenbelt, Maryland; and at the Jet Propulsion Laboratory, Pasadena, California. We attended the EOS Investigators Working Group meeting in June 1995 in Santa Fe, New Mexico, and the Payload Panel meeting in November 1995 in Annapolis, Maryland. 
In analyzing the development of the EOS research community, we reviewed information on pre-EOS Earth science ground- and space-based research, as well as EOS’ interdisciplinary science research. In analyzing the authorship of articles related to UARS and TOPEX/Poseidon, we used “Scisearch,” an international, multidisciplinary index to science literature. Scisearch indexes articles from approximately 4,500 scientific and technical journals. We used the scientists’ progress reports for 1992-93 and 1995 to assess whether changes to EOS have adversely affected EOS’ interdisciplinary research. We performed our work between February 1995 and February 1996 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of the report until 30 days from its issue date. At that time, we will send copies to other appropriate congressional committees; the NASA Administrator; and the Director, Office of Management and Budget. We will also make copies available to other interested parties upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Brad Hathaway, Frank Degnan, Thomas Mills, Richard Eiserman, and Richard Irving.

The National Aeronautics and Space Administration (NASA) considers the following measurement sets to be critical to preserving the Earth system science approach of the Earth Observing System (EOS) and important to making environmental policy decisions. The information in appendixes I-IV was derived from NASA sources. The formation, dissipation, and radiative properties of clouds influence the atmosphere’s response to greenhouse forcing (i.e., mechanisms that promote the greenhouse effect). The net effect of cloud forcing and feedback determines the energy budget of Earth and its cozy temperature, which supports life.
Earth’s radiation budget drives the biological and physical processes of the atmosphere, land, and ocean, which in turn affect water resources, agriculture, and food production. There is a net outflow of atmospheric moisture from the tropics to the higher latitudes. This redistribution is accomplished through evaporation and precipitation, which determine the freshwater resources for agricultural and industrial development. Tropospheric chemistry is linked to the circulation of Earth’s water (the “hydrologic cycle”), the ecosystem, and transformations of greenhouse gases in the atmosphere, thus determining the oxidizing capacity of the atmosphere for cleansing pollutants. Stratospheric chemistry measurements involve chemical reactions, interactions between the sun and the atmosphere, and the sources and sinks of gases, such as ozone, that are critical to Earth’s radiation balance. An aerosol is a fine solid or liquid particle suspended in gas, such as the atmosphere. Aerosols affect the climate through their radiative properties by serving as nuclei for the condensation of clouds. Aerosols tend to cool Earth’s atmosphere, thus offsetting some of the warming effects of greenhouse gases. Along with atmospheric humidity, atmospheric temperature is used in short-term weather prediction and long-term climate monitoring. Improved measurement accuracy, precision, and spatial and temporal coverage will enhance weather prediction skills beyond current limits and reduce weather prediction “busts,” or failures. See “Atmospheric Temperature.” Lightning measurements will include the distribution and variability of both cloud-to-cloud and cloud-to-ground lightning. Electrical discharge contributes to the formation and dissipation of certain trace gases in the atmosphere. Sustained changes in the total radiation output from the sun could contribute to significant climate changes on Earth over time. Solar radiation is the main source of energy for biological activities on Earth. 
Out of the entire spectrum of radiation that Earth receives from the sun, the ultraviolet portion is the dominant energy source for the Earth’s atmosphere. Small changes in the radiation field have an important effect on atmospheric temperature, chemistry, structure, and dynamics. Excess ultraviolet energy on the Earth’s surface is harmful to living organisms. Land use includes monitoring crops for efficient irrigation and pest control, public lands for good stewardship, and urban areas for development. Some changes in land use, such as deforestation and biomass burning, reduce the standing stock of vegetation, release carbon dioxide into the atmosphere, and reduce the capacity for the removal of carbon dioxide from the atmosphere. Terrestrial vegetation absorbs atmospheric carbon dioxide by photosynthesis to offset its greenhouse warming effect. Terrestrial surface temperature controls the formation and distribution of atmospheric water vapor and also contributes to the determination of cloud amount. In addition, surface temperatures control the biological activity and health of agricultural fields, forests, and other natural ecosystems. Biomass burning releases carbon dioxide into the atmosphere and also increases concentrations of other harmful gases, such as carbon monoxide and nitrogen oxides. Land cover monitoring can be used to assess potential fire hazards and monitor fire recovery in natural ecosystems. The volcanic ejection of aerosols and particulates into the atmosphere can increase precipitation and ozone destruction and cause the lowering of global temperatures. Volcanic activities also contribute to the formation of continents. Surface wetness controls the availability of fresh water resources for agricultural and industrial activities. Sea surface temperature measurements are important to understanding heat exchange between the ocean and the atmosphere. 
Such an understanding will contribute to the development of accurate general circulation models, which enhance our understanding of seasonal and interannual climate variations that contribute to hurricanes, floods, and other natural hazards. Planktonic marine organisms and dissolved organic matter play a major role in the carbon cycle, as they incorporate, or “fix,” about as much carbon as land plants. This contributes to removing carbon dioxide from the atmosphere and to offsetting the greenhouse effect. Surface winds over the oceans contribute to ocean circulation and the interaction between the air and sea, which affect short-term and long-term climate variations. Sea height and ocean circulation are related. Ocean circulation transports water, heat, salt, and chemicals around the planet. Accurate information about these circulation patterns should contribute to understanding the oceans’ impact on weather, climate, and marine life, and thus the fisheries industry and other maritime commerce. Measurements of the polar ice caps, including ice sheet elevation and ice volume, will determine the contribution of the ice sheets to sea-level variation. These data will also contribute to understanding the role of the polar ice caps in Earth’s freshwater and energy budgets, as well as climate fluctuations. Measurements of the extent and thickness of sea ice will help determine atmospheric warming. Sea ice measurements will also be useful to operational ice forecasting centers, thus affecting maritime commerce. Snow cover, extent, and duration determine fresh water resources, especially in Alpine regions of the world. Active Cavity Radiometer Irradiance Monitor (ACRIM) monitors the variability of total solar irradiance. Atmospheric Infrared Sounder (AIRS) measures atmospheric temperature and humidity. 
AMSR (Japan) Advanced Microwave Scanning Radiometer (AMSR) observes atmospheric and oceanic water vapor profiles and determines precipitation, water vapor distribution, cloud water, sea surface temperature, sea ice, and sea surface wind speed. Advanced Microwave Sounding Unit (AMSU) measures atmospheric temperature. ASTER (Japan) Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) provides high spatial resolution images of the land surface, water, ice, and clouds. Clouds and the Earth’s Radiant Energy System (CERES) measures Earth’s radiation budget and atmospheric radiation. DFA (France) Dual Frequency Altimeter (DFA) maps the topography of the sea surface and its impact on ocean circulation. Earth Observing Scanning Polarimeter (EOSP) globally maps radiance and linear polarization of reflected and scattered sunlight to measure atmospheric aerosols. Enhanced Thematic Mapper Plus (ETM+) provides high spatial resolution images of the land surface, water, ice, and clouds. Geoscience Laser Altimeter System (GLAS) measures ice sheet topography, cloud heights, and aerosol vertical structure. HIRDLS (UK-US) High-Resolution Dynamics Limb Sounder (HIRDLS) observes gases and aerosols in the troposphere, stratosphere, and mesosphere to assess their role in the global climate system. Landsat Advanced Technology Instrument (LATI) provides high spatial resolution images of the land surface, water, ice, and clouds beyond Landsat ETM+. Lightning Imaging Sensor (LIS) measures the distribution and variability of lightning. Microwave Humidity Sounder (MHS) provides atmospheric water vapor profiles. Multi-Angle Imaging Spectroradiometer (MISR) measures the top-of-the-atmosphere, cloud, and surface angular reflectance. Microwave Limb Sounder (MLS) measures chemistry from the upper troposphere to the lower thermosphere. 
Moderate-Resolution Imaging Spectroradiometer (MODIS) studies biological and physical processes in the atmosphere, the oceans, and on land. MOPITT (Canada) Measurements of Pollution in the Troposphere (MOPITT) measures upwelling radiance to produce tropospheric carbon monoxide profiles and total column methane. Microwave Radiometer (MR) provides atmospheric water vapor measurements for DFA. ODUS (Japan) Ozone Dynamics Ultraviolet Spectrometer (ODUS) measures total column ozone. Stratospheric Aerosol and Gas Experiment III (SAGE III) provides profiles of aerosols, ozone, and trace gases in the mesosphere, stratosphere, and troposphere. Provides all-weather measurements of ocean surface wind speed and direction. Solar Stellar Irradiance Comparison Experiment (SOLSTICE) measures full-disk solar ultraviolet irradiance. Tropospheric Emission Spectrometer (TES) provides profiles of all infrared active species from Earth’s surface to the lower stratosphere. Mission continues Landsat land-imaging satellite series. Future Landsat-type instrument is planned for AM-2 and AM-3. Morning equator-crossing mission (AM series) will study clouds, aerosols, and radiation balance; the terrestrial ecosystem; land use; soils; terrestrial energy/moisture; tropospheric chemical composition; volcanoes; and ocean productivity. ASTER and MOPITT will be on AM-1 only. EOSP and LATI will be on AM-2 and AM-3 only. Afternoon equator-crossing mission (PM series) will study cloud formation, precipitation, and radiative properties; air-sea fluxes of energy and moisture; sea-ice extent; and ocean primary productivity. The PM series will carry prototypes of future operational weather satellite instruments. Chemistry mission (CHEM series) will study atmospheric chemical composition; chemistry-climate interactions; and air-sea exchange of chemicals and energy. ODUS will be on CHEM-1 only. A later CHEM flight may include SAGE III.
Laser altimeter mission (LaserALT series) will study ice sheet mass balance. Radar altimeter mission (RadarALT series) will study ocean circulation. RadarALT is a joint mission with France. SAGE III instrument carried on International Space Station (ISS) and Russian Meteor satellite will study distribution of aerosols, ozone profiles, and greenhouse gases in the lower stratosphere. Tropical Rainfall Measuring Mission (TRMM) will study precipitation and Earth radiation budget in the tropics and high latitudes. TRMM is a joint mission with Japan. Japanese Advanced Earth Observing System II (ADEOS II) satellite carrying NASA scatterometer instrument will study ocean surface wind vectors. Mission will monitor the variability of total solar irradiance and is currently planned to fly on a series of small satellites. Mission will study Earth’s radiation budget and atmospheric radiation. Mission will study full-disk solar ultraviolet irradiance. [Timeline figure, 1997 through 2009: bars denote periods during which at least one copy of the indicated instrument is in orbit.] EOS science objectives are listed below, along with the interdisciplinary investigations designed to address them. These investigations are intended to cross discipline boundaries, and therefore, address more than one science objective. The Water and Energy Cycles objective covers the formation, dissipation, and radiative properties of clouds, which influence the atmosphere’s response to greenhouse forcing. In addition, Water and Energy Cycles include large-scale hydrology and moisture processes, such as precipitation and evaporation. National Center for Atmospheric Research Project to Interface Modeling on Global and Regional Scales With EOS Observations. This investigation is intended to use surface and atmospheric data sources to improve climate models and their predictions of global change.
Components of climate models to be addressed include surface-atmosphere interactions, the hydrologic cycle, global energy balance, cloud and aerosol radiative fields, and atmospheric chemical cycles. Climate Processes Over the Oceans. Climate is strongly influenced by the amount and distribution of water vapor, liquid water, and ice suspended in the atmosphere. This atmospheric water, and the climate over land areas, is largely controlled by processes occurring over the oceans. This investigation will improve modeling of both the atmosphere and its interactions with the ocean. It will address the roles of circulation, clouds, radiation, water vapor, and precipitation in climate change as well as the role of ocean-atmosphere interactions in the energy and water cycles. Hydrologic Processes and Climate Interdisciplinary Investigation. The global water and energy cycles link the atmosphere, land, and ocean. In addition, water supports life and plays a crucial role in climate regulation. This investigation is to enhance our understanding of the physical processes that affect these cycles. The Processing, Evaluation, and Impact on Numerical Weather Prediction of AIRS, AMSU, and MODIS Data in the Tropics and Southern Hemisphere. This investigation involves the development of algorithms and techniques to improve atmospheric science, specifically numerical weather prediction models, using three EOS instruments. Investigation of the Atmosphere-Ocean-Land System Related to Climate Processes. The atmosphere, ocean, and land interact with each other through the exchanges of heat energy, momentum, and water substance. These interactions influence climate. This investigation will examine the atmosphere-ocean-land system by pursuing seven supporting studies that will involve both observations and modeling. The Development and Use of a Four-Dimensional Atmospheric-Ocean-Land Data Assimilation System for EOS. 
This investigation will incorporate all available data, from a variety of sources, into a single model of the Earth system. This model can then be used to project the Earth system beyond the range of actual observations, estimate expected values of observations to assess instrument quality, provide products for environmental studies, and supplement observations by estimating quantities that are difficult or impossible to observe. An Interdisciplinary Investigation of Clouds and the Earth’s Radiant Energy System: Analysis. This investigation will examine the role of clouds and radiative energy balance in the climate system. Studies include cloud feedback mechanisms that can greatly modify the response of the climate system to increased greenhouse gases. The Oceans objective covers the exchange of energy, water, and chemicals between the ocean and atmosphere, and between the upper layers of the ocean and the deep ocean. Coupled Atmosphere-Ocean Processes and Primary Production in the Southern Oceans. The southern ocean plays an important role in both the carbon cycle and heat exchange between the ocean and atmosphere. This investigation will focus on developing predictive models so we can better understand the effects of changes in the physical forcing of the ocean (e.g., small shifts in the location of westerly wind systems may affect ocean processes). Biogeochemical Fluxes at the Ocean/Atmosphere Interface. Solar radiation impinging on the oceans creates chemical, physical, and biological effects. One result is the creation of gases, such as carbon dioxide, dimethyl-sulfide, and carbon monoxide, which are then circulated by wind and water. This investigation will develop models to better understand these gases and the influence of oceanic processes upon them. Interdisciplinary Studies of the Relationships Between Climate, Ocean Circulation, Biological Processes, and Renewable Marine Resources. 
This investigation will study (1) the ocean’s role in climate change, particularly in the Australian region; (2) the influence of the carbon cycle in Australia’s waters on the global carbon cycle; and (3) changes in Australian oceanography and the implications for marine ecosystems, including commercial fisheries. The Role of Air-Sea Exchanges and Ocean Circulation in Climate Variability. Exchanges of water, momentum, and heat at the interface of the ocean and atmosphere drive the transport and change the storage of heat, water, and greenhouse gases, thus moderating the world’s climate. This investigation will study these exchanges and ocean circulation in order to improve our understanding of natural global changes and enable us to discern human-induced effects. Polar Exchange at the Sea Surface: the Interaction of Ocean, Ice, and Atmosphere. This is an investigation of energy exchanges in Earth’s polar regions, both at the atmosphere-ice-ocean interface and lower latitudes. It will study the role these processes play in global oceanic and atmospheric circulation and help improve our understanding of whether polar regions show any sign of climate change. Middle and High Latitude Oceanic Variability Study. This investigation will examine the variability of the atmosphere’s influence on the oceans, the effect on the oceanic response, and the resulting effect on biological productivity in the oceans. The study will focus on the mid- to high-latitude regions of the oceans. It will examine changes in the surface fluxes of momentum, heat, water, and radiation, as well as the variability of ocean circulation and biological activity. Earth System Dynamics: the Determination and Interpretation of the Global Angular Momentum Budget Using EOS. Momentum and mass transport among the atmosphere, oceans, and solid Earth produce changes in the planet’s rotation and gravity field. 
Predictions of these changes based on the mass and motion of air and water can be compared with observations to improve models of the interactions of the oceans, atmosphere, and solid Earth. This investigation will examine these interactions as represented by the exchange of angular momentum, mass, and energy among these components. The Chemistry of the Troposphere and Lower Stratosphere objective includes links to the hydrologic cycle and ecosystems, transformations of greenhouse gases in the atmosphere, and interactions inducing climate change. Interannual Variability of the Global Carbon, Energy, and Hydrologic Cycles. Analysis of the carbon, energy, and water cycles may increase the predictability of climate change. The goals of this investigation are to (1) understand contemporary climate variability and trends and (2) contribute to our ability to predict the impact of human activities on the climate. Changes in Biogeochemical Cycles. Models of biogeochemical cycles can be used to project the interactions of atmospheric composition, climate, terrestrial and aquatic ecosystems, ocean circulation and sea level, and human-induced effects. This investigation will develop models and databases to describe the dynamics of water, carbon, nitrogen, and trace gases over seasonal-to-century time scales. The Land Surface Hydrology and Ecosystem Processes objective covers sources and sinks of greenhouse gases, the exchange of moisture and energy between the land surface and atmosphere, and changes in land cover. Investigations in this category could result in improved estimates of runoff over the land surface and into the oceans. Global Water Cycle: Extension Across the Earth Sciences. The global water cycle stimulates, regulates, and responds to the other components of the Earth system on regional and global scales. 
This investigation is aimed at developing a hierarchy of models, using EOS data, that will contribute to our understanding of cloud cover and radiative transfer, as well as energy and moisture changes at the interface of the atmosphere with the oceans, cryosphere, and land surface. These models will contribute to the prediction of changes in water balance and climate. Long-Term Monitoring of the Amazon Ecosystems Through EOS: From Patterns to Processes. Natural and human-induced changes in the Amazon are expected to disrupt regional vegetation distributions, alter the physical and chemical characteristics of the continental river system, and change regional hydroclimatology, possibly influencing global climate patterns. The aim of this investigation is to understand the circulation of water, sediment, and nutrients through the basin. Northern Biosphere Observation and Modeling Experiment. Natural and human-induced climate changes in the northern latitudes will affect terrestrial ecosystems, and feedbacks from these changing systems will influence the climate. The goal of this study is to better understand the relationship between the climate and northern ecosystems over a range of spatial scales. Hydrology, Hydrochemical Modeling, and Remote Sensing in Seasonally Snow-Covered Alpine Drainage Basins. Seasonally snow-covered Alpine regions are important to the hydrologic cycle, as they are a major source of water for runoff, ground water recharge, and agriculture. This investigation will monitor conditions in Alpine basins and develop models to better understand the cycling of water, chemicals, and nutrients in these areas. Climate, Erosion, and Tectonics in Mountain Systems. In mountain belts, climatic and tectonic processes produce Earth’s highest rates of weathering and erosion. Alpine regions are important to downstream hydrology, providing both inorganic and organic material to lowland areas. 
This investigation will observe the effects of climate changes on Alpine land processes and develop models to improve our understanding of these interactions. The Hydrologic Cycle and Climatic Processes in Arid and Semiarid Lands. Knowledge of the hydrologic cycle will help scientists predict the effects of natural and human-induced climate change. This investigation will study the hydrologic cycle and climatic processes in arid and semiarid lands, where agricultural productivity is especially sensitive to changes in the cycle. Using Multi-Sensor Data to Model Factors Limiting Carbon Balance in Global Arid and Semiarid Land. This investigation will address the role of arid and semiarid lands in processes affecting the global environment, such as the production and consumption of trace gases. It will also examine the vulnerability of these lands to climate change in terms of productivity and soil quality, and develop predictive models of ecosystem function for dry lands. Biosphere-Atmosphere Interactions. This investigation is to improve our understanding of the role of the terrestrial biosphere in global change. It will cover short-term interactions between the land and atmosphere, such as biophysics, as well as long-term interactions, such as ecology and human-induced impacts. The goal of the investigation is to understand and predict the response of the biosphere-atmosphere system to global change, specifically to the increase in atmospheric carbon dioxide. Glaciers and Polar Ice Sheet measurements could contribute to predictions of sea level and global water balance. Use of the Cryospheric System to Monitor Global Change in Canada. The cryosphere is an important component of the global climate system, and better understanding of cryospheric processes may improve global climate models. 
This investigation seeks to understand cryospheric variations, develop models that will improve our knowledge of the role of the cryosphere in the climate system, and use various cryospheric data sets to support climate monitoring and model development. The Chemistry of the Middle and Upper Stratosphere objective includes chemical reactions, solar-atmosphere relations, and sources and sinks of radiatively important gases. Observational and Modeling Studies of Radiative, Chemical, and Dynamical Interactions on the Earth’s Atmosphere. Understanding the circulation, transformations, and sources and sinks of gases, such as carbon dioxide, water vapor, ozone, and chlorofluorocarbons, is important in dealing with the issues of global warming, ozone depletion, and the coupling of atmospheric chemistry and climate. This investigation seeks to improve our understanding of the fundamental processes influencing these gases in the atmosphere and contribute to the development of a predictive capability for global change studies. Chemical, Dynamical, and Radiative Interactions Through the Middle Atmosphere and Thermosphere. Carbon dioxide and ozone play important radiative roles in the middle atmosphere. Ozone absorbs ultraviolet radiation, heating the middle atmosphere and shielding the biosphere from dangerous ultraviolet dosages. The interactions of other gases, as well as temperature and middle atmosphere circulation, affect ozone. This investigation will improve our understanding of interactions in the middle atmosphere and our ability to predict long-term atmospheric trends. Investigation of the Chemical and Dynamical Changes in the Stratosphere. Chemical changes in the atmosphere are occurring largely as a result of changes in the surface emission of trace gases. This investigation will focus on the response of ozone to trace gas changes, isolating natural from human-induced changes to determine their effects on ozone and to assess radiative and dynamical feedbacks. 
The Solid Earth objective deals with volcanoes and their role in climate change. A Global Assessment of Active Volcanism, Volcanic Hazards, and Volcanic Inputs to the Atmosphere from EOS. The injection of material from volcanoes into the atmosphere can affect the local or hemispheric climate. This investigation will improve our understanding of the processes behind volcanic eruptions; study the injection of sulfur dioxide, water vapor, carbon dioxide, and other gases into the atmosphere; and place eruptions into the context of the regional tectonic setting of the volcano. The following are GAO’s comments on NASA’s letter dated May 17, 1996. Investigations to begin in 1996 cannot depend solely on data from EOS instruments because the earliest launch dates are planned for 1997 (for ) and 1998 (for EOS-AM-1). Instead, research plans should be based on use of existing data sets . . . or expected data from relevant near-term (1996-1997) satellite missions and field experiments. 8. Our draft report discussed the 1995-96 peer review, noting that the review could lead to possible deselection and recompetition of some EOS science teams. 9. We incorporated the amount of funds involved in the program into the report’s text. 10. We included NASA’s current estimate of planned EOS spending through fiscal year 2000 in the report’s text. 11. We revised our report to include NASA’s comments about lower program baseline costs and its confidence that it will decrease costs further. 12. We deleted the material NASA is referring to. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. 
Box 6015, Gaithersburg, MD 20884-6015. Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC; by calling (202) 512-6000; by fax at (301) 258-4066; or by TDD at (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | Pursuant to a congressional request, GAO reviewed the National Aeronautics and Space Administration's (NASA) plans for funding its Earth Observing System (EOS) and developing EOS-related basic research, focusing on: (1) the current number of EOS science investigations; (2) researchers' views on whether changes to EOS have adversely affected their ability to carry out their interdisciplinary earth sciences investigations; and (3) the Earth System Science Pathfinder program and its potential impact on future EOS investigations. GAO found that: (1) NASA funds 29 interdisciplinary science investigations that use data from EOS instruments in more than one earth science discipline; (2) to expand the EOS research community, NASA plans to maintain an open data access policy, add investigations, reevaluate current science investigations, and recruit new investigators; (3) most EOS interdisciplinary scientists believe that EOS budgetary reductions have little or no effect on their work; and (4) NASA plans to use anticipated savings resulting from improved technology to fund more investigations and request a total of $200 million over the next 4 fiscal years for its Earth System Science Pathfinder program. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Distinctions between cruise missiles and UAVs are becoming blurred as the militaries of many nations, in particular the United States, add missiles to traditional reconnaissance UAVs and develop UAVs dedicated to combat missions. A cruise missile consists of four major components: a propulsion system, a guidance and control system, an airframe, and a payload. The technology for the engine, the autopilot, and the airframe could be similar for both cruise missiles and UAVs, according to a 2000 U.S. government study of cruise missiles. Figure 1 shows the major components of a cruise missile. Cruise missiles provide a number of military capabilities. For example, they present significant challenges for air and missile defenses. Cruise missiles can fly at low altitudes to stay below radar and, in some cases, hide behind terrain features. Newer missiles are incorporating stealth features to make them less visible to radars and infrared detectors. Modern cruise missiles can also be programmed to approach and attack a target in the most efficient manner. For example, multiple missiles can attack instantaneously from different directions. Furthermore, land attack cruise missiles may fly circuitous routes to get to their targets, thereby avoiding radar and air defense installations. UAVs are available in a variety of sizes and shapes, propeller-driven or jet propelled, and can be straight-wing aircraft or have tilt-rotors like helicopters. They can be as small as a model aircraft or as large as a U-2 manned reconnaissance aircraft (see fig. 2). U.S. policy on the proliferation of cruise missiles and UAVs is expressed in U.S. commitments to the MTCR and Wassenaar Arrangement. These multilateral export control regimes are voluntary, nonbinding arrangements among like-minded supplier countries that aim to restrict trade in sensitive technologies. 
Regime members agree to restrict such trade through their national laws and regulations, which set up systems to license the exports of sensitive items. The four principal regimes are the MTCR; the Wassenaar Arrangement, which focuses on trade in conventional weapons and related items with both civilian and military (dual-use) applications; the Australia Group, which focuses on chemical and biological technologies; and the Nuclear Suppliers Group, which focuses on nuclear technologies. The United States is a member of all four regimes. Regime members conduct a number of activities in support of the regimes, including (1) sharing information about each other’s export licensing decisions, including certain export denials and, in some cases, approvals, and (2) adopting common export control practices and control lists of sensitive equipment and technology into national laws or regulations. Exports of commercially supplied American-made cruise missiles, military UAVs, and related technology are transferred pursuant to the Arms Export Control Act, as amended, and the International Traffic in Arms Regulations, implemented by State. Government-to-government transfers are made pursuant to the Foreign Assistance Act of 1961, as amended, and subject to DOD guidance. Exports of dual-use technologies related to cruise missiles and UAVs are transferred pursuant to the Export Administration Act of 1979, as amended, and the Export Administration Regulations, implemented by Commerce. Bureaus in two U.S. agencies are responsible for the initial enforcement of export control laws. The Bureau of Immigration and Customs Enforcement (ICE) in the Department of Homeland Security conducts investigations enforcing the Arms Export Control Act, which is administered by the State Department.
ICE combines the enforcement and investigative arms of the Customs Service, the investigative and enforcement functions of the former Immigration and Naturalization Service, and the Federal Protective Service as part of the Department of Homeland Security. ICE shares responsibility with Commerce’s Bureau of Industry and Security for enforcing the Export Administration Act. ICE and the Bureau of Industry and Security use enforcement tools such as investigations of purported violations of law and regulation and interdictions of suspected illicit shipments of goods. Investigations can result in criminal prosecutions, fines, or imprisonment or in export denial orders, which bar a party from exporting any U.S. items for a specified period of time. The Arms Export Control Act, as amended in 1996, requires the President to establish a program for end-use monitoring of defense articles and services sold or exported under the provisions of the act and the Foreign Assistance Act. This requirement states that, to the extent practicable, end-use monitoring programs should provide reasonable assurance that recipients comply with the requirements imposed by the U.S. government on the use, transfer, and security of defense articles and services. In addition, monitoring programs, to the extent practicable, are to provide assurances that defense articles and services are used for the purposes for which they are provided. The President delegated this authority to the Secretaries of State and Defense. The proliferation of cruise missiles and UAVs poses a growing threat to U.S. national security. Both can be used to attack U.S. naval interests, the U.S. homeland, and forces deployed overseas. Cruise missiles and UAVs have significant military capabilities, including surveillance and attack, which the United States has demonstrated during military engagements in Afghanistan and Iraq. In addition, U.S. 
government projections show that the numbers of producers and exporters of cruise missiles and UAVs will increase and that more countries of concern will possess and begin to export them. The growing availability of these weapons, and of related components and technology that are not readily controllable, makes it easier for countries of concern and terrorists to acquire or build at least rudimentary cruise missile or UAV systems. Although cruise missiles and UAVs provide important capabilities for the United States and its friends and allies, in the hands of U.S. adversaries they pose substantial threats to U.S. interests. First, anti-ship cruise missiles threaten U.S. naval forces deployed globally. Second, land-attack cruise missiles have the potential, over the long term, to threaten the continental United States and U.S. forces deployed overseas. Finally, UAVs represent an inexpensive means of launching chemical and biological attacks against the United States and allied forces and territory. Cruise missiles pose a current and increasing threat to U.S. naval vessels. For example, there are more than 100 existing and projected missile varieties (including sub- and supersonic, high- and low-altitude, and sea-skimming models) with ranges of about 185 miles or more. We reported in 2000 that the next generation of anti-ship cruise missiles—most of which are now expected to be fielded by 2007—will be equipped with advanced target seekers and stealthy design. These features will make them even more difficult to detect and defeat. Land-attack cruise missiles pose a long-term threat to the U.S. homeland and U.S. forces deployed overseas. Because land-attack cruise missiles suitable for long-range missions require sophisticated guidance and complicated support infrastructures, they have historically been found almost exclusively in superpower arsenals.
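The "185 miles" range figure cited above converts to just under 300 kilometers; a quick arithmetic check (the conversion factor is the standard kilometers-per-statute-mile value):

```python
MILES_TO_KM = 1.609344  # kilometers per statute mile

range_miles = 185                      # anti-ship cruise missile range cited in the text
range_km = range_miles * MILES_TO_KM   # ~297.7 km

print(round(range_km))  # 298
```

In other words, the upper end of the cited range sits near the 300-kilometer mark.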
According to an unclassified summary of a national intelligence estimate from December 2001, several countries are technically capable of developing a missile launch mechanism to station on forward-based ships or other platforms so that they can launch land-attack cruise missiles against the United States. Technically, cruise missiles can be launched from fighter, bomber, or even commercial transport aircraft outside U.S. airspace. According to the National Air Intelligence Center, defending against land attack cruise missiles will strain air defense systems. Moreover, cruise missiles are capable of breaking through U.S. defenses and inflicting significant damage and casualties on U.S. forces, according to the Institute for Foreign Policy Analysis’ October 2000 study. UAVs pose a longer-term threat as accurate and inexpensive delivery systems for chemical and biological weapons and are increasingly sought by nonstate actors, according to U.S. government and other nonproliferation experts. For example, the U.S. government reported its concern over this threat in various meetings and studies. The Acting Deputy Assistant Secretary of State for Nonproliferation testified in June 2002 that UAVs are potential delivery systems for WMD, and are ideally suited for the delivery of chemical and biological weapons given their ability to disseminate aerosols in appropriate locations at appropriate altitudes. He added that, although the primary concern has been that nation-states would use UAVs to launch WMD attacks, there is potential for terrorist groups to produce or acquire small UAVs and use them for chemical or biological weapons delivery. At least 70 nations possess some type of cruise missile, mostly short-range, anti-ship missiles armed with conventional, high-explosive warheads, according to a U.S. government study. Estimates of the total number of cruise missiles place the world inventory at a minimum of 75,000. 
Countries that export cruise missiles currently include China, France, Germany, Israel, Italy, Norway, Russia, Sweden, United Kingdom, and the United States. Nations that manufacture but do not yet export cruise missiles currently include Brazil, India, Iran, Iraq, North Korea, South Africa, and Taiwan. None of these nonexporting manufacturing countries is a member of the Wassenaar Arrangement, and only Brazil and South Africa are in the MTCR. The number of cruise missile exporters is expected to grow with producers such as India and Taiwan making their missiles available for export. Currently, at least 12 countries are believed to be developing land-attack cruise missiles; some of these new systems will be exported. France, for example, signed a deal with the United Arab Emirates (UAE) to export a type of cruise missile. By 2005, six countries of concern will have acquired land-attack capabilities, up from only three in 2000, according to the National Air Intelligence Center. Furthermore, cruise missile inventories are projected to increase through 2015 and one to two dozen countries probably will possess a land-attack cruise missile capability by this date, according to an unclassified National Intelligence Estimate. While both land-attack and anti-ship cruise missile inventories are projected to increase, land-attack cruise missile inventories are expected to experience a significantly higher percentage of growth. According to defense industry sources, interest has picked up dramatically from countries all over the world for acquiring and developing even the simplest UAV technology and is expected to continue, further diffusing this technology. Forty-one countries operate about 80 types of UAV, primarily for reconnaissance. Currently, some 32 nations are developing or manufacturing 250 models of UAVs. Several countries involved in key aspects of the UAV industry are not members of the MTCR. For example, 13 non-MTCR countries develop and manufacture UAVs. 
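The regime-membership claims above lend themselves to a simple set check. The country lists below are copied from the text; the regime rosters are deliberately restricted to those countries and are not complete membership lists:

```python
# Nonexporting cruise missile manufacturers named in the text.
nonexporting_manufacturers = {
    "Brazil", "India", "Iran", "Iraq", "North Korea", "South Africa", "Taiwan",
}

# Regime membership *among these countries only*, per the text.
mtcr_members = {"Brazil", "South Africa"}
wassenaar_members = set()  # the text states none of them belong

in_mtcr = nonexporting_manufacturers & mtcr_members
in_wassenaar = nonexporting_manufacturers & wassenaar_members

print(sorted(in_mtcr))  # ['Brazil', 'South Africa']
print(in_wassenaar)     # set()
```

The intersections reproduce the statement in the text: only Brazil and South Africa of the nonexporting manufacturers are in the MTCR, and none are in the Wassenaar Arrangement.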
Countries of WMD proliferation concern, such as China, Russia, and Pakistan, are among the 32 countries developing and expected to export UAVs. Cruise missiles and UAVs can be acquired in several ways, including purchase of complete systems and conversion of existing systems into more capable weapons. Acquisition of commercially available dual-use technologies has made development of new systems and conversion of existing systems more feasible. Purchasing complete missile systems provides the immediate capability of fielding a proven weapon. Complete cruise missiles can be acquired from a variety of sources. For example, China and Russia have sold cruise missiles to Iran, Iraq, Libya, North Korea, and Syria. In addition, France has widely exported the Exocet, now in service in more than 29 countries. Israel, Italy, Norway, Sweden, the United Kingdom, and the United States have also exported anti-ship cruise missiles. Various government and academic studies have raised concerns that the wide availability of commercial items, such as global positioning system receivers and lightweight engines, allows both countries and nonstate actors to enhance the accuracy of their systems, upgrade to greater range or payload capabilities, and convert certain anti-ship cruise missiles into land-attack cruise missiles. Thus, less capable and expensive systems could be more easily improved to attack targets not currently accessible, especially on land. Although not all cruise missiles can be modified into land-attack cruise missiles because of technical barriers, specific cruise missiles can and have been. For example, a 1999 study outlined how the Chinese Silkworm anti-ship cruise missile had been converted into a land-attack cruise missile. Furthermore, the Iraq Survey Group reported in October 2003 that it had discovered 10 Silkworm anti-ship cruise missiles modified to become land-attack cruise missiles and that Iraq had fired 2 of these missiles at Kuwait.
Many issues concerning modification of cruise missiles also apply to UAVs, according to one industry group. Larger UAVs are more adaptable to change. Although several experts said that it is more expensive and difficult to modify an existing aircraft into a UAV than to develop one from scratch, some countries, such as Iraq, developed programs to convert manned aircraft into UAVs. Some experts also expressed concerns over adding autopilots to small aircraft to turn them into UAVs that could deliver chemical or biological weapons. The U.S. government generally uses two key nonproliferation tools—multilateral export control regimes and national export controls—to address cruise missile and UAV proliferation, but both tools have limitations. The United States and other governments have traditionally used multilateral export control regimes, principally the MTCR, to address missile proliferation. However, despite successes in strengthening controls, the growing capability of countries of concern to develop and trade technologies used for WMD limits the regime’s ability to impede proliferation. The U.S. government uses its national export control authorities to address missile proliferation but finds it difficult to identify and track commercially available items not covered by control lists. Moreover, a gap in U.S. export control regulations could allow subnational actors to acquire American cruise missile or UAV technology for missile proliferation or terrorist purposes without violating U.S. export control laws or regulations. The United States has other nonproliferation tools to address cruise missile and UAV proliferation—diplomacy, sanctions, and interdiction of illicit shipments of items—but these tools have had unclear results or have been little used. The United States and other governments have used the MTCR, and, to some extent, the Wassenaar Arrangement, as the key tools to address the proliferation of cruise missiles and UAVs.
While the United States has had some success in urging these regimes to focus on cruise missiles and UAVs, new suppliers who do not share regime goals limit the regimes’ ability to impede proliferation. In addition, there has been less consensus among members to restrict cruise missiles and UAVs than to restrict ballistic missiles. The MTCR is principally concerned with controlling the proliferation of missiles with a range of at least 300 kilometers and a payload of at least 500 kilograms, or with any payload capable of delivering chemical or biological warheads. MTCR members seek to restrict exports of sensitive technologies by periodically reviewing and revising a commonly accepted list of controlled items, such as lightweight turbojet and turbofan engines, or materials and devices for stealth technology usable in missiles. The Wassenaar Arrangement seeks to limit transfers of conventional arms and dual-use goods and technologies that could contribute to regional conflict. Military UAVs below MTCR capability levels of 300 kilometers range and 500 kilograms payload are included on the Wassenaar Munitions List. DOD officials noted that MTCR attempts to impede the proliferation of UAVs capable of delivering WMD, while Wassenaar covers conventional weapons and items with a military function. In recent years, heightened awareness of the threat of chemical and biological weapons and terrorists has increased members’ interest in cruise missile and UAV controls, according to State. MTCR control lists were revised between 1997 and 2002 to adopt six of eight U.S. proposals to include additional items related to cruise missile and UAV technologies. Members agreed in 2002 to adopt (1) expanded controls on small, fuel-efficient gas turbine engines, (2) a new control on integrated navigation systems, and (3) a new control on UAVs designed or modified for aerosol dispensing.
At the September 2003 MTCR Annual Plenary, members agreed to add to the control list complete UAVs designed or modified to deliver aerosol payloads greater than 20 liters. In the Wassenaar Arrangement, the United States and other members during 2003 made several proposals for new controls related to UAVs and short-range cruise missiles and their payloads. Once changes are officially accepted, MTCR and Wassenaar members are expected to incorporate the changed control lists into their own national export control laws and regulations. While including an item on a control list does not preclude its export, members are expected to more carefully scrutinize listed items pending decisions on their export. They are also expected to notify other members when denying certain export licenses for listed items. Despite the efforts of these regimes, nonmembers such as China and Israel continue to acquire, develop, and export cruise missile or UAV technology. The growing capability of nonmember supplier countries to develop technologies used for WMD and trade them with other countries of proliferation concern undermines the regimes’ ability to impede proliferation. For example, China has sold anti-ship cruise missiles to Iran and Iraq (see fig. 3). Israel also reportedly sold the Harpy UAV to India, according to a Director of Central Intelligence report in 2003. In addition to the limitations posed by nonmember suppliers, some nonproliferation experts and foreign government officials noted that MTCR’s effectiveness has been limited because there has been less consensus among MTCR and Wassenaar members to restrict cruise missiles and UAVs than to restrict ballistic missiles. MTCR members have not always agreed with each other’s interpretations of the MTCR guidelines and control lists concerning cruise missiles.
Specifically, members have had different views on how to measure the range and payload of cruise missiles and UAVs, making it difficult to determine when a system has the technical characteristics that trigger more stringent export controls under MTCR guidelines. For example, cruise missiles can take advantage of more fuel-efficient flight at higher altitudes to gain substantially longer ranges than manufacturers and exporting countries advertise. Even with the new definition of range that the MTCR adopted in 2002, different interpretations remain among members over whether particular cruise missiles could be modified to achieve greater range. In one case, the United States and France disagreed about France’s proposed sale of its Black Shaheen cruise missile to the United Arab Emirates, according to French and U.S. government officials and nonproliferation experts (see fig. 4). In a second case, members have raised questions about Russia’s assistance to India, a nonmember, to develop the Brahmos cruise missile project and called for further discussion of the system’s technical capabilities. In October 2002, we reported on other limitations that impede the ability of the multilateral export control regimes, including MTCR and the Wassenaar Arrangement, to achieve their nonproliferation goals. For example, we found that MTCR members may not share complete and timely information, such as members’ denied export licenses, in part because the regime lacks an electronic data system to send and retrieve such information. Wassenaar Arrangement members share export license approval information but collect and aggregate it to such a degree that it cannot be used constructively. Both MTCR and the Wassenaar Arrangement use a consensus process that makes decision-making difficult, and both lack a means to enforce compliance with members’ political commitments to regime principles.
We recommended that the Secretary of State establish a strategy to work with other regime members to enhance the effectiveness of the regimes by implementing a number of steps, including (1) adopting an automated information-sharing system in MTCR to facilitate more timely information exchanges, (2) sharing greater and more detailed information on approved exports of sensitive transfers to nonmember countries, (3) assessing alternative processes for reaching decisions, and (4) evaluating means for encouraging greater adherence to regime commitments. U.S. ICE and Commerce authorities have had difficulty identifying and tracking dual-use exports in transit that could be useful for cruise missile and UAV development because such exports have legitimate civilian uses. As a result, U.S. enforcement agencies have been limited in their ability to investigate suspected illicit exports of cruise missile and UAV dual-use items. Moreover, a gap in U.S. export control regulations could allow missile proliferators to acquire unlisted American cruise missile or UAV dual-use technology without violating the regulations. ICE officials said that it is difficult to conduct Customs enforcement investigations of possible export violations concerning certain cruise missile or UAV dual-use technologies. First, parts or components that are not on control lists are often similar to controlled parts or components, enabling exporters to circumvent the controls entirely, according to ICE officials. Because ICE inspectors are not engineers and shipping documents do not indicate the end product for the component being shipped, determining what the components can do is problematic. Second, countries seek smaller UAVs than those controlled. ICE officials said that buyers who cannot get advanced UAVs instead try to obtain model airplanes and model airplane parts, which might substitute for UAVs and their components.
Third, ICE officials noted that circumventing the export control law is easy because the U.S. government has to prove both the exporter’s knowledge of the law and the intent to violate it. As of October 2003, Customs had completed two investigations related to UAVs, and had nine others open, as well as one open case related to cruise missiles. The two cases related to UAVs, both involving suspected diversions of items to Pakistan, resulted in one finding of no violation and one guilty plea. As a result, two defendants received prison terms of 24 and 30 months, respectively, with 2 years of supervised release. Commerce officials also indicated that there are challenges to enforcing export controls on dual-use goods related to cruise missile or UAV development. They stated that some investigations were not pursued because the technical parameters of the items exported were below the export control thresholds for missile technology. Nonetheless, Commerce officials noted that exported items below these parameters could be changed after export by adding components to improve the technology. For example, software exported without a license could receive an upgrade card that would make it an MTCR-controlled item. As of October 2003, Commerce had completed 116 investigations related to missile proliferation, but not specifically to cruise missiles or UAVs. Furthermore, the Secretary of Commerce in 2003 identified other challenges for the enforcement of controls on dual-use goods related to missile development. First, it is difficult to detect and investigate cases under the “knowledge” standard set by the “catch-all” provision. Second, some countries do not yet have catch-all laws or have different standards for catch-all, which complicates law enforcement cooperation. Third, identifying illegal exports and re-exports of missile-related goods requires significant resources. 
Nonetheless, the Secretary stated that the United States has the ability to effectively enforce end-use and end-user controls on missile technology and that multilateral controls provide a strong framework for cooperative enforcement efforts overseas. A gap in U.S. export control regulations could allow missile proliferators or terrorists to acquire U.S. cruise missile or UAV dual-use technology without violating U.S. export control laws or regulations. The Export Administration Regulations (EAR) establish license requirements for items not listed in the regulations on the Commerce Control List, as well as for items that are listed. License requirements for items not listed are based on the exporter’s knowledge of the end user or end uses to which the item would be applied. For missile controls, an exporter may not export or re-export an item if the exporter knows that the item (1) is destined to or for a missile project listed in the regulations or (2) will be used in the design, development, production, or use of missiles in or by a country listed in the regulations, whether or not that use involves a listed project. However, this condition on exports does not apply to activities that are unrelated to the 12 projects or 20 countries listed in the regulation. This section of the regulations was intended to apply to national missile proliferation programs, according to Commerce officials, and not to nonstate actors such as certain terrorist organizations or individuals. Finally, this section of the regulations does not apply to exports of dual-use items for missiles with less than 300 kilometers range and 500 kilograms payload because the regulatory definition of a missile excludes missiles below MTCR range and payload thresholds. However, such missiles with lesser range or payload could be sufficient for terrorist purposes. The case of a New Zealand citizen obtaining unlisted dual-use items to develop a cruise missile illustrates this gap in U.S. export controls.
In June 2003, this individual reported that he purchased American components necessary to construct a cruise missile to illustrate how a terrorist could do so. Because the New Zealand citizen is not on a list of prohibited missile projects, terrorist countries, or terrorists, there is no specific export control requirement that an American exporter apply to the U.S. government for a review of the items before export, according to Commerce officials. According to Commerce licensing and enforcement officials, they have no legal recourse in this or similar cases, as there is no violation of U.S. export control law or regulations. The Commerce officials said that they would need to wait until the New Zealander used the weapon improperly before action under export control law or regulations could be taken. It would be the New Zealand government’s responsibility to address any illicit action resulting from such transfers of U.S. items, according to other Commerce officials. One department official stated that not all export control loopholes can be closed and that U.S. export controls cannot fix defects in other countries’ laws. Another Commerce official explained that current catch-all controls assume that terrorists would not attempt to acquire illicit arms in friendly countries, such as NATO allies. Commerce officials explained that proliferators seeking a rudimentary, rather than state-of-the-art, cruise missile would be able to construct such a weapon from components not listed on the Commerce Control List. For these items, Commerce must directly link the items to a WMD program to apply the catch-all controls; otherwise, no action can be taken, according to the officials. They remarked that catch-all controls were added to give licensing officers more flexibility in reviewing items. However, exporters adept in covering up direct links to a WMD program could continue to divert dual-use missile-related items, according to the Commerce officials.
The United States has other nonproliferation tools to address cruise missile and UAV proliferation: diplomacy, sanctions, and interdiction of illicit shipments of items. The United States used diplomacy in at least 14 suspected cases of cruise missile or UAV proliferation, taking diplomatic action to inform foreign governments of proposed or actual transfers of cruise missile or UAV items. U.S. efforts to forestall these transfers succeeded in about one-third of the cases: the U.S. government successfully halted transfers in five cases, did not successfully halt a transfer in two cases, did not know the results of its actions or action was still in process in six cases, and claimed partial success in one other case. Of nine cases involving MTCR items, six of the nine countries demarched were MTCR members and three were not. Under several U.S. laws that authorize the use of sanctions when the U.S. government determines that missile proliferation has occurred, the U.S. government used sanctions twice between 1996 and 2002 for violations related to exports of cruise missiles. In these two cases, the United States sanctioned a total of 18 entities in 5 countries. However, a State official did not know whether the entities ceased their proliferation activities as a result. Although the Acting Deputy Assistant Secretary of State for Nonproliferation identified interdiction as one tool used to address proliferation of cruise missile and UAV technology, U.S. and foreign government officials knew of few cases of governments’ interdicting such shipments. To date, the United States reported using interdiction once to stop illicit shipments of cruise missile or UAV technology. ICE officials referred to only one case of an interdiction of a propeller for a Predator UAV destined for Afghanistan.
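The distribution of diplomatic outcomes above can be tallied in a short sketch. This is only a restatement of the case counts already given in the text to confirm the "about one-third" success figure; the category labels and variable names are illustrative, not GAO's.

```python
# Outcomes of the 14 U.S. diplomatic actions on cruise missile/UAV
# transfers, as reported above (labels are illustrative).
outcomes = {
    "halted": 5,
    "not halted": 2,
    "unknown or in process": 6,
    "partial success": 1,
}

total = sum(outcomes.values())        # 14 cases in all
success_share = outcomes["halted"] / total

print(total)                          # 14
print(round(success_share, 2))        # 0.36, roughly one-third
```

The sketch counts only the five fully halted transfers as successes; including the one partial success would raise the share to about 43 percent.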
Commerce officials knew of no cases where Commerce had been involved in interdiction of cruise missile or UAV dual-use technology. Foreign governments reported no known cases of interdiction of suspect cruise missile or UAV technology exports. The U.S. government announced discussions in June 2003 with 11 foreign governments to increase the use of interdiction against all forms of WMD and missile proliferation. A meeting in Paris of these governments participating in the Proliferation Security Initiative announced a statement of interdiction principles on September 4, 2003. These principles include a commitment to undertake effective measures for interdicting the transfer or transport of WMD, delivery systems, and related materials to and from states and nonstate actors of proliferation concern; adopt streamlined procedures for rapid exchange of relevant information concerning suspected proliferation activity; and review and work to strengthen relevant national legal authorities and international law and frameworks to accomplish these objectives. Post-shipment verification (PSV) is a key end-use monitoring tool used by U.S. agencies to confirm that authorized recipients of U.S. technology both received transferred items and used them in accordance with conditions of the transfer. However, State and Commerce seldom conduct PSVs of transferred cruise missiles, UAVs, and related items; State’s program does not monitor compliance with conditions when checks are made. Furthermore, Defense officials were not aware of any end-use monitoring for more than 500 cruise missiles transferred through government-to-government channels, although officials are considering conducting such checks in the future. Similarly, other supplier countries place conditions on transfers, but few reported conducting end-use monitoring once items were exported.
The Arms Export Control Act, as amended in 1996, requires, to the extent practicable, that end-use monitoring programs provide reasonable assurance that recipients comply with the requirements imposed by the U.S. government in the use, transfer, and security of defense articles and services. In addition, monitoring programs are to provide assurances that defense articles and services are used for the purposes for which they are provided. Accordingly, under its monitoring effort, known as the Blue Lantern program, State conducts end-use monitoring of direct commercial sales of defense articles and services, including cruise missiles, UAVs, and related technology. According to Blue Lantern program guidance, a PSV is the only means available to verify compliance with license conditions once an item is exported. Specifically, a PSV is used (1) to confirm whether licensed defense goods or services exported from the United States actually have been received by the party named on the license and (2) to determine whether those goods have been or are being used in accordance with the provisions of that license. Despite these requirements, we found that State did not use PSVs to assess compliance with cruise missile or UAV licenses having conditions limiting how the item may be used. These licenses included items deemed significant by State regulations. Based on State licensing data, we identified 786 licenses for cruise missiles, UAVs, or related items from fiscal years 1998 through 2002. Of these, 480 (61 percent) were licenses with conditions, while 306 (39 percent) were licenses without conditions. These 786 licenses included one for a complete state-of-the-art Predator B UAV (see fig. 5), and 27 for supporting Predator technical data, defense services, and parts. The licenses also included 7 for supporting technical data, defense services, and parts for the highly advanced Global Hawk UAV.
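As a quick cross-check on the license counts above, a minimal sketch (variable names are illustrative) confirms that the two groups sum to 786 and that the reported percentages follow:

```python
# State licensing data for cruise missile/UAV items, FY 1998-2002,
# split by whether the license carried conditions.
with_conditions = 480
without_conditions = 306
total = with_conditions + without_conditions

print(total)                                    # 786 licenses
print(round(100 * with_conditions / total))     # 61 percent
print(round(100 * without_conditions / total))  # 39 percent
```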
We found that State did not conduct PSVs for any of the 480 licenses with conditions and conducted PSVs on only 4 of 306 licenses approved without conditions. Each license reviewed through the post-shipment checks involved transferred UAV-related components and equipment. Three of the licenses receiving checks resulted in unfavorable determinations because a company made inappropriate end-use declarations or the end user could not confirm that it had received or ordered the items. State added that it has many other sources of information besides PSV checks on the misuse and diversion of exported articles. These sources include intelligence reporting, law enforcement actions, embassy reporting, and disclosures of U.S. companies. A State licensing official stated that few post-shipment Blue Lantern checks have been conducted for cruise missiles, UAVs, and related items because many are destined for well-known end users in friendly countries. However, over fiscal years 1998 through 2002, 129 of the 786 licenses authorized the transfer of cruise missile and UAV-related items to countries such as Egypt, Israel, and India. These countries are not MTCR members, which indicates that they might pose a higher risk of diversion. In addition, over the last 4 years, State’s annual end-use monitoring reports to Congress recognized an increase in the incidence of West European-based intermediaries involved in suspicious transactions. State noted in its 2001 and 2002 reports that 23 and 26 percent, respectively, of unfavorable Blue Lantern checks for all munitions items involved possible transshipments through allied countries in Europe. In contrast to State’s guidance, State officials said that the Blue Lantern program was never intended to verify license condition provisions on the transfer of munitions such as cruise missile and UAV-related items.
Instead, State officials explained that the program seeks to make certain only that licensed items are being used at the proper destination and by the proper end user. A State official further said that the compliance office is not staffed to assess compliance with license conditions and has not been managed to accomplish such a task. In commenting on a draft of this report, State emphasized the importance of Blue Lantern pre-license checks in verifying controls over the end user and end use of exported items and said that we did not include such checks in our analysis. We reviewed the 786 cruise missile and UAV licenses to determine how many had received Blue Lantern pre-license checks, a possible mitigating factor reducing the need to conduct a PSV. We found that only 6 of the 786 licenses from fiscal years 1998 through 2002 that State provided us had been selected for pre-license checks. Of these, four received favorable results, one received an unfavorable result, and one had no action taken. Under the Arms Export Control Act, as amended in 1996, the Department of Defense is also required to monitor defense exports to verify that foreign entities use and control U.S. items in accordance with conditions. The amended law requires an end-use monitoring program for defense articles and services transferred through the Foreign Military Sales program. Monitoring programs, to the extent practicable, are required to provide reasonable assurances that defense articles and services are being used for the purposes for which they are provided. The Defense Security Cooperation Agency (DSCA) is the principal organization through which Defense carries out its security assistance responsibilities, including administering the Foreign Military Sales program.
Under this program, the United States transfers complete weapons systems, defense services, and related technology to eligible foreign governments and international organizations from Defense stocks or through Defense-managed contracts. Bilateral agreements contain the terms and conditions of the sale and serve as the equivalent of an export license issued by State or Commerce. From fiscal years 1998 through 2002, DSCA approved 37 agreements for the transfer of more than 500 cruise missiles and related items, as well as one transfer of UAV training software. The agreements authorized the transfer of Tomahawk land-attack cruise missiles, Standoff land-attack missiles, and Harpoon anti-ship cruise missiles, as well as supporting equipment such as launch tubes, training missiles, and spare parts. Approximately 30 percent of cruise missile transfers were destined for non-MTCR countries. Figure 6 shows the destinations of transferred cruise missiles. Defense’s end-use monitoring program, called Golden Sentry, has conducted no end-use checks related to cruise missile or UAV transfers, according to the program director. DSCA guidance states that government-to-government transfers of defense items, including cruise missiles, are to receive routine end-use monitoring. Under routine monitoring, Security Assistance Officers and/or military department representatives account for the end use of defense articles through personal observation in the course of other assigned duties. However, the program director stated that he was unaware of any end-use monitoring checks, routine or otherwise, for transferred U.S. cruise missiles over the period of our review. In addition, a past GAO report found problems with monitoring of defense items and recommended that DSCA issue specific guidance to field personnel on activities that need to be performed for the routine observation of defense items.
Nonetheless, Defense’s Golden Sentry monitoring program is not yet fully implemented, despite the 1996 legal requirement to create such an end-use monitoring program. DSCA issued program guidance in December 2002 that identified the specific responsibilities for new end-use monitoring activities. In addition, as of November 2003, DSCA was conducting visits to Foreign Military Sales recipient countries to determine the level of monitoring needed and was identifying weapons and technologies that may require more stringent end-use monitoring. The program director stated that he is considering adding cruise missiles and UAVs to a list of weapon systems that receive more comprehensive monitoring. The Export Administration Act, as amended, provides the Department of Commerce with the authority to enforce dual-use controls. Under the act, Commerce is authorized to conduct PSV visits of dual-use exports outside the United States. The Export Administration Regulations indicate that a transaction authorized under an export license may be further limited by conditions that appear on the license, including a condition that stipulates the need to conduct a PSV. Commerce can conduct a PSV by applying a condition to a license that requires U.S. mission staff residing in the recipient country to conduct a PSV, or it can send a safeguards verification team from Commerce headquarters to the country to conduct a PSV. Based on Commerce licensing data, we found that Commerce issued 2,490 dual-use licenses between fiscal years 1998 and 2002 for items that could be useful in developing cruise missiles or UAVs. Of these, Commerce selected 2 percent of the licenses, or 52 cases, for a PSV visit and completed visits to 1 percent of the licenses. Specifically, Commerce designated PSVs as a license condition for 28 licenses, and completed 5. Commerce designated 24 cases for safeguards verification team visits and completed all of them.
Of these 24 checks, 23 resulted in favorable determinations, while 1 was unfavorable. Commerce guidance for selecting PSVs and pre-license checks establishes criteria based on technologies and countries that require a higher priority for conducting PSVs and pre-license checks. The guidance identifies 19 specific missile technology categories from the Commerce Control List involving particularly sensitive commodities as choke points for the development of missiles and indicating a priority for PSV or pre-license selection. For example, items such as software and source code to improve inertial navigation systems, as well as lightweight turbojet and turbofan engines, are included as choke point missile technologies. In addition, the guidance identifies 20 countries of missile diversion concern that may also warrant a pre-license check or PSV. The guidance further identifies 12 specific countries or destinations that have been used repeatedly, and are likely to be used again, as conduits for diversions of sensitive dual-use commodities or technology. The guidance states that other factors might mitigate the need to select a license for a PSV. We applied Commerce’s guidelines to the 2,490 cruise missile or UAV-related licenses and identified 20 that met the criteria to receive a PSV or a pre-license check. However, Commerce selected only 2 of these 20 licenses for a PSV. All 20 licenses were for choke point missile technology useful for cruise missile or UAV development. Some of these licenses were for countries of missile diversion concern, such as India, while others were for transshipment countries, such as Singapore. Figure 7 shows the destinations for items in the 20 licenses and the percentage of licenses for each destination. Of the two PSVs, one resulted in a favorable determination, while the other had not been completed at the time of our review.
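Commerce's selection and completion rates reported above can be reconstructed as follows. This is a sketch of the arithmetic only: the 52 selected cases are split into the 28 license-condition PSVs and 24 safeguards-team PSVs described earlier, and the variable names are mine.

```python
# Commerce dual-use licenses (FY 1998-2002) relevant to cruise
# missiles/UAVs, with PSV selection and completion counts from above.
licenses = 2490
selected = 28 + 24        # 52 cases selected for a PSV
completed = 5 + 24        # 29 PSVs actually completed

print(round(100 * selected / licenses))   # 2 percent selected
print(round(100 * completed / licenses))  # 1 percent completed
```

The same arithmetic applied to the 20 licenses meeting the guidance criteria (2 of 20 selected) gives a 10 percent selection rate even among the highest-priority cases.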
Even though the 20 licenses met the guidance criteria, few had been selected for PSVs. A Commerce official explained that a license meeting the selection criteria might nonetheless not be selected because other factors can mitigate the need for a PSV. However, Commerce officials could not explain which factors lessened the need for a PSV for the remaining 18 licenses.

Other supplier countries have established export control laws and regulations, which also place conditions on transfers and can authorize agencies to conduct end-use monitoring of sensitive items. For example, government officials from the United Kingdom, France, and Italy stated that their respective governments might add conditions for cruise missile and UAV-related items, as well as for other exports. While national export laws authorize end-use monitoring, none of the foreign government officials reported any PSV visits for cruise missile or UAV-related items. The national export control systems of other cruise missile and UAV supplier countries that responded to our request for information apply controls differently from the United States for missile-related transfers. Government officials in France, Italy, and the United Kingdom stated that their respective governments generally do not verify conditions on cruise missile and UAV transfers and conduct few post-shipment verification visits of such exports. The South African government was the only additional supplier country responding to a written request for information that reported it regularly requires and conducts PSVs on cruise missile and UAV transfers.

Officials in the United Kingdom stated that the U.K. government seeks to ensure compliance with license conditions, but it has no institutional system for conducting PSVs for British exports.
Although defense attachés watch for misuse of exported items, the officials did not know whether any PSV visits had been conducted for transfers of British cruise missiles or UAVs. A U.K. government official said that embassy officials may occasionally conduct PSVs on other British equipment. For example, a PSV may be undertaken to confirm that British tanks are not being used by Israel to conduct operations in Gaza. However, the official added that such actions are neither required nor routine.

Italian government officials stated that all armament transfers include conditions that the end user cannot retransfer to other countries or users without prior permission from the government. Additionally, some export licenses require an import delivery certificate as a condition to certify that an item has been imported. For those licenses, the government of Italy allows firms fixed periods of time to provide the required documents. If the recipient does not send a required delivery certificate, then a PSV would be conducted to verify whether the end user received the items.

According to French officials, France does not conduct explicit PSV visits. Instead, its officials observe end-use conditions during technical or military-to-military contacts. Specifically, French officials stated that when missiles or other highly technological goods are sold, contact between the government and the recipient provides opportunities to ensure the disposition of the exported item.

According to South African government officials, requirements for PSV visits may be applied to licenses for cruise missile and UAV-related technology. Furthermore, South Africa conducts regular end-use verifications of selected end users in non-MTCR countries and may initiate other ad hoc visits as required by the South African control authorities.
In addition, government-to-government agreements require end-use certificates containing delivery verification information and include authorizations for end-use verification visits, as well as non-retransfer, non-modification, and non-reproduction clauses. South African government officials also stated that each clause must be fully verified and authenticated.

The continued proliferation of cruise missiles and UAVs poses a growing threat to the United States, its forces overseas, and its allies. Most countries already possess cruise missiles, UAVs, or related technology, and many are expected to develop or obtain more sophisticated systems in the future. The dual-use nature of many of the components of cruise missiles and UAVs also raises the prospect that terrorists could develop rudimentary systems that would pose additional security threats to the United States. Since this technology is already widely available throughout the world, the United States works in concert with other countries through multilateral export control regimes to better control the sale of cruise missiles, UAVs, and related technologies. Although these regimes are limited in what they can accomplish, the United States could achieve additional success in this area by adopting our previous recommendations to improve the regimes' effectiveness.

U.S. export controls may not be sufficient to prevent cruise missile and UAV proliferation and ensure compliance with license conditions. Because some key dual-use components can be acquired without an export license, it is difficult for the export control system to limit or even track their use. Moreover, current U.S. export controls may not prevent proliferation by nonstate actors, such as certain terrorists, who operate in countries that are not currently restricted under missile proliferation regulations. Furthermore, the U.S.
government seldom uses its end-use monitoring programs to verify compliance with the conditions placed on items that could be used to develop cruise missiles or UAVs. Thus, the U.S. government does not have sufficient information to know whether recipients of these exports are effectively safeguarding equipment and technology in ways that protect U.S. national security and nonproliferation interests. The challenges to U.S. nonproliferation efforts in this area, coupled with the absence of end-use monitoring programs by several foreign governments for their exports of cruise missiles or UAVs, raise questions about how well nonproliferation tools are keeping pace with the changing threat.

A gap in dual-use export control regulations could enable individuals in most countries of the world to legally obtain, without any U.S. government review, U.S. dual-use items not on the Commerce Control List to help make a cruise missile or UAV. Consequently, we recommend that the Secretary of Commerce assess and report to the Committee on Government Reform on the adequacy of the Export Administration Regulations' catch-all provision to address missile proliferation by nonstate actors. This assessment should indicate ways the provision might be modified.

Because the departments have conducted so few PSV visits to monitor compliance with U.S. government export conditions on transfers of cruise missiles, UAVs, and related dual-use technology, the extent of the compliance problem is unknown. While we recognize that there is no established or required number of PSV visits that should be completed, the small number completed does not allow the United States to determine the nature and extent of compliance with these conditions. Thus, we recommend that the Secretaries of State, Commerce, and Defense, as a first step, each complete a comprehensive assessment of cruise missile, UAV, and related dual-use technology transfers to determine whether U.S.
exporters and foreign end users are complying with the conditions on the transfers. As part of the assessment, the departments should also conduct additional PSV visits on a sample of cruise missile and UAV licenses. This assessment would give the departments critical information that would allow them to better balance the potential proliferation risks of various technologies against the available resources for conducting future PSV visits.

We provided a draft of this report to the Secretaries of Commerce, Defense, and State for their review and comment. We received written comments from Commerce, Defense, and State that are reprinted in appendixes II, III, and IV. DOD and State also provided us with technical comments, which we incorporated as appropriate.

Commerce did not agree that the limited scope of the current catch-all provision should be called a gap in U.S. regulations but agreed to review whether the existing catch-all provision sufficiently protects U.S. national security interests. Commerce said that it believes that the export control system effectively controls the items of greatest significance for cruise missiles and UAVs, which are on the Commerce Control List. It stated that our report is ambiguous as to whether the gap relates to items listed on the control list or to items that are not listed because they are not considered as sensitive for missile proliferation reasons. Commerce also stated that we should explain the basis for suggesting that noncontrolled items are sensitive and should be placed on the MTCR control list, if that is our position. Our references to the gap in the regulations refer to dual-use items that are not listed on the Commerce Control List, and we have made changes to the draft to clarify this point. Furthermore, we are not suggesting that unlisted items should be added to the MTCR control list to deal with the issue we identified in the New Zealand example.
As we recommend, the vehicle to address this gap would be an expansion of Commerce's catch-all provision whereby license reviews would be required when the exporter knows or has reason to know that the items might be used by nonstate actors for missile proliferation purposes. In commenting on our draft report's recommendation to require an export license review for any item that an exporter knows or has reason to know would be used to develop or design a cruise missile or UAV of any range, Commerce agreed to review whether the existing provision sufficiently protects U.S. national security interests. We have modified our recommendation to reflect the need for an assessment of the catch-all provision's scope and possible ways to modify the provision to address the gap.

State disagreed with our findings and conclusions concerning its end-use monitoring program. State said that our report was misleading and inaccurate in suggesting that State does not monitor exports to verify compliance with export authorizations, and it noted that we did not discuss the importance of pre-license checks in verifying end-use and end-user restrictions. State said that the most important restrictions placed on export authorizations involve controls over the end user and the end use of the article being exported. State also said that it uses many tools in the export licensing process to verify these restrictions and that the Blue Lantern program's pre- and post-license checks are only one of these tools. State said that pre-license checks verify the bona fides of end users, as well as the receipt and appropriate end use of defense articles and services, including UAV- and missile-related technologies. It also questioned why our analysis did not include pre-license checks as part of State's efforts to ensure compliance with arms export regulations.
We agree that pre-license checks are critical to ensure that licenses are issued to legitimate, reliable entities and for specified programs or end uses in accordance with national security and foreign policy goals. We also agree that they augment controls and checks used during the licensing process to determine the legitimacy of the parties involved and the appropriate end use of the export prior to license approval. However, such checks cannot confirm the appropriate end user or end use of an item after it has been shipped and received by the recipient.

Regarding other tools in the export licensing process to verify conditions, we asked State for additional information that would indicate what other actions, besides PSV checks, State took. State officials said that some license conditions required follow-up action—such as forms or reports—either by State officials, exporters, or end users. We asked for examples of such follow-up action related to licenses for cruise missiles, UAVs, or related technology. A State official said that, after querying the relevant licensing teams, State officials did not identify any licenses requiring follow-up action and that there is no system, formal or otherwise, that would document follow-up actions that had been taken.

In response to State's comments, we added information on Blue Lantern pre-license checks to the report, information that further demonstrates the limited monitoring that State conducts on cruise missile and UAV-related transfers. We reviewed the 786 cruise missile and UAV licenses to determine how many had received Blue Lantern pre-license checks, a possible mitigating factor reducing the need to conduct a PSV. These included 129 licenses to non-MTCR countries, such as Egypt, Israel, and India, which present a higher risk of misuse or diversion. We found that only 6 of the 786 licenses that State provided us had been selected for pre-license checks.
Of these, 4 received favorable results, 1 received an unfavorable result, and 1 had no action taken.

Commerce and DOD partially agreed with our second recommendation to complete a comprehensive assessment of cruise missile, UAV, and related dual-use technology transfers. However, both departments raised concerns over the resources needed to conduct such a comprehensive assessment and sought further definition of the scope of the transfers to be assessed as the basis for interagency action and additional resources for monitoring. DOD suggested that a random sample of cases could achieve results equivalent to those of a comprehensive assessment. It agreed to conduct a greater number of PSVs in order to (1) provide the U.S. government with a high level of confidence over time that exporters and end users are complying with export license conditions and (2) allow the U.S. government to determine whether adequate resources are devoted to license compliance issues. We clarified our recommendation so that a comprehensive assessment could include a sampling methodology, so long as it provides each agency with a high level of confidence that the sample selected accurately demonstrates the nature and extent of compliance with conditions.

State disagreed with our recommendation and said that the absence of evidence in our report of misuse or diversion does not warrant such an extensive effort. Nonetheless, State said that in conjunction with steps taken to improve the targeting of Blue Lantern checks and increase the number conducted annually, it would pay special attention to the need for additional pre- and post-shipment checks for cruise missile- and UAV-related technologies.
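As a back-of-the-envelope check, the coverage rates cited in this report can be recomputed from the counts given; the counts come from the report, while the arithmetic below is our illustration:

```python
# Recompute monitoring-coverage rates from the counts cited in the report.
commerce_licenses = 2490
commerce_selected = 28 + 24    # PSVs via license condition + via safeguards teams = 52
commerce_completed = 5 + 24    # completed under each mechanism = 29

state_licenses = 786
state_prelicense_checks = 6    # 4 favorable, 1 unfavorable, 1 no action taken

print(f"Commerce selected for PSV: {commerce_selected / commerce_licenses:.1%}")   # about 2 percent
print(f"Commerce completed PSVs:   {commerce_completed / commerce_licenses:.1%}")  # about 1 percent
print(f"State pre-license checks:  {state_prelicense_checks / state_licenses:.1%}")
```

The selection rate rounds to the report's "2 percent" and the completion rate to "about 1 percent"; State's pre-license check rate works out to under 1 percent of its licenses.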
Since State conducted no PSV checks for any of the 480 cruise missile or UAV-related licenses with conditions that we reviewed, and only 6 pre-license checks for the 786 licenses, we strongly reaffirm our recommendation that State assess its monitoring of cruise missile and UAV licenses.

As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to appropriate congressional committees and to the Secretaries of Commerce, Defense, and State. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8979 or at [email protected]. A GAO contact and staff acknowledgments are listed in appendix V.

To determine the nature and extent of cruise missile and UAV proliferation, we reviewed documents and studies of the Departments of Commerce, Defense, Homeland Security, and State, the intelligence community, and nonproliferation and export controls specialists in academia and the defense industry. These included the Unclassified Report to Congress on the Acquisition of Technology Relating to Weapons of Mass Destruction and Advanced Conventional Munitions, 1 January through 30 June 2002; the Director of Central Intelligence worldwide threat briefing, The Worldwide Threat in 2003: Evolving Dangers in a Complex World, 11 February 2003; and the DOD UAV Roadmap for 2000 to 2025 and 2002 to 2025. We also reviewed databases of the UAV Forum and UVONLINE, as well as plenary, working group, and information exchange documents of the MTCR.
We met with officials of the Departments of Commerce, Defense, Homeland Security, and State, the intelligence community, and nonproliferation and export controls specialists in academia in Washington, D.C., and officials of the National Air Intelligence Center in Dayton, Ohio. We also met with representatives of the private companies Adroit Systems Inc., EDO, Boeing, MBDA, and Lockheed Martin, and with the industry associations National Defense Industrial Association (NDIA) and the Aerospace Industries Association in Washington, D.C. In addition, we received a written response from NDIA to a list of detailed questions. We also met with representatives of the Defense Manufacturing Association, SBAC, Goodrich, BAE Systems, and MBDA in London, United Kingdom, and of the European Unmanned Vehicle Systems Association (EURO UVS) in Paris, France. In addition, we attended two conferences of the Association for Unmanned Vehicle Systems International (AUVSI) in Washington, D.C., and Baltimore, Maryland.

To examine how the U.S. and other governments have addressed cruise missile and UAV proliferation risks, we analyzed the documents and studies noted above and met with officials and representatives of the previously mentioned governments and with nonproliferation and export controls specialists in academia. We also reviewed relevant documents and data to determine how the U.S. and other governments have used export controls, diplomacy, interdiction, and other policy tools. Based on this information, we conducted analyses to determine how each tool had been employed and with what results.

To evaluate the end-use controls used by the U.S. and other governments, we obtained documents and met with officials from the Departments of Commerce, Defense, and State. We also reviewed arms transfer data from DOD and export licensing data from State and Commerce databases to assess what cruise missile and UAV technology the United States exported, how the U.S.
government selected licenses to receive post-shipment monitoring, and how it applied end-use post-shipment controls. Moreover, we reviewed applicable U.S. export control laws and regulations. We performed qualitative and quantitative analyses of selected export licenses to determine the extent and frequency of applied license conditions and end-use checks related to cruise missile and UAV transfers.

To determine the completeness and accuracy of the Defense and State data sets, we reviewed related documentation, interviewed knowledgeable agency officials, and reviewed internal controls. The State database system is not designed to identify all cruise missile or UAV commodities transferred. Therefore, the team developed a list of search terms based on consultations with State officials concerning which terms would likely capture all transfers involving cruise missiles, UAVs, and related technology. State provided the criteria we used to determine what State-licensed exports were cruise missile or UAV-related. State officials queried their licensing database to search for specific category codes and 12 keywords. The resulting report that State provided to us contained 400 pages with 1,300 entries. While we have high confidence that our analysis allowed us to capture most of the relevant cases, it is possible that a few relevant State cases might have been missed.

We also assessed the reliability of the Commerce data by performing electronic testing of required data elements and by interviewing agency officials knowledgeable about the data. We determined that the data elements were sufficiently reliable for the purposes of this engagement.

We also interviewed officials of the governments of France, Italy, and the United Kingdom, and met with representatives of the point of contact for the MTCR in Paris, France. In addition, we received written responses to questions we provided to the governments of Israel, Japan, South Africa, and Switzerland.
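The keyword screen described above can be illustrated with a toy version. The keywords and database entries below are placeholders (State's actual query combined specific category codes with 12 keywords), and, as with any substring match, such a screen can both over- and under-select, which is why a few relevant cases might have been missed:

```python
# Illustrative keyword screen for pulling cruise missile/UAV-related entries
# from a licensing database dump. Keywords and entries are hypothetical.
KEYWORDS = ["cruise missile", "uav", "unmanned aerial"]

def is_relevant(description):
    """Case-insensitive substring match against the keyword list."""
    text = description.lower()
    return any(kw in text for kw in KEYWORDS)

entries = [
    "Turbofan engine spares for UAV program",
    "Night vision goggles",
    "Guidance software, cruise missile test range",
]
relevant = [e for e in entries if is_relevant(e)]
```

A screen like this errs toward recall over precision: analysts would still review each hit, and transfers described without any of the chosen keywords would escape the filter entirely.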
Russia and Canada provided responses too late to be included in this report. We requested the same information from the government of Germany but received no response.

Our ability to address two objectives was impaired by a Department of State delay in assisting our efforts to collect responses to written questions from foreign governments. State agreed to facilitate this effort 4 months after our initial request for assistance and only after we reduced the number of countries to receive our questions from 16 to 7 and reduced the number of questions. Given this delay, governments had less time to respond to our questions than we had originally planned. Thus, we could not fully assess how other governments address the proliferation risks of cruise missiles, UAVs, and related technology and apply end-use controls on their exports of these items.

We performed our work from October 2002 to November 2003 in accordance with generally accepted government auditing standards.

The following are GAO's comments on the Department of Commerce letter dated November 14, 2003.

1. Commerce said that our report does not distinguish among the varying threats posed by different types of cruise missiles and UAVs. Our report does make distinctions between the threats posed by anti-ship cruise missiles to U.S. naval forces, land-attack cruise missiles to the U.S. homeland and forces deployed overseas, and UAVs as potential delivery systems for chemical and biological weapons. As our report stated, the potential for terrorist groups to produce or acquire rudimentary cruise missiles or small UAVs using unlisted dual-use items is an emerging threat that needs to be better addressed.

2. Commerce said that agreement was finalized at the September 2003 MTCR Plenary to add a new category of UAVs to the MTCR control list. We have added information to the report to take this into account.

3. Commerce said that our report does not discuss action taken at the September 2003 MTCR Plenary to include a catch-all provision in the regime guidelines that could strengthen MTCR disciplines and address some of the concerns of our report. While helpful, the practical impact of this change is negligible. Nearly all MTCR members currently have catch-all controls in their national export control authority. Furthermore, as Commerce pointed out, the U.S. catch-all controls have limited scope and do not address the type of situation presented in the New Zealand example.

4. We believe that our explanation was clear as to how we applied Commerce's guidance to select licenses that met Commerce's listed criteria for receiving a PSV. As clearly noted in our report, we first started with the 2,490 dual-use licenses with commodity categories that Commerce had identified as relevant to cruise missile and UAV items. Second, we selected those licenses having only commodity categories identified in Commerce guidance as chokepoint technologies. Third, we matched these licenses to a recipient country identified as a country of missile proliferation concern or as a conduit country. This analysis resulted in 20 licenses. When we found that 2 of the 20 licenses we identified had been selected for a PSV, we asked Commerce officials to explain which of the other variables (information about the parties to the transaction, proposed end use, previous licensing history, etc.) mitigated the need for a PSV. As we reported, Commerce officials could not explain which factors lessened the need for a PSV for the remaining 18 licenses.

5. Commerce stated that there is ambiguity in our report as to whether the gap relates to items listed on the Commerce Control List or to items that are not listed because they are not considered as sensitive for missile proliferation reasons. Our references to the gap in the regulations refer to dual-use items that are not listed on the Commerce Control List.
We have made changes to the draft to clarify this point.

6. Commerce stated that if it is GAO's position that noncontrolled items are sensitive and should be placed on the MTCR control list, then we should explain the basis for this position. We are not suggesting that unlisted items should be added to the MTCR control list to deal with the issue we identified in the New Zealand example. As indicated in our recommendation, the vehicle to address this gap would be an expansion of Commerce's catch-all provision whereby license reviews would be required when the exporter knows or has reason to know that items not on the Commerce Control List might be used by nonstate actors for missile proliferation purposes.

7. Commerce states that the United States and MTCR members effectively control the items of greatest significance for cruise missiles and UAVs that pose concerns for U.S. national security. We agree that the MTCR covers the items of greatest significance for the concerns posed by national missile programs. However, Commerce needs to recognize the potential for nonstate actors, particularly terrorists, to legally acquire unlisted items for use in missile proliferation.

8. Commerce acknowledges that its enforcement authority is limited concerning items not listed on the Commerce Control List and entities not named on the terrorist lists. Nonetheless, it asserts that it could take specific actions if it learned that U.S. items had been shipped to proliferators or terrorists that were developing weapons with these components. However, it is not clear how this information would come to Commerce's attention because current regulations do not require, or inform, an exporter to seek a license review in this type of situation.

9. Commerce agrees to consider whether the catch-all provision sufficiently protects U.S. national security interests. We agree that such a review, in consultation with the Technical Advisory Committees and the interagency community, would be an important first step in identifying the sufficiency of the provision to cover nonstate actors and ways to modify it to address the gap. Consequently, we have modified our recommendation accordingly.

10. The gap that we identified in our report is in the catch-all provisions. We are not suggesting that additional items be added to the control lists. Currently, the catch-all regulations require an exporter to submit a license application when he knows or has reason to know that an unlisted item would be used for missile proliferation purposes. However, this provision applies only to specific missile proliferation projects or countries identified on a narrow list in the regulations. The New Zealand citizen was not covered under the catch-all provisions.

The following are GAO's comments on the Department of State letter dated November 19, 2003.

1. State said that it has conducted over 1,200 Blue Lantern checks on exports of all types and developed derogatory information in almost 200 cases over the past 3 years. However, these checks and cases involved both pre-license checks and PSVs and included more than cruise missile or UAV items, according to State's most recent end-use monitoring report. For example, 428 checks initiated by State in fiscal year 2002—of which 50 checks resulted in unfavorable determinations—included firearms and ammunition, electronics and communications equipment, aircraft spare parts, bombs, spare parts for tanks and military vehicles, and night vision equipment.

2. State also said that the Blue Lantern program (1) effectively verifies the end use and end users of export applications when questions arise, (2) has deterred diversions, (3) helped disrupt illicit supply networks, (4) helped State make informed licensing decisions, and (5) ensured exporter compliance with law and regulations.
State added that the historical context of assessing the parties to the export weighs into every licensing decision and its importance cannot be discounted. We agree that the Blue Lantern program can have a positive impact when State has the information needed to allow it to act. This statement affirms our point that it is important to obtain such information through improved monitoring, particularly PSVs. However, given the limited number of either pre- or post-shipment Blue Lantern checks focused to date on cruise missile and UAV-related transfers, we question whether sufficient information has been obtained in this area.

3. State said that it was unclear why our report's analysis excluded pre-license checks as part of State's efforts to ensure compliance with arms export regulations. As noted above, we did ask for such information and learned that State conducted few pre-license checks for its cruise missile and UAV transfers. While we agree with State that pre-license checks are critical to providing assurances that licenses are issued to legitimate, reliable entities and for specified programs or end uses, they obviously cannot verify that exports are received by the legitimate end user or used in accordance with the terms of the license after shipment. We agree that seeking and receiving assurances prior to licensing and shipment is a critical function that might mitigate the need for a PSV check in many cases. However, State implies that pre-license and other actions of the licensing process mitigated the need to conduct PSV checks for all but 4 of its 786 licenses for cruise missile, UAV, or related technology. These included 129 licenses to non-MTCR countries, such as Egypt, Israel, and India.

4. State said that our report did not articulate the criteria we used to determine what exports are UAV-related. State provided the criteria we used to determine what State-licensed exports were cruise missile or UAV-related.
State officials queried their licensing database to search for specific category codes and 12 keywords. The resulting report that State provided to us contained 400 pages with 1,300 entries. We have added this information to our Scope and Methodology section to clarify that State provided us with these criteria, the data generated from applying the criteria, and information on Blue Lantern pre-license and PSV checks for these licenses.

In addition to the individual named above, Jeffrey D. Phillips, Stephen M. Lord, Claude Adrien, W. William Russell IV, Lynn Cothern, and Richard Seldin made key contributions to this report.

Cruise missiles and unmanned aerial vehicles (UAV) pose a growing threat to U.S. national security interests as accurate, inexpensive delivery systems for conventional, chemical, and biological weapons. GAO assessed (1) the tools the U.S. and foreign governments use to address proliferation risks posed by the sale of these items and (2) efforts to verify the end use of exported cruise missiles, UAVs, and related technology.

The growing threat to U.S. national security of cruise missile and UAV proliferation is challenging the tools the United States has traditionally used. Multilateral export control regimes have expanded their lists of controlled technologies, but key countries of concern are not members. U.S. export control authorities find it increasingly difficult to limit or track unlisted dual-use items that can be acquired without an export license. Moreover, a gap in U.S. export control authority enables American companies to export certain dual-use items to recipients that are not associated with missile projects or countries listed in the regulations, even if the exporter knows the items might be used to develop cruise missiles or UAVs. American companies have in fact legally exported dual-use items with no U.S. government review to a New Zealand resident who bought the items to build a cruise missile.

The U.S. government seldom uses its end-use monitoring programs to verify compliance with conditions placed on the use of cruise missile, UAV, or related technology exports. For example, State officials do not monitor exports to verify compliance with license conditions on missiles or other items, despite legal and regulatory requirements to do so. Defense has not used its end-use monitoring program initiated in 2002 to check the compliance of users of more than 500 cruise missiles exported between fiscal years 1998 and 2002. Commerce conducted visits to assess the end use of items for about 1 percent of the 2,490 missile-related licenses we reviewed. Thus, the U.S. government cannot be confident that recipients are effectively safeguarding equipment in ways that protect U.S. national security and nonproliferation interests.